Roald Amundsen's contributions to our knowledge of the magnetic fields of the Earth and the Sun
Abstract. Roald Amundsen (1872–1928) was known as one of the premier polar explorers in the golden age of polar exploration. His accomplishments clearly document that he contributed to knowledge in fields as diverse as ethnography, meteorology and geophysics. In this paper we concentrate on his studies of the Earth's magnetic field. With his unique observations at the polar station Gjøahavn (geographic coordinates 68°37'10'' N, 95°53'25'' W), Amundsen was the first to demonstrate, beyond doubt, that the north magnetic dip pole does not have a permanent location, but steadily moves its position in a regular manner. In addition, his carefully calibrated measurements at high latitudes were the first and, for decades, the only observations of the Earth's magnetic field in the polar regions, until modern polar observatories were established. After a short review of earlier measurements of the geomagnetic field, we tabulate the facts regarding his measurements at the observatories and the eight field stations associated with the Gjøa expedition. The quality of his magnetic observations is shown to be equal to that of late 20th century observations by subjecting them to analytical techniques based on the newly discovered relationship between the diurnal variation of high latitude magnetic observations and the direction of the horizontal component of the interplanetary magnetic field (IMF By). Indeed, the observations at Gjøahavn offer a glimpse of the character of the solar wind 50 yr before it was known to exist. Our motivation for this paper is to illuminate the contributions of Amundsen as a scientist and to celebrate his attainment of the South Pole as an explorer 100 yr ago.
Amundsen contributed significantly to the emergence of Norway as a dominant nation in polar meteorology, geophysics, cosmic physics and even ethnography. His entrepreneurial style of exploration extended to his scientific efforts as well. Here we will concentrate on his contributions to the study of geomagnetism, mainly during the 1903-1906 expedition through the Northwest Passage (see, e.g., Huntford, 1987).
Amundsen was fascinated by the invisible magnetic field, and learned early on that a compass was important for orientation and navigation, particularly out at sea. During the Belgica expedition, 1897-1899, on which he sailed as first mate, he followed with interest the magnetic observations that were carried out. In his diary on Monday 15 August 1898, he wrote that he and two other crew members were planning to locate the magnetic pole in Antarctica (Kløver, 2009). Unfortunately, that trip was not carried out.
Amundsen started preparation for the Northwest Passage expedition by proposing to locate the north magnetic dip pole (NMDP) (Fig. 1). His background knowledge in geomagnetism was limited. Shortly after he was back from the Belgica expedition, he started a more serious education in geomagnetism, both theoretical and observational, before he bought the Gjøa vessel in 1901. The first scientist to whom Amundsen mentioned his plan to locate the NMDP was the Deputy Director of the Norwegian Meteorological Institute in Oslo, Dr. Axel S. Steen, who at once became interested (Steen et al., 1933).
Armed with an introduction from Dr. Nansen, he soon made his way to the Director of the Deutsche Seewarte near Hamburg, where his goal of geomagnetic studies and of locating the NMDP was met with enthusiastic support by Director Georg von Neumayer. Amundsen, together with the Gjøa crew member Gustaf Wiik, spent several months in 1902 and 1903 at the Deutsche Seewarte and at the Magnetic Observatory in Potsdam, near Berlin. Professor Georg von Neumayer and Professor Adolf Schmidt helped considerably, both in planning the magnetic observations and in ordering the best instruments (cf. Schröder et al., 2010). However, Dr. Steen was his main scientific guide. Mr. Wiik (1887-1906) was an unusually dedicated worker, a fact clearly demonstrated by inspection of all the magnetic recordings from the expedition. Extreme temperatures and stormy weather never stopped him from his daily routine of magnetic observations. The goals of the Gjøa expedition were the subject of news reports in many national and international newspapers, as the headline from the Los Angeles Times in Fig. 1 illustrates.
Geomagnetism - a brief introduction
Magnetic fields are a fascinating subject, and the most mysterious of the force fields we experience every day. Only by long-term measurements of the Earth's magnetic field at many observatories can we acquire knowledge about the internal field characteristics. In addition, the changes in the field intensity and direction at individual stations reveal facts about the electric currents in the upper atmosphere, the local geology and the solar activity responsible for the external field variations at that station. These parameters are introduced into models of the internal (geomagnetic) field that are calculated to provide geomagnetic latitude and longitude coordinates for the Earth. Thus, the model geomagnetic field at any point on Earth (including the pole) will not correspond exactly to observations of the field at that point, but it will place the station in the context of the total internal geomagnetic field.
The earliest recorded evidence of measurement of the Earth's magnetic field is connected with the direction-finding capability of the compass, and is dated to the eleventh century in Chinese history (cf. the encyclopaedist Shon-Kau, AD 1030-1093), while other sources claim that the Chinese had knowledge of the compass two thousand years before Christ. In European literature, the earliest mention of the compass and its application to navigation appeared in two works by Alexander Neckam, a monk at St. Albans (AD 1157-1217). It is mentioned that the "mariners used that means to find their course when the sky was cloudy". The directive property of the magnet is a reliable direction finder in dark and cloudy weather; the geomagnetic field has therefore been linked to navigation for centuries. Petrus Peregrinus, a French engineer, described the first real experiments in Europe with lodestones in 1269 (Brown, 1949). By the fourteenth century, many sailing ships carried compasses. That the direction to magnetic north, relative to geographic north, differs over most of the globe had also been known for a long time. The difference is called magnetic declination. A more detailed history can be found in the paper "Follow the needle: seeking the magnetic poles" by Good (1991; see also Silverman and Smith, 1994). Chapter XXVI of Chapman and Bartels' (1940) monumental work on geomagnetism includes an early history of geomagnetism.
More than 400 yr ago, it was demonstrated that the Earth itself is a giant magnet (Gilbert, 1600, 1958). The Earth's magnetic field is also called the geomagnetic field. To a first approximation the geomagnetic field has a dipole form, with a north and a south pole located near, but not aligned with, the south and the north geographic poles, respectively (see Fig. 2). Due to the strong dipolar character, the shielding effect against charged cosmic and solar wind particles is weakest in the polar regions because of the open field lines. There are variations in the main field, resulting in movements of the poles, polar reversals, and changes in the strength of the field. These usually occur over periods of time longer than a year and are called secular variations. At any single location on Earth, variations in observations of the main field occur on shorter time scales and are due to local ionospheric currents, the local geology and solar activity.
Locating the north magnetic dip pole (NMDP) before 1900
Before 1900, changes in the position of the north magnetic pole were thought to be important for new sailing routes, particularly from Europe to Asia. Therefore, its location had raised public interest, because many people thought (and still do) that compasses point to the pole, despite evidence that the needle responds only to the local magnetic field. Modern models or representations of the geomagnetic field are, in contrast, constructed from measurement of the magnetic field at many points on the Earth.
Because each model or representation of the field has its pole at a different location, there is still confusion over the meaning and location of the magnetic pole. However, the only pole that can be directly measured is the dip pole, called the NMDP. This is the point on the Earth where the horizontal magnetic component is zero and the vertical component is maximum; thus it is the spot where a magnetized needle would stand vertical to the Earth's surface, i.e. the inclination is 90°. The search for the NMDP began in earnest around 1800 with the British Royal Navy's campaign to discover the Northwest Passage. Because the NMDP is located where the climate is extreme and far from where people live, the task of locating its position was difficult. James Clark Ross (cf. Commander John Ross's 1835 monograph and the lecture at The Royal Society in December 1833; Ross, 1834) reported his magnetic observations during the Victory expedition on the west coast of the Boothia Peninsula (Fig. 3). Measurements of the NMDP were made with a dip circle instrument, which was a sort of vertical compass. James Ross measured an inclination of 89°59′ with his dip circle on 1 June 1831. For all practical purposes, he had found the NMDP. The geographic coordinates for the pole were later revised by Nippoldt (1929) to 70°05′ N and 96°46′ W.
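For reference, the magnetic elements used throughout (and defined in the caption of Fig. 2) are related in the standard way:

\[ H=\sqrt{X^{2}+Y^{2}},\qquad D=\arctan\frac{Y}{X},\qquad I=\arctan\frac{Z}{H},\qquad F=\sqrt{H^{2}+Z^{2}} . \]

At the dip pole H → 0, so the inclination I → 90°; this is precisely the condition that dip-circle observers such as Ross, and later Amundsen, were trying to satisfy in the field.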
Ross suggested that the magnetic pole had an area of about 50 miles in diameter for which there was no apparent horizontal force (1 nautical mile = 1.852 km). Ross's dip circle readings were given for noon, 03:00 p.m., 05:00 p.m. and 07:00 p.m., i.e. four observations. The final value given is the mean of all his readings. The results were variable, which he could not explain. With a simple dip circle instrument, it is difficult to determine the position of the pole accurately. James Ross did not claim to have stood on the very spot of the magnetic pole, but thought he was within a mile or so of it. Unfortunately, he did not know that even on a relatively quiet day the pole he tried so hard to locate can move by about 5 km in one hour.
Table 1. The instruments provided to the Expedition for the various magnetic measurements. The Reference Name is how each instrument is referred to in the data lists. Units 6, 7 and 8 were recording instruments (see Steen et al., 1933).
Roald Amundsen's measurements of the magnetic field at Gjøahavn
On 1 March 1903 the Gjøa Expedition sailed out of Oslo harbor. The preparation and care with which Amundsen approached the polar magnetic studies for the Gjøa Expedition were unprecedented. Because most previous explorers were constantly moving, there were only point measurements of the field at different places, which gave no indication of the local variations with time. Amundsen followed the advice of experts and established a permanent station approximately 200 km from the assumed position of the pole and made continuous observations for almost two years. His studies were, no doubt, sparked by the location of the Magnetic Pole near the route of the Northwest Passage. The main purpose was to explore the region and "to determine the present geographical coordinates of the magnetic pole-point".
The magnetic instruments and accessories are listed in Table 1. Altogether, three magnetic instruments were chosen with great care in Germany, two were purchased in England, and three were borrowed from the Norwegian Meteorological Office and from Professor Kr. Birkeland at the University of Oslo. The most important instruments were the standard variographs for the continuous recording of the field. In addition, they had three instruments, inclinatoria, for determining the inclination angle, and two instruments for recording declination. Some instruments were especially constructed for field observations close to the pole, where the H-component is small while Z is large (cf. Steen et al., 1933).
The survey to locate the magnetic pole is in principle similar to a magnetic survey of any other region. However, the climate at the remote location of the magnetic pole imposed several practical constraints. The characteristic properties of the Earth's main field, and its changes, are difficult to measure from one station, and impossible to map accurately by observing over a short period. This is because there are many different contributions to the main geomagnetic field and they are variable.
The permanent magnetic station (Fig. 4) was built by Amundsen and his crew at Petersen Bay, on King William Land, and was called Gjøahavn. Gjøahavn was well inside the polar cap and estimated to be within 200 km of the NMDP, as shown on the map in Fig. 5. The observations went on day and night, without interruption, for more than a year and a half. In addition, absolute calibrations were made regularly (Fig. 6); about 360 absolute measurements of the magnetic elements were carried out. Figure 7 shows the first routine magnetograms from the three variometers for H, D, and Z.
Eight magnetic field stations were operated: four in the neighbourhood of Gjøahavn, numbered 1, 2, 3 and 4, and four farther away at Boothia Felix, where Ross had reported the location of the pole, called I, II, III and IV. Their locations (coordinates) are shown in Table 2. Short series of observations were carried out at the field stations in 1904 and 1905. The average values for the different magnetic elements, as well as the estimated distances to the magnetic pole (d) from the stations, are also shown in Table 2 (Wasserfall, 1939). Amundsen's interest in geomagnetism dominated the scientific efforts during the three years spent on the way through the Northwest Passage. Two over-winterings with magnetic observatories made the data from the expedition unique. On 13 August 1905 they sailed out of Gjøahavn. They stopped in Sitka, Alaska, to intercalibrate the magnetic instruments with the magnetometer established there. There Amundsen met Harry Edmonds, who worked for Louis A. Bauer, a driving force in Washington, DC, for the improvement of magnetic observations worldwide. Bauer was interested in the magnetic data and offered to reduce and prepare them for publication. Amundsen, however, felt obligated to have this work carried out in Norway (Silverman and Smith, 1994).
Around 1900, geomagnetism was still an emerging science. The expedition was carried out during sunspot cycle 14. The monthly average number of sunspots was near 30 in 1903, but increased to 60 in 1905. Thus, the expedition took place during moderately disturbed solar activity. Even if it is difficult today to evaluate the magnetic observations fully, the recordings indicate high quality.
Amundsen himself did not carry out any detailed analyses of the observations. His few results are published in The Northwest Passage (1908a) and in his lecture at The Royal Geographical Society on 11 February 1907 (Amundsen, 1908b). He appointed a committee consisting of Aksel S. Steen, Deputy Director at the Norwegian Meteorological Institute, as chairman, with K. F. Wasserfall, at the Magnetic Byrå in Bergen, and the meteorologist N. Russeltvedt as the other two members. Wasserfall in particular had training and experience in geomagnetic studies. The editing of the magnetic observations was not completed before 1932 (Steen et al., 1933). The most complete examination was carried out by Wasserfall in his 1938 and 1939 papers (Wasserfall, 1938, 1939).
An inspection of the magnetic records shows: 1. The average Gjøahavn magnetic values for the 19 months were H = 760 ± 100 nT, D = 5.0° ± 3° and I = 89° 17′ 18′′ ± 6′. Changes in direction and intensity were large and variable on time scales from minutes to months.
2. A yearly variation with a significant maximum intensity during the summer months is noted (cf. Fig. 8). Other regular periodic variations cannot be seen directly, except, during some of the months, a periodic variation of ∼28 days.
3. The largest sunspot maximum appeared between 22 August and 2 September 1904, "but it did not generate any intense magnetic disturbances". Wasserfall (1939) concluded that the sunspot curve and the magnetic observations for 1904 do not seem to show any general similarity.
The regular diurnal variation in the H-component, called the Sq variation, is marked on all quiet days. Wasserfall (1927, 1939) also mentioned a period of about three days (∼80 h), both in the H and in the sunspot data, but those results are not shown here.
The periodicity of polar magnetic storms in relation to the solar rotation is of considerable interest. Based on many observations, mainly at low and middle latitudes, a significant period of nearly 27.3 days, the same as the rotation period of the Sun, has been found (Chapman and Bartels, 1940). However, Amundsen's magnetic data from Gjøahavn show a period of 28.3 days (Wasserfall, 1927, and Fig. 9).
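A period of this kind can be recovered from the daily mean values with a simple spectral scan. The following Python sketch illustrates the idea on synthetic data standing in for the Gjøahavn daily H series; the amplitude, noise level and record length are assumptions chosen for illustration, not Wasserfall's actual procedure.

```python
import numpy as np
from scipy.signal import lombscargle

# Synthetic stand-in for a year of daily mean H values (nT) at a polar-cap
# station: a 28.3-day oscillation riding on noise (amplitudes are assumed).
rng = np.random.default_rng(0)
days = np.arange(365.0)
h_daily = 760.0 + 60.0 * np.sin(2 * np.pi * days / 28.3) + 40.0 * rng.standard_normal(days.size)

# Scan candidate periods between 10 and 40 days and report the strongest one.
periods = np.linspace(10.0, 40.0, 2000)
angular_freqs = 2 * np.pi / periods
power = lombscargle(days, h_daily - h_daily.mean(), angular_freqs)
print(f"strongest periodicity: {periods[np.argmax(power)]:.1f} days")
```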
For comparison, Professor Birkeland, based on magnetic observations from four high latitude stations during the winter 1902-1903 and from the first International Polar Year 1882-1883, found a period close to 29 days. "Regarding the connection between sun-spots and magnetic storms it seems improbable that the sun-spots can be the direct cause of magnetic storms", Birkeland concluded (Egeland and Leer, 1973). This periodicity is generally diminished during high solar activity.
Regarding a 14-day periodicity, Birkeland's terrella (as a cathode) simulations of the Sun are of interest (Egeland and Burke, 2005). When the charge was sufficiently strong, the rays had a remarkable tendency to concentrate about two spots diametrically opposite each other, 14 days apart. Both the 28- and the 14-day periodicities are fairly well marked in the 1904 data, but the 14-day periodicity is not detected every month (Wasserfall, 1927, and Fig. 10).
The longitudinal distribution of sunspots had a tendency to concentrate along two meridians separated by about 180 degrees on the Sun's surface. This peculiarity was one of the most important points from the Gjøahavn results. Thus, the Sun's rotation could cause periodicities of both 14 and 28 days in the Earth's polar atmosphere and in terrestrial magnetism. What Wasserfall could not have known was that the relationship to multiples of the solar rotation period was not connected to sunspots but to high-speed streams of solar wind particles from regions of the solar corona called coronal holes. These were not discovered until X-ray pictures of the Sun from spacecraft showed them as dark patches on the Sun, stable for as many as six to ten solar rotations, producing magnetic effects on Earth at regular multiples of the solar rotation (Zhang et al., 2005).

5 The position of the pole

The location of the north magnetic dip pole was an important goal of the Amundsen expedition. In the age of sail, when the compass was one of the most important navigational instruments, it was regarded as a legitimate research objective, even though it is of little interest to contemporary scientific studies of the Earth's magnetic field. "Unfortunately, the place on the Earth where the magnetic field is vertical, is neither the magnetic pole nor a geophysically important location", Campbell (2003) concludes in response to recent attempts to locate the NMDP. For Amundsen it was important to find out whether the pole had moved since the Victory expedition. For weeks during 1904, Amundsen and his co-workers hunted for the Pole, but could not pinpoint its position. A few times they believed they were at its new position, but when they carried out a double check the following day, the dip needle swung far off, indicating that the dip pole was now located farther away. They concluded that the Pole had moved considerably farther northward between 1831 and 1904. Amundsen discovered that the NMDP "has not an immovable and stationary situation, but, in all probability, is in continual movement" (Amundsen, 1908a). This was a significant result of Amundsen's scientific studies of geomagnetism. He was nevertheless disappointed, writing in his diary: "Our journey was not a brilliant success". He thought he had failed to attain one of his goals.
The geographic coordinates of the Pole as listed by Amundsen in 1904 were 70°30′ N and 96°30′ W (Amundsen, 1908b). In preparation for the expedition, Prof. Schmidt had advised Amundsen to locate the permanent observatory at some distance from the assumed position of the pole. Using the average Gjøahavn values and the distance from Gjøahavn to the estimated pole location (about 205 km), the equations were solved for a, so that the one-hour averaged values of the variation of H and D could be substituted, giving the variation in km of the location of the pole. The variation of H yielded the north-south changes in the location of the pole and the variation of D yielded changes in the east-west direction (Graarud and Russeltvedt, 1926).
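A minimal sketch of the geometry behind this conversion is given below. It assumes that the horizontal intensity grows roughly linearly with distance from the dip pole (our simplification, not necessarily the exact formulation of Graarud and Russeltvedt), and takes the mean values H ≈ 760 nT and d ≈ 205 km from the text.

```python
import numpy as np

# Values taken from the text; the linear growth of H with distance from the
# dip pole (H ~ a * d) is the simplifying assumption behind this sketch.
H_MEAN_NT = 760.0      # mean horizontal intensity at Gjoahavn (nT)
D_POLE_KM = 205.0      # assumed distance from Gjoahavn to the dip pole (km)
A_NT_PER_KM = H_MEAN_NT / D_POLE_KM   # ~3.7 nT per km

def pole_shift_km(delta_h_nt, delta_d_deg):
    """Convert hourly departures of H (nT) and D (degrees) from their means
    into approximate north-south and east-west displacements of the dip pole."""
    north_south = delta_h_nt / A_NT_PER_KM
    east_west = D_POLE_KM * np.radians(delta_d_deg)
    return north_south, east_west

# Example: a +30 nT and +2 degree hourly departure maps to roughly
# 8 km north-south and 7 km east-west of apparent pole motion.
print(pole_shift_km(30.0, 2.0))
```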
The result of the calculation of the diurnal variation in pole location is shown in Fig. 11, where the NMDP undergoes a regular, diurnal drift caused mainly by ionospheric current systems, created and driven by sunlight. These variations are larger in summer than during the winter months, but the variations are biggest during very disturbed days. Thus, when we today talk about the location of the pole, we are referring to an average position. The pole wanders daily in a roughly elliptical path around this average position, and may frequently be as much as 25 km away from this position during disturbed conditions. Figure 12 shows the rather smaller annual variation of the location of the NMDP.
6 Solar wind, interplanetary magnetic field and magnetic sector structures

Interplanetary space, not long ago believed to be empty of matter, is filled with electrons and ions of solar origin. These streaming particles carry with them the solar magnetic field and are collectively called the solar wind. The presence of the solar wind, including the interplanetary magnetic field (IMF), was verified as soon as in-situ measurements were carried out (Wilcox and Ness, 1965). Even if its amplitude is only of the order of a few nT, the IMF is an important field which significantly influences disturbances on the Earth. The solar wind together with the IMF is the connecting link between solar activity and geophysical disturbances such as large variations in the Earth's magnetic field and auroras. The 28- and 14-day variations observed are caused by solar particles and are thus of special interest in relation to Amundsen's field measurements. Mainly due to the regular average 27.3-day rotation of the Sun, the IMF is spiral-shaped. Near the Earth, the field makes an angle of about 45° with the radial direction (Egeland et al., 1973). Solar observations accumulated over time indicated that the polarity of the field is organized in a regular pattern. The interplanetary sector structures were discovered during the descending phase of sunspot cycle 19, with four stable sectors (Wilcox and Ness, 1965). The field was found to point predominantly outward from or inward toward the Sun for about a week at a time and then change in a relatively short time to the opposite polarity. This pattern was found to repeat, with only minor changes, for several rotations of the Sun.
Both two- and four-sector patterns, and more complicated ones, have been shown to be present at different epochs. The two-sector pattern is consistent with the dipole field assumption. The four-sector pattern implies a more complicated solar magnetic field with a wavy neutral sheet (see Sect. 7). The result is shorter intervals of unchanged polarity, but with the same basic period of about 27 days (Egeland et al., 1973; Kivelson and Russell, 1995).
7 Solar wind, interplanetary magnetic field and magnetic sector structures 100 yr ago, estimated from Amundsen's magnetic observations

The two main objectives of subjecting Amundsen's Gjøahavn data to modern analysis are: firstly, to show that they are equal in quality and accuracy to those of the modern observatories of the late 20th century; secondly, to learn about the Sun and solar wind activity several decades before polar-region magnetic observatories were established. What follows are the Gjøahavn data showing the recently discovered relationship of the high latitude variations of the local magnetic field to changes in the direction of IMF By and the solar wind.
An objective method of inferring the polarity of the IMF By component from high latitude magnetic observations was developed by Svalgaard (1972). Named the Svalgaard-Mansurov effect (Wilcox, 1972), this discovery led to the development of a new method to infer the IMF direction using the H-component observed at Godhavn (69°15′ N, 53°32′ W) after 1926 (Svalgaard, 1975). Because Godhavn and Gjøahavn are at roughly the same magnetic latitude, but separated by ∼3 h in longitude, we can subject the Gjøahavn data to the same analysis that was carried out on the Godhavn data. Svalgaard plotted the variation of the one-hour averaged H-component from the monthly mean for 1950 at Godhavn. Figure 13b shows the diurnal variations of the one-hour averaged Gjøahavn H-component for June 1904 from the monthly mean, for (1) all data (middle curve), (2) days when a broad positive perturbation is observed between the hours of 12:00 and 18:00 UT (upper curve: IMF away), and (3) days when a broad negative perturbation in the field intensity is observed between the hours of 12:00 and 18:00 UT (lower curve: IMF toward). Figure 13a shows similar data, but from Godhavn for 1950 (Svalgaard, 1975). Taken together, Fig. 13a and b show the nature of the current systems that affect the diurnal curve of the magnetic variation at stations in the polar cap such as these. Notice that the curves of all of the data (middle curves) have the same sinusoidal character reported by Wasserfall (1938). The difference between the two stations is that the sine waves are out of phase. The reason is that the sunward-directed ionospheric currents flowing across the polar cap are fixed relative to the Sun, so that the two stations pass under the currents at different Universal Times, resulting in maxima and minima at different times.
It is apparent that we may use this effect on the Gjøahavn H-component to infer the direction of the IMF By component as a function of time, and so ascertain the nature of the IMF during 1904 (Svalgaard, 1975; Sandholt et al., 2002, p. 54).
To do this, we averaged the Gjøahavn H variation for the period 12:00 to 18:00 UT for each day of May, June and July 1904, and plotted it as a function of time, superposing the three Carrington solar rotations. The result, shown in Fig. 14, indicates that the IMF By component changed direction four times during each solar rotation. This picture of the interaction of the magnetosphere with the solar wind is consistent with similar conditions today. Note that the period of the Gjøahavn measurements, 1903-1906, occurred just as solar activity peaked in solar cycle 14. Indeed, the magnetospheric storm that occurred on 31 October 1903 was among the largest storms ever recorded (Cliver and Svalgaard, 2004). Figure 15a shows the relationship of the solar magnetic field, carried outward from the Sun by the solar wind, to the Earth's orbital plane. Because the solar spin pole is tilted with respect to the ecliptic plane, the solar magnetic field seen at Earth changes direction twice each solar rotation when the solar magnetic field equator is undistorted and coincides roughly with the solar spin equator.
For solar activity levels indicated by the sunspot numbers for the years 1903-1906, we would expect to see the solar magnetic equator distorted into a large sine wave on the Sun. The resulting waves in the solar magnetic equatorial plane introduce more interceptions of the Earth by the neutral sheet with each solar rotation. Figure 15b shows the neutral sheet forming a "ballerina skirt" that results when the solar magnetic equator becomes significantly distorted, resulting in four sector changes per solar rotation. Because each wave results in two sector changes seen at Earth, the number of sector changes is usually even, although changes in the solar magnetic field can occur during one Carrington rotation (CR), leading to an odd number of IMF By changes during one rotation. It appears, however, that the four crossings during each of CR 677-679 seen in the Gjøahavn data (Fig. 14) are consistent with the solar wind that we see today, 100 yr later.
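The day-by-day classification described above can be written compactly. The Python sketch below assumes hourly H values carried on a UTC timestamp index and uses the sign of the 12:00-18:00 UT departure from the monthly mean as the polarity proxy; the threshold choice and variable names are ours, not Svalgaard's published criteria.

```python
import numpy as np
import pandas as pd

CARRINGTON_DAYS = 27.2753  # mean synodic rotation period of the Sun

def infer_imf_polarity(hourly_h: pd.Series) -> pd.DataFrame:
    """Svalgaard-Mansurov-style polarity inference sketch.

    hourly_h: hourly H values (nT) indexed by a UTC DatetimeIndex.
    Returns one row per day with the mean 12:00-18:00 UT departure from the
    monthly mean, an inferred polarity (+1 away from the Sun, -1 toward),
    and the phase within the Carrington rotation for superposing rotations.
    """
    monthly_mean = hourly_h.groupby(
        [hourly_h.index.year, hourly_h.index.month]).transform("mean")
    departure = hourly_h - monthly_mean
    block = departure[(departure.index.hour >= 12) & (departure.index.hour < 18)]
    daily = block.resample("1D").mean().dropna().to_frame("dH_12_18_UT")
    daily["polarity"] = np.sign(daily["dH_12_18_UT"])
    elapsed_days = (daily.index - daily.index[0]) / pd.Timedelta(days=1)
    daily["rotation_phase_days"] = np.mod(elapsed_days, CARRINGTON_DAYS)
    return daily
```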
Effect of the IMF on the diurnal variation of the NMDP position
One of the most remarkable aspects of the high latitude magnetograms is the strikingly constant, large diurnal variation of all the magnetic elements. Notice that it generally dominated the traces in the Gjøahavn data, even during periods of auroral activity (Fig. 16). This led to the relatively smooth ellipse found in the calculated diurnal variation of the pole position from data for the entire year (Fig. 11). Wasserfall plotted the July 1904 diurnal variation of the location of the NMDP (Fig. 17a) and found a skewed distribution compared to the regular ellipse shown in Fig. 11. Our plot of the diurnal variation of the location of the NMDP (Fig. 17b) shows the same shape as Fig. 17a for all of the data from June 1904. When we separated the days with IMF toward and away from the Sun, we found that the main reason for the skewed distribution of the summer observations, relative to those from the entire year (Fig. 11), was the overwhelming effect of the IMF during that time.
Conclusions
We have shown that the solar wind magnitude, direction and pattern of occurrence were similar to what we observe directly today. It is a testament to the scientific abilities of Amundsen himself and his crew that they designed and carried out the first continuous magnetic variation recording inside the polar cap, for a period of 19 months, under almost impossible conditions. In addition, the data set is so well calibrated and corrected that, besides describing the ordinary geomagnetic disturbances at a high latitude station, we can infer the interplanetary magnetic field structure and direction for a time 60 yr before its discovery.
Edited by: K. Schlegel Reviewed by: S. Silverman and M. Korte
Figure 1. Amundsen's Gjøa expedition received much attention in the international media (from The Los Angeles Times, 1906).
Figure 2. The figure illustrates a simple model of the Earth's magnetic field in a plane. It is assumed that the field is due to a magnetic dipole located in the centre of the Earth. The dotted line, which makes an angle of ∼11° with the geographic axis, is called the dipole axis, and where it intersects the surface of the Earth, the field is hypothetically vertical. The point where the horizontal component is zero, the vertical component is a maximum and the inclination, or dip, is 90 degrees, is called the North Magnetic Dip Pole (NMDP). The latitude is measured from the equator. The magnetic elements, the horizontal component H, the vertical component Z, the north component X, the east component Y, the declination D, and the inclination I are marked. The field has the direction indicated by the arrows. Thus, the Earth's magnetic field is positive, pointing downward, in the Northern Hemisphere, and the dipole pole in the Northern Hemisphere is a south pole. Because opposites attract, the south dipole pole attracts the north pole of a compass.
Figure 3. A drawing showing James Clark Ross and co-workers locating the North Magnetic Dip Pole in 1831 (from Ross, 1835).
Figure 4. The magnetometer hut at Gjøahavn. The hut was built from shipping cases and non-ferrous metals; it was covered to keep out light and cold. Amundsen suffered CO poisoning to his heart muscle when he remained too long inside, tending the magnetometer (from Steen et al., 1933).
Figure 5. Map showing the location of Gjøahavn relative to the estimated position of the NMDP.
Figure 6. Amundsen making an absolute magnetic measurement during the Maud expedition through the Northeast Passage (from Amundsen's archives at The FRAM Museum).
Figure 7. The first routine recordings of the magnetic components H, D, and Z at Gjøahavn (from Steen et al., 1930). Imagine Amundsen's surprise when the newly installed Gjøahavn magnetometers recorded one of the largest magnetic storms on record on 31 October 1903, the first day of the routine measurements (Cliver and Svalgaard, 2007).
Figure 8. The average daily intensity in nT (vertical scale) of the horizontal component at Gjøahavn from 1 November 1903 to 1 June 1905. Notice that the variation from day to day is irregular and large, i.e. about 20 %. Furthermore, the seasonal variation shows a marked minimum in the winter months. The oscillations in the daily variations are largest during the summer months. February, March and April 1905 are significantly more disturbed than the same period in the preceding year (from Wasserfall, 1938).
Figure 9. The mean duration of the oscillations in the H-component at Gjøahavn for the year 1904 is 28.3 days. The vertical scale is in nT, while the months, given by their number, are listed above. This curve shows that the 28.3-day period is at a maximum during the equinoxes. From October to December this period is very weak (from Wasserfall, 1927).
Figure 10. This figure illustrates the 14-day period (actually 14.3 days) in the horizontal component for the year 1904. This period is most marked from March to September (from Wasserfall, 1927).
The coordinates reported by Amundsen in 1904 (70°30′ N, 96°30′ W; Amundsen, 1908b) were revised by Wasserfall in 1939 to 70°38′ N and 96°42′ W. These values probably represent the best obtainable. Accepting them, the average velocity of the north magnetic pole from 1831 to 1904 has been a couple of km per year in a northward direction. The next determination of the pole position was carried out by Canadian government scientists shortly after World War II. Changes in the pole position since 1590 are discussed by Korte and Mandea (2008).
Figure 11. The nearly elliptical curve shows the average diurnal variation of the NMDP, observed from Gjøahavn, in 1904. During quiet conditions, the NMDP drifted 10-15 km, while during the summer the drift was typically twice these values (from Graarud and Russeltvedt, 1926).
Figure 12. The average annual location of the NMDP observed from Gjøahavn for 1904 (from Graarud and Russeltvedt, 1926).
Figure 13a. Diurnal variation of the horizontal component at Godhavn during 1950. The curves labeled A and C are the average variations on days classified as being of type A and of type C, respectively. The largest difference between the two types occurs in the interval shown by the dashed lines (from Svalgaard, 1975).
Figure 13b. Diurnal variations of the H-component from the monthly mean at Gjøahavn for June 1904. The blue, red and yellow curves are, respectively, the average variation of the H-component on days with away-from-the-Sun sector polarity (a broad positive perturbation), the average value for the whole month, and the average variation on days with toward-the-Sun polarity (a broad negative perturbation).
Figure 15b. Similar to Fig. 15a, but showing only the neutral sheet to illustrate the distortion introduced by the departure of the solar magnetic equator from the solar spin equator, resulting in four sector-structure crossings per solar rotation, as in CR 677-679 observed from Gjøahavn (figure: W. Heil, personal communication, 2011).
Figure 17a. Diurnal variation of the location of the NMDP for July 1904, as plotted by Wasserfall.
Table 2. The nine magnetic observation sites, with geographic coordinates, average values for the magnetic elements, declination (D), horizontal component (H) and inclination (I), are listed. The last column shows the estimated distance to the NMDP. Beechey Island was a station on the route to Gjøahavn.
"Physics",
"Geology"
] |
Assessment of Minimal Residual Disease in Ewing Sarcoma
Advances in molecular pathology now allow for identification of rare tumor cells in cancer patients. Identification of this minimal residual disease is particularly relevant for Ewing sarcoma, given the potential for recurrence even after complete remission is achieved. Using RT-PCR to detect specific tumor-associated fusion transcripts, otherwise occult tumor cells are found in blood or bone marrow in 20–30% of Ewing sarcoma patients, and their presence is associated with inferior outcomes. Although RT-PCR has excellent sensitivity and specificity for identifying tumor cells, technical challenges may limit its widespread applicability. The use of flow cytometry to identify tumor-specific antigens is a recently described method that may circumvent these difficulties. In this manuscript, we compare the advantages and drawbacks of these approaches, present data on a third method using fluorescent in situ hybridization, and discuss issues affecting the further development of these strategies.
Introduction
Ewing sarcoma (ES) is the second most common bone tumor in children and young adults. The majority of ES patients have tumors localized to one bone, with no metastases identified using the conventional assessment techniques of imaging and pathologic examination of the bone marrow. However, treatment with aggressive surgical removal alone cures only 10% of these patients [1], while the remaining patients develop fatal metastatic disease presumably arising from otherwise undetected tumor cells in tissues such as lungs or bone marrow that were present at least transiently in the blood. Chemotherapy is used to eradicate this minimal residual disease (MRD), although clinicians have no routine method for knowing the extent of MRD which remains in any given patient. If a reliable, sensitive, and widely applicable assay could be developed for MRD detection in ES, there would be several obvious applications. First, since the finding of MRD in patients with localized tumors is associated with worse outcome [2], these patients could be identified early on for more appropriate high-risk therapies. Second, MRD testing could be used to assess patients for ongoing response to chemotherapy [3], particularly after surgical removal of tumor when imaging no longer can be used to monitor changes in tumor characteristics. Third, identification of the return of low levels of disease may allow for early identification of relapsing patients who have already completed therapy [4-6]. Finally, MRD assessment may be beneficial with clinical decision making in patients with equivocal imaging findings, such as nonspecific lung nodules identified on computed tomography scans [7], as it may support the diagnosis of relapsed disease.
Therefore, given all these potential benefits, investigators have tried for the past two decades to identify methods that are not just sensitive and specific, but widely applicable and feasible in multicenter trials. Ewing sarcoma is well suited for such investigation, given its characteristic genetic and immunophenotypic features which allow for distinction of tumor cells from normal hematopoietic cells. In this manuscript, we review the successes and challenges of the two most common methodologies employed for MRD detection in ES. In addition, we present preliminary data using a third molecular assay and describe an ongoing clinical trial designed to directly compare these assessment strategies.

Table 1. Important RT-PCR studies of MRD detection in Ewing sarcoma (study [reference]; number of patients; key findings):
Peter [9]; 36 pts; 31% had MRD in either blood or marrow at diagnosis.
Pfeiderer [13]; 16 pts; 38% had MRD in BM, while only 6% had CTC at diagnosis.
West [12]; 28 pts; 25% of newly diagnosed localized pts had MRD in blood or marrow, compared to 50% with relapsed/metastatic disease.
Fagnou [11]; 67 pts; 26% of patients had MRD in blood at diagnosis, but this was not correlated with clinical features or outcome; 33% of patients had BM MRD, with worse outcome.
De Alava [6]; 28 pts; MRD in blood and/or marrow developed prior to clinical progression.
Zoubek [14]; 35 pts; 7/23 (30%) had BM MRD at diagnosis, but this did not predict relapse.
Thomson [3]; 9 pts; survival was correlated with speed of clearance of MRD in blood and BM.
Sumerauer [10]; 22 pts; MRD in marrow found in 31% of localized and 50% of metastatic pts.
Merino [15]; 12 pts; quantitative RT-PCR can be used to measure efficacy of stem cell purging methods.
RT-PCR for MRD Detection
The majority of studies to assess MRD in ES patients have focused on the use of reverse transcriptase-polymerase chain reaction (RT-PCR) to identify tumor-specific fusion transcripts. This method is based on the fact that approximately 85% of ES tumors are characterized by the EWS-FLI1 translocation [8]. Tumors not containing FLI1 translocations usually have other partners for EWS, including ERG, ETV1, E1AF, and FEV. RT-PCR is attractive for use in MRD detection because of its excellent specificity as well as its sensitivity, determined in spiking experiments to be one tumor cell in one million mononuclear blood cells [9]. In the largest RT-PCR study to date, 20% of 107 ES patients who were considered to have localized tumors using conventional assessments did indeed have evidence of micrometastatic disease in the peripheral blood [2]. Interestingly, 19% of such patients also had MRD identified in the bone marrow, although there was incomplete overlap between those with MRD at either site. Importantly, patients with MRD at either site had worse event-free survival compared to other patients with localized disease, thus showing the potential utility of MRD assessment as a prognostic indicator in a prospective study. Multiple other smaller RT-PCR trials have confirmed that up to one-fourth of newly diagnosed patients with apparently localized tumors have MRD detectable by RT-PCR in blood or marrow [2, 5, 9-14]. Other important applications demonstrated with RT-PCR testing include the ability to assess the efficacy of induction chemotherapy regimens [9] as well as novel purging techniques for peripheral blood stem cell grafts [15]. In addition, several studies have demonstrated that MRD testing can identify relapse in patients before it is clinically apparent by conventional imaging studies [5, 6, 16]. Table 1 summarizes some of the important RT-PCR studies done to date, and these trials provide confirmation of the potential clinical relevance of MRD testing in this disease. Limitations of RT-PCR include the potential for contamination causing false positive results as well as degradation of mRNA resulting in false negative results [3]. The latter may be particularly important for multicenter trials in which same-day testing is not available. Another potential drawback of RT-PCR is that prior knowledge of the patient's specific translocation is needed so that the appropriate primer sets can be used (EWS-FLI1 versus EWS-ERG versus other). Without this knowledge, interpretation of negative test results is difficult. Historically, RT-PCR had often been performed on initial tumor biopsies as a confirmatory test to support the diagnosis of ES. However, because of technical limitations such as sample size, tissue viability, or absence of frozen tissue, RT-PCR of biopsy material is not always feasible. For example, in the largest multicenter study of RT-PCR to detect MRD in ES, only 117 (68%) of 172 patients had adequate tissue allowing identification of translocations by RT-PCR [2]. It is likely that this number may continue to decrease given the now widespread use of fluorescent in situ hybridization (FISH) probes to identify translocations involving EWS [17], which can readily be done on paraffin-embedded tissue.
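The quoted one-in-a-million sensitivity is also bounded by sampling statistics: whatever the assay chemistry, a tumor cell must first be present in the aliquot that is tested. The following back-of-the-envelope Python sketch (our illustration, with assumed cell counts) makes that limit explicit.

```python
import math

def detection_probability(tumor_cells_per_million, cells_assayed):
    """Probability that at least one tumor cell is present in the assayed
    aliquot, assuming tumor cells are Poisson-distributed among the
    mononuclear cells (an idealization)."""
    expected_tumor_cells = tumor_cells_per_million * cells_assayed / 1_000_000
    return 1.0 - math.exp(-expected_tumor_cells)

# At a true burden of 1 tumor cell per 10^6 mononuclear cells, assaying 10^6
# cells gives only ~63% odds that a tumor cell is even in the tube; assaying
# 3 x 10^6 cells raises this to ~95%.
print(detection_probability(1, 1_000_000))   # ~0.63
print(detection_probability(1, 3_000_000))   # ~0.95
```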
One way to circumvent the requirement for knowledge of the specific translocation partners is to instead assess for individual genes universally expressed in tumor cells but not hematopoietic cells. Cheung and colleagues used gene expression data to identify 3 such genes meeting these criteria: STEAP1, CCND1, and NKX2-2 [18]. The expression of at least one of these 3 genes in histologically negative bone marrow samples from 35 Ewing sarcoma patients was associated with progression-free and overall survival. Additional follow-up studies using this approach have not yet been reported.

Figure 1: Sequential gating to identify Ewing sarcoma cells. Cultured A673 cells undergo sequential gating to identify Ewing sarcoma cells. Mononuclear cells are separated from blood or marrow by density gradient centrifugation, stained with monoclonal antibodies (CD99 PE, CD45 FITC, CD14 PerCP, CD34 APC), exposed to anti-PE magnetic microbeads to enrich CD99+ cells using MACS technology (Miltenyi Biotec, Cologne, Germany), and then analyzed by flow cytometry. Analysis is performed using a sequential gating strategy (gates 1-5) to purify the CD99 bright positive, CD45 negative tumor cells, as shown in this example of A673 Ewing sarcoma culture cells.
Flow Cytometry for MRD Detection
Another strategy that obviates the need to know the specific translocation is to use multiparameter flow cytometry to identify surface expression of tumor cell antigens. For example, CD99 is universally present on ES tumor cells, and immunostaining for this protein has routinely been used to confirm the diagnosis of ES in primary tumor samples [19]. However, since CD99 is also expressed on some blood cells, negative selection for the leukocyte common antigen CD45 is used to exclude hematopoietic cells. Further, to reduce the low-level background positivity seen in normal blood and marrow samples, an additional sequential gating strategy is used with a viability dye to remove dead cells, CD14 to exclude monocytes, and CD34 to remove early hematopoietic progenitors which may not yet express CD45. This strategy was used in the first published report of flow cytometry for MRD detection in ES by Dubois et al. [20]. They showed that residual ES cells from two different cell lines can reliably be detected in spiking experiments of peripheral blood and bone marrow at the level of 1 tumor cell in 500,000 or 1 tumor cell in 10,000 mononuclear cells, respectively. We have instituted a clinical trial which uses the sequential gating strategy employed by Dubois, shown in Figure 1. In addition, we have modified the assay by incorporating magnetic microbeads to enrich the tumor cell concentration in the residual sample. Variations on this enrichment approach have been described previously [21] and can increase the confidence with which low numbers of tumor cells can be identified. Figure 2 demonstrates how identification can be improved through enrichment of CD99+ cells. Notably, when enrichment techniques are used, sensitivity in spiking experiments is similar to or better than that achieved with RT-PCR, with identification of tumor cells at the range of 1 in one million or more blood mononuclear cells. Ash and colleagues have recently reported an alternative flow cytometry method which identifies tumor cells expressing both CD99 and CD90 but which are negative for a hematopoietic panel including CD45, CD3, CD14, CD16, and CD19. CD90 is a cell surface protein expressed on some hematopoietic and nonhematopoietic stem cells as well as Ewing sarcoma cells [22]. They assessed previously frozen archival bone marrow samples from 46 patients, including 35 with localized tumors, as well as 10 control samples from patients without malignancy. While the control samples remained negative, CD99+/CD90+ cells were identified in all tested cell lines and patient samples. The range of tumor burden identified in the patient samples was 0.001-0.4%, and the reported sensitivity of the assay using spiking experiments was 0.001% (one tumor cell in 100,000 mononuclear cells). Tumor cells identified by this method were then tested for expression of CD56, which is an isoform of neural cell adhesion molecule (NCAM) found in natural killer cells and neuroectodermal derivatives, including Ewing sarcoma [23]. Sixty percent of the 45 diagnostic samples had high levels of CD56 expression (defined as present in >22% of tumor cells), and this identified a group with greater risk of recurrence. In fact, in this study high CD56 expression in CD99+/CD90+ cells was determined to be an independent prognostic marker with an 11-fold risk of relapse. Although these results should be confirmed in additional studies, they underscore that identification of molecular prognostic markers may be another potential application of flow cytometry.
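As an illustration of how such a sequential gate can be expressed computationally, the Python sketch below filters a table of flow events step by step. The column names and threshold values are placeholders chosen for illustration, not the calibrated settings used in the studies cited above.

```python
import pandas as pd

def gate_ewing_candidates(events: pd.DataFrame) -> pd.DataFrame:
    """Sequential gating sketch for CD99-bright, CD45-negative candidate cells.

    `events` holds one row per flow event with compensated intensities in
    columns 'viability', 'CD14', 'CD34', 'CD45' and 'CD99' (arbitrary units).
    Thresholds below are illustrative placeholders.
    """
    gated = events
    gated = gated[gated["viability"] < 1_000]   # gate 1: exclude dead cells
    gated = gated[gated["CD14"] < 500]          # gate 2: exclude monocytes
    gated = gated[gated["CD34"] < 500]          # gate 3: exclude early progenitors
    gated = gated[gated["CD45"] < 300]          # gate 4: exclude hematopoietic cells
    gated = gated[gated["CD99"] > 10_000]       # gate 5: keep CD99-bright events
    return gated

# Estimated tumor burden: candidate events divided by total analyzed events,
# e.g. len(gate_ewing_candidates(events)) / len(events).
```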
The fact that prior knowledge of a patient's specific translocation status is not required may make flow cytometry a relevant MRD assessment tool for all ES patients. In addition, the assay is rapid and less labor-intensive than RT-PCR, uses commercially available antibodies, and is well suited for overnight delivery and analysis at a central laboratory. For example, in children with acute lymphoblastic leukemia, flow cytometry performed in a central reference laboratory to assess response to induction therapy has been a feasible and reliable prognostic marker in multi-institutional studies [24] and has become part of the risk assessment strategy on Children's Oncology Group trials.
FISH for MRD Detection
Another potential method to assess MRD is the use of a FISH break-apart probe to identify translocations involving the EWS gene. This method is now commonly used as an adjunct to pathological diagnosis of ES in primary tumor samples. Although many FISH probes do not identify the partner gene for EWS, recent studies suggest the specific translocation partner does not hold prognostic significance for patients treated with contemporary therapy [8], and so knowledge of which gene fuses with EWS may no longer be relevant for the routine care of ES patients. FISH can also be readily performed on peripheral blood or bone marrow samples and has been used to monitor MRD in leukemia patients [25]. However, there are no previous reports to our knowledge of using FISH in ES for this purpose. In our institution, up to 500 cells are routinely counted when testing for minimal residual disease, which by definition limits the sensitivity to roughly one tumor cell in this number of cells. Potential advantages of FISH testing include the ability to easily test archived samples and the clear visual confirmation of the characteristic tumor-specific change in the EWS gene. However, even this can sometimes be difficult, depending on the probe being used. A false positive interpretation may occur due to DNA decondensation, which may cause the probes to be sufficiently separated to mimic a true break-apart event. This finding can generally be recognized by an expert cytogeneticist and must be carefully considered when interpreting positive samples. Figure 3 demonstrates findings seen in normal cells, cancer cells, and in cells deemed to be false positives due to this stretching artifact.
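The counting limit can be quantified with a simple binomial argument, assuming each counted cell is scored correctly and independently (optimistic, given the stretching artifact just described); the numbers below are ours, for illustration.

```python
def probability_all_negative(prevalence, cells_counted=500):
    """Probability that none of the counted cells shows the rearrangement,
    given a true prevalence among the cells on the slide."""
    return (1.0 - prevalence) ** cells_counted

# Counting 500 cells, a burden of 1 in 500 (0.2%) is still missed ~37% of the
# time, while a burden of 1 in 100 (1%) is missed well under 1% of the time.
print(probability_all_negative(0.002))   # ~0.37
print(probability_all_negative(0.01))    # ~0.007
```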
At our institution, we have conducted FISH testing on bone marrow aspirate samples from a limited number of ES patients for the past 5 years, using the Vysis EWSR1 dual-color break-apart probe (gene localized to chromosome 22q12; Abbott Molecular, Abbott Park, IL). The tests were obtained for clinical reasons at the discretion of the treating physician and so were not ordered in any systematic fashion. In fact, testing was not necessarily done on consecutive patients, or even on all samples from an individual patient. Generally, bone marrow samples were pooled together from both sides for a single analysis. FISH testing was performed on 21 bone marrow aspirates from 9 ES patients with either newly diagnosed or relapsed disease who were undergoing evaluations for routine clinical care at Cincinnati Children's Hospital. Of these 21 pooled samples, 14 were negative for tumor by both standard pathology assessment and FISH. In 6 samples from 3 patients, likely tumor cells were identified by FISH alone, with no tumor identified on conventional pathology evaluation. In those patients, the percentage of cells reported with possible EWSR1 rearrangement ranged from 0.2% to 7% (median 2.5%) of 200-500 tested mononuclear cells. One patient sample had unequivocal tumor cells identified by morphology on the bone marrow aspirate and biopsy but was negative by FISH. The reason for this false negative remains unclear, as FISH readily showed the characteristic EWSR1 break-apart in the primary bone tumor, as well as in a subsequent bone marrow sample obtained after induction chemotherapy, in which a low level of residual tumor cells was identified despite conventional morphology showing bone marrow remission. We conclude from this limited preliminary data that FISH analysis may detect tumor at low levels not appreciated by conventional morphology in 29% of samples, although one false negative test did occur.
Because the ideal method of MRD assessment in ES is unknown at this time, we are currently performing a trial which prospectively compares RT-PCR versus flow cytometry versus FISH in blood and marrow samples collected from ES patients. Results will be compared between methods as well as with bone marrow pathology reports and imaging studies to correlate the utility of MRD testing with other standard methods of disease assessment. Multiple institutions are participating, which will allow us to assess the feasibility of shipping samples overnight and testing the following day in a central laboratory.
Additional Issues regarding MRD Assessment
There are several issues which must be worked out for MRD assessment to have broad utility in ES. First, it is unclear which site (blood or bone marrow) will ultimately provide the greatest clinical relevance. In patients with extensive tumor burden, assessment of either site is likely to yield the same result, although these patients will benefit the least from MRD testing because their disease is already clinically apparent. For patients diagnosed with initially localized disease, the impact of minimal bone marrow involvement on outcome has been inconsistent in smaller studies [5, 14]. However, results were more convincing in the largest trial to date [2], which reported a decrease in 2-year disease-free survival from 80% to 53% when bone marrow MRD testing was positive (P = 0.043). It is possible that the impact on outcome only becomes apparent when a sufficiently large number of patients is tested. Another factor potentially leading to variable results is that bone marrow involvement in ES is more heterogeneous than that in leukemia, and it is common for morphologic assessments of disease to differ between sides, and between the aspirates and core biopsies. This was evident in our institutional experience using FISH, in which one patient had aspirates from each side analyzed separately, with disparate results (3% versus 7%). There is somewhat less data available regarding analysis of circulating tumor cells in ES. As with bone marrow, a convincing effect on survival related to circulating tumor cells at diagnosis is seen in the larger study [2] but not in some smaller studies [5, 6]. Collection of blood samples is far less cumbersome for patients than bone marrow, and is well suited for long-term monitoring either during or after completion of therapy. In fact, the latter approach may be particularly relevant, as several patients have been reported to have circulating tumor cells prior to clinically apparent relapse [5, 6, 16]. In one of the larger studies, 10 of 11 patients with recurrence had tumor cells identified in blood or bone marrow by RT-PCR prior to overt relapse, with a median time lag of 4.5 months (range 1-24 months) [5]. In our current trial, we are performing peripheral blood MRD evaluations any time patients undergo imaging assessments (at diagnosis, on therapy, or after therapy), while bone marrow testing is only performed when marrow samples would be routinely obtained for clinical purposes.
Quantification of RT-PCR results has not been generally reported, with the exception of Merino et al., who used real-time quantitative RT-PCR to estimate the effectiveness of a bone marrow purging method [15]. It is possible that this approach would provide standardization of methodology and consistency in determining exactly what constitutes a positive test result. Similar standardization attempts would be helpful for flow cytometry, given the difficulties in interpreting results when there are only one or two events in the gated field.
Another question is whether cells identified by these methods are truly cancer cells, as each assay has the potential for false positives. Although RT-PCR detects pathognomonic EWS changes not found in hematopoietic cells, contamination during RNA collection and testing may occur. For FISH, changes in the EWS gene during decondensation of DNA can cause an occasional cell to appear as if there were a true rearrangement, as discussed earlier and noted in Figure 2. For flow cytometry, despite the use of a panel of markers to exclude hematopoietic cells, there is always the possibility of illegitimate transcription of these hematopoietic markers in tumor cells. In fact, in the most recent report by Ash et al. [22], flow cytometry was reported to identify tumor cells in all 35 diagnostic bone marrow samples from patients with localized disease, an incidence of 100% that is in sharp contrast to all previous reports estimating the incidence of marrow micrometastases to be 20-30% in this patient population. Because the sensitivity of their assay is within the same range as that reported with RT-PCR, the question is raised whether all of these cells were indeed tumor cells. Efforts to reduce the potential for false positive results should continue.
Other methodologic issues include the specific protocols regarding how samples are collected and in what volume. Using a large volume (10 mL or perhaps more) may be ideal for collecting blood samples, particularly in patients who are on therapy and who may have treatment-related reductions in the number of circulating mononuclear cells. However, more may not necessarily be better with bone marrow collections, as demonstrated in a recent report by Helgestad et al. [26]. They showed that the density of nucleated cells in the bone marrow of leukemia patients is markedly reduced with larger-volume aspirates, owing to dilution with peripheral blood during the collection. In fact, this dilution effect from larger aspirations resulted in several samples being interpreted as negative (below the limits of sensitivity by flow cytometry), despite clearly containing >0.1% tumor cells in the first small-volume sample withdrawn. Further, bilateral bone marrow aspirations are routinely performed for ES patients, due to the typically patchy tumor involvement. Most studies do not specify whether both sides are pooled together or analyzed separately. Attention to standardization of collection procedures will help improve interpretation of test results.
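To make the dilution effect concrete, the following sketch works through purely hypothetical numbers: the 0.1% value is treated as a notional flow-cytometry sensitivity floor (suggested by, but not explicitly stated in, the text), and the assumed marrow tumor fraction and per-mL dilution rate are illustrative only, not values reported by Helgestad et al.

# Hypothetical illustration: how peripheral-blood dilution in larger-volume
# aspirates can push the measured tumor fraction below a notional 0.1%
# sensitivity floor, even when the first small-volume pull is clearly positive.

SENSITIVITY_FLOOR = 0.001      # notional 0.1% detection floor (assumption)
TRUE_MARROW_FRACTION = 0.003   # assumed tumor fraction in undiluted marrow (0.3%)

def pooled_tumor_fraction(volume_ml, dilution_per_ml=0.25):
    """Tumor fraction of a pooled aspirate, assuming each additional mL drawn
    is progressively more diluted with tumor-free peripheral blood."""
    fractions = []
    for ml in range(int(volume_ml)):
        marrow_share = max(0.0, 1.0 - dilution_per_ml * ml)  # marrow vs. blood in this mL
        fractions.append(TRUE_MARROW_FRACTION * marrow_share)
    return sum(fractions) / len(fractions)

for vol in (2, 5, 10):
    f = pooled_tumor_fraction(vol)
    status = "detectable" if f >= SENSITIVITY_FLOOR else "below sensitivity floor"
    print(f"{vol:2d} mL aspirate: measured tumor fraction {f:.3%} -> {status}")

Under these assumptions a 2 mL pull remains clearly positive, while the pooled 10 mL aspirate falls below the floor, mirroring the pattern described above.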
Finally, it remains unclear which assay has the greatest utility. Because of the success of flow cytometry for MRD assessment in leukemia, the ready availability of commercial antibodies, and the encouraging results noted so far in preliminary studies, it is likely that there will be further exploration of flow cytometry for MRD detection in ES. Results from ongoing trials which directly compare these methodologies will hopefully indicate which assay to study in larger prospective clinical trials.
Summary
Detection of MRD in blood or bone marrow is best established for patients with childhood leukemia, where flow cytometry to assess response to therapy is now a standard part of risk assessment [24]. In adult carcinomas, FDA-approved methods like the CellSearch assay identify circulating tumor cells through positive enrichment using epithelial cell-specific Ep-CAM antibodies followed by image analysis [27] and are now widely employed. Among pediatric solid tumors, there has been considerable work in MRD detection in neuroblastoma (reviewed in [28]), which like ES is characterized by disease recurrence following complete remission in a substantial subset of patients. ES appears particularly well suited for MRD detection due to tumor-specific translocations that facilitate RT-PCR and FISH detection as well as expression of tumor-specific cell surface proteins like CD99 that facilitate detection by flow cytometry. Studies using RT-PCR have demonstrated that otherwise occult tumor cells can indeed be identified at initial diagnosis in the blood and/or marrow of approximately one-fifth of ES patients with otherwise localized disease and that such patients generally have inferior outcome. Smaller studies have shown that return of MRD as detected in the blood or marrow often precedes clinically apparent relapse and that MRD assessment can be used to follow response to chemotherapy regimens. Although effective therapeutic interventions for these findings may not yet be available in some cases, the results to date support the contention that clinically meaningful information can be obtained from assessing MRD in ES patients, and that further study is indicated.
While the aforementioned studies have used RT-PCR, flow cytometry offers a commercially available, less labor-intensive approach with similar sensitivity that may be more widely applicable, given that detection does not require prior knowledge of the particular chromosomal translocation. This method may also be less susceptible to degradation of sample integrity if overnight shipping to a central laboratory is required. However, further validation in additional studies is required, and standardization of sample collection, testing methods, and reporting of results will be critical. Trials are currently underway that will compare these modalities with each other, and will compare MRD test results with imaging studies and overall outcome, to further define the utility and clinical relevance of MRD assessment in this disease.
"Biology",
"Medicine"
] |
The extended mind argument against phenomenal intentionality
This paper offers a novel argument against the phenomenal intentionality thesis (or PIT for short). The argument, which I'll call the extended mind argument against phenomenal intentionality, is centered around two claims: the first asserts that some source intentional states extend into the environment, while the second maintains that no conscious states extend into the environment. If these two claims are correct, then PIT is false, for PIT implies that the extension of source intentionality is predicated upon the extension of phenomenal consciousness. The argument is important because it undermines an increasingly prominent account of the nature of intentionality. PIT has entered the philosophical mainstream and is now a serious contender to naturalistic views of intentionality like the tracking theory and the functional role theory (Loar 1987, 2003; Searle 1990; Strawson 1994; Horgan and Tienson 2002; Pitt 2004; Farkas 2008; Kriegel 2013; Montague 2016; Bordini 2017; Forrest 2017; Mendelovici 2018). The extended mind argument against PIT challenges the popular sentiment that consciousness grounds intentionality.
The argument is centered around two claims: the first asserts that some source intentional states extend into the environment, while the second maintains that no conscious mental states extend into the environment. If these two claims are correct, then PIT is false, for PIT implies that the extension of source intentionality is predicated upon the extension of phenomenal consciousness. Put differently, I submit that the following three propositions, when properly understood, constitute an inconsistent triad: (1) the extended mind thesis is true, (2) the extended consciousness thesis is false, and (3) PIT is true. 1 To avoid a contradiction it must be the case that at least one of these propositions is false. The argument presented here motivates (1) and (2) in an effort to refute (3).
The argument is important because it undermines an increasingly prominent account of the nature of intentionality. PIT has entered the philosophical mainstream and is now a serious contender to naturalistic views of intentionality like the tracking theory and the functional role theory (Bordini, 2017; Farkas, 2008; Forrest, 2017; Horgan & Tienson, 2002; Kriegel, 2013; Loar, 1987, 2003; Mendelovici, 2018; Montague, 2016; Pitt, 2004; Searle, 1990; Strawson, 1994). The extended mind argument against PIT challenges the popular sentiment that intentionality is grounded in consciousness. Notably, this paper aims not to prove with certainty that the extended mind argument against PIT is sound but rather to show that each premise of the argument is highly plausible given certain philosophical assumptions about the nature of consciousness and extended cognition.
The general structure of the paper is as follows. Section 1 introduces PIT and briefly contrasts the view with naturalistic theories of intentionality. Section 2 presents the extended mind argument against PIT and describes Clark and Chalmers' (1998) extended mind thesis, clarifying that the thesis does not tacitly assume the falsity of PIT by presupposing the functional role theory of intentionality. Section 3 draws on an argument from Clark (2008), which I call the self-stimulating loop argument, to motivate the idea that the extended mind thesis is not restricted to dispositional states with derived intentionality but includes within its scope some occurrent propositional attitudes with source intentionality (i.e. some source intentional states are extended). Section 4 then draws on an argument from Chalmers (2018), which I call the direct access argument, to substantiate the claim that the extended consciousness thesis is false (i.e. no conscious states are extended). The conjunction of these two subarguments functions to undermine PIT by showing that some extended mental states possess source intentionality but lack phenomenal consciousness. Section 5 concludes.
The phenomenal intentionality thesis
The phenomenal intentionality thesis (PIT) is an increasingly popular view in the philosophy of mind about the nature of intentionality. Before unpacking the view, it will be instructive to distinguish between source intentionality and derived intentionality and introduce the two primary theories of intentionality that serve as rivals to PIT.
There are many different theories about the origin of intentionality. Nearly everyone agrees that, regardless of its origin, intentionality can be 'passed around' (Kriegel, 2013), so to speak, to other things which did not previously have it. 2 For example, both linguistic signs and physical road signs exhibit 'aboutness', or have a representational nature, but they do not do so intrinsically; rather, the aboutness of both types of signs derives from the aboutness of the intentional mental states of the humans responsible for fixing their respective meanings. Reflection upon the intentionality of signs suggests that one can draw a distinction between two basic kinds of intentionality: (a) source intentionality (or 'original intentionality' as it is sometimes called), and (b) derived intentionality. 'Source intentionality' refers to those things that are intrinsically intentional (and so that serve as the source of intentionality), whereas 'derived intentionality' refers to those things that have intentionality in virtue of some other thing that has intentionality. Providing an account of derived intentionality is an essential task for any complete theory of intentionality. 3 However, the main goal of such a theory is to explain the nature of source intentionality.
During the twentieth century, the popular philosophical project concerning source intentionality was to 'naturalize' it by providing a reductionistic explanation of the phenomenon in terms of properties and processes that are fully comprehensible by natural science (Dretske, 1981, 1995; Fodor, 1987, 1990; Millikan, 1984; Neander, 1996; Papineau, 1984; Rupert, 1999; Stampe, 1977). 4 The two most popular naturalistic theories of this sort are tracking theories of intentionality and functional role theories of intentionality. Tracking theories conceptualize source intentionality in terms of tracking relations, where 'tracking' is a matter of brain states "detecting, carrying information about, or otherwise corresponding with external items in the environment" (Mendelovici & Bourget, 2014, p. 326). Functional role theories, by contrast, define source intentionality in terms of the functional roles that brain states play.
The phenomenal intentionality thesis (PIT) is a relatively new theory of intentionality that is increasingly regarded as a promising alternative to tracking theories and functional role theories. While the idea of phenomenal intentionality derives from the work of Brian Loar at the end of the 1980s (Loar, 1987), the term 'phenomenal intentionality' did not officially enter the philosophical lexicon until the early 2000s (Horgan & Tienson, 2002; Loar, 2003). PIT can be understood as the conjunction of the following two claims: (1) There is a kind of intentionality, called 'phenomenal intentionality', grounded in a type of phenomenal character (i.e. a type of conscious experience).
(2) Phenomenal intentionality is the only kind of source intentionality.
There are many reasons to endorse PIT; a significant one is that the view arguably avoids the problem of content determinacy that afflicts naturalistic theories of intentionality. Source intentional states possess determinate content, which is to say, the intentional objects of such states are represented in a fine-grained, unambiguous manner. Kriegel defines the concept of 'determinate content' in the following manner: "By 'determinate content', I simply mean content which is as fine-grained as one's intentional contents appear pre-theoretically to be. For example, pre-theoretically it seems that one's thoughts are fine-grained enough to be about rabbits rather than undetached rabbit parts, about Phosphorus rather than Hesperus, about triangles rather than closed trilateral figures, and so on. If a kind of intentional state is not this fine-grained, I say that its content is indeterminate" (Kriegel, 2013, p. 10).
Functional role theories and tracking theories of intentionality both appear unable to account for the fact that source intentional states bear determinate content. Bourget and Mendelovici (2014) formulate the problem in an epistemic fashion via Quine's (1960) famous 'rabbit/undetached rabbit parts' example. They point out that complete knowledge of all of the relevant tracking relations or functional role relations does not translate into knowledge of determinate content: "A Martian looking down on Earth and having complete knowledge of all Earthly physical facts could not tell whether we are representing rabbits or undetached rabbit parts. Thus, it appears that a physical-functional theory of intentionality will predict that one's concept RABBIT is indeterminate between the two contents" (Bourget & Mendelovici, 2014). The idea, to paraphrase, is that naturalistic theories of intentionality confront an epistemic problem of content determinacy that arguably has ontological implications. Natural properties are seemingly unable to secure determinate mental content if knowledge of all relevant naturalistic facts does not convert into knowledge of content determinacy. This is where PIT enters the picture. Graham et al. (2007), Horgan and Graham (2012), and Searle (1990) all contend that phenomenal consciousness is the only thing capable of securing determinate mental content. This is known as the content determinacy argument for PIT, which is just one out of many existing arguments for the view. 5
5 Horgan and Tienson (2002) present a phenomenological argument supporting PIT, according to which it is introspectively obvious that some conscious experiences are source intentional and source intentional in virtue of being phenomenal. Two additional arguments for the view come from the work of Charles Siewert and Brian Loar, respectively. Siewert (1998) avers that PIT is true in virtue of the fact that conscious experiences are assessable for accuracy, whereas Loar (2003) connects internalism about mental content with PIT, arguing that the latter theory is true on the basis of the former.
The extended mind argument against phenomenal intentionality
I will now present the extended mind argument against PIT. As is indicated by its name, the argument appeals to conceptual tools from the extended mind literature (Chalmers, 2018; Clark, 2008; Clark & Chalmers, 1998; Colombetti & Roberts, 2015; Farkas, 2012; Muller, 2012; Rowlands, 2009; Vold, 2015), and in particular, is motivated by a perceived asymmetry according to which some source intentional mental states extend into the environment (i.e. the extended mind thesis is true), but no conscious mental states extend into the environment (i.e. the extended consciousness thesis is false). Call this the bipartite extension intuition (BEI). If BEI is true, then PIT is false, for BEI entails that there are cases of extended cognition that are source-intentional but non-phenomenal, while PIT claims that source-intentional states are necessarily phenomenal. The argument has the following structure:
The extended mind argument against phenomenal intentionality
P1 Some source intentional states are extended (i.e. the extended mind thesis is true).
P2 No conscious states are extended (i.e. the extended consciousness thesis is false).
P3 P1 and P2 are mutually inconsistent if PIT is true.
C Therefore, PIT is false.
P3 is true by definition. If phenomenal intentionality is the only kind of source intentionality (i.e. PIT is true), then the extension of source intentional states implies the extension of phenomenally conscious states. That is, the combination of PIT and P1 entails the extended consciousness thesis. However, P2 rejects the extended consciousness thesis, meaning that P1 and P2 are mutually inconsistent, assuming the veracity of PIT. The proponent of PIT must, therefore, either uphold P1 and reject P2 (by affirming that some conscious states are extended), uphold P2 and reject P1 (by affirming that no source intentional states are extended), or reject both P1 and P2. The proponent cannot support both premises without violating the fundamental tenet of PIT because P1 and P2 jointly imply the existence of an extended source intentional state that lacks phenomenal consciousness, and therefore, phenomenal intentionality. So if it is held fixed that both P1 and P2 are true and mutually consistent with one another, then it must be the case that PIT is false. Put another way, the argument asserts that the conjunction of PIT and the extended mind thesis leads to a contradiction because this conjunction entails the extended consciousness thesis, but the extended consciousness thesis is false.
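The inconsistency can also be displayed in a minimal predicate-logic sketch (the notation is illustrative only): let S(x), C(x), and E(x) abbreviate 'x is source intentional', 'x is phenomenally conscious', and 'x is extended', and let the consequence of PIT that matters here be that every source intentional state is phenomenally conscious.
\[
\text{P1: } \exists x\,(S(x) \wedge E(x)) \qquad
\text{P2: } \neg\exists x\,(C(x) \wedge E(x)) \qquad
\text{PIT: } \forall x\,(S(x) \rightarrow C(x))
\]
From P1 and PIT it follows that \(\exists x\,(C(x) \wedge E(x))\), which contradicts P2; so if P1 and P2 stand, PIT must be rejected.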
The strength of the extended mind argument against PIT turns on the plausibility of P1 and P2, the combination of which constitutes BEI. Before turning towards these premises, it is vital first to address an objection that one might raise to the argument as a whole. Namely, Objection 1 (O1) The extended mind thesis tacitly assumes the truth of the functional role theory of intentionality, and thus, tacitly assumes that PIT is false. Therefore, one cannot invoke the extended mind thesis to argue against PIT without begging the question.
O1 suggests that I beg the question right out of the gate simply by casting my argument against PIT within the context of the extended mind framework. One cannot fully appreciate the reasoning behind O1 without first having a grasp of the extended mind thesis. The extended mind thesis derives from Andy Clark and David Chalmers (1998). They contend that cognitive processes can under certain conditions transcend the boundaries of the skull and seep out into the external world such that extra-cranial entities partly constitute them. 6 Clark & Chalmers' original argument for the extended mind is motivated primarily by what they call the parity principle, which establishes the possibility of extended cognitive processes based on functional equivalency considerations: "If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process" (Clark & Chalmers, 1998, p. 8). Clark and Chalmers apply the parity principle to the now-famous case of Otto, a fictional Alzheimer's patient who carries around a notebook as a substitute for his impaired biological memory. Given that Otto's notebook plays the same functional role in his cognitive economy that biological memory would otherwise play, Clark and Chalmers argue that the notebook and the writings therein should be conceptualized as a part of the realization base for some of Otto's dispositional belief states (specifically his belief about the location of the Museum of Modern Art).
It is because the extended mind thesis presupposes the parity principle that O1 may appear compelling. The parity principle seems to assume a functionalist picture of mentality, according to which X counts as mental if and only if X plays the requisite functional role in the overall cognitive system. 7 What is true about mentality, so the idea goes, is also true about intentionality. Therefore, the parity principle, and by extension, the extended mind thesis, is committed to the functional role theory of intentionality.
O1 is flawed because it falsely assumes that what is true about the nature of mentality must also be true about the nature of source intentionality. It does not follow from the fact that the extended mind theorist is committed to a functionalist theory of mentality that she must also embrace a functionalist theory of source intentionality. This is because the question of what constitutes the mark of the mental is distinct from the question of what constitutes source intentionality. Some philosophers, to be sure, would deny the conceptual independence of these questions precisely because they identify source intentionality as the mark of the mental. But the philosophers who make this identification are not advocates of the extended mind thesis. Quite to the contrary, Adams & Aizawa (2008) equate source intentionality with the mark of the mental as a way to argue against the extended mind thesis. 8 The extended mind theorist can thus uphold a functional role theory of mentality without thereby being committed to a functional role theory of intentionality. Functionalism about mentality is compatible with PIT, which is to say, mentality may be defined in functional terms while source intentionality is defined in phenomenal terms. O1 can thus be discarded. I will now examine P1 and P2 (i.e. BEI) of the extended mind argument against PIT, starting with P1, the premise that some source intentional states are examples of extended cognition.
Premise 1: Some source intentional states are extended
Upon first glance, P1 may even strike proponents of the traditional extended mind thesis as untenable, for they might maintain that only dispositional mental states with derived intentionality can be extended. Dispositional mental states are states that an agent is disposed to instantiate given certain conditions, whereas occurrent mental states are states that an agent is currently instantiating. A mental state is occurrent if and only if a subject is actively entertaining the state. Dispositional mental states (e.g. my standing belief that Washington, D.C. is the United States capital) ostensibly inherit their intentionality from their occurrent counterparts, and therefore, only have derived intentionality.
Clark and Chalmers' original presentation of the extended mind thesis argues for extended dispositional mental states, not extended occurrent mental states. Namely, Otto's dispositional belief concerning the location of the Museum of Modern Art extends into the environment in virtue of being constituted by a written proposition located in his notebook. Otto's occurrent belief about the museum's location, however, which he instantiates after consulting his notebook, is presumably a completely internal affair that takes place within the confines of his head. What one needs to support P1 is an independent argument for why some occurrent mental states are extended. In what follows, I draw attention to a prominent case promulgated by Clark (2008) that lends credence to the idea that the extended mind thesis encompasses some occurrent states.
Clark's (2008) argument for extended occurrent states, like the original extended mind argument for extended dispositional states (1998), assumes a broadly functionalist picture of mentality. The novelty of Clark's newer argument derives from his use of the functional concept of a self-stimulating loop. A system X enacts a self-stimulating loop when X produces outputs that it then recycles back into inputs. Clark illustrates the concept via the example of a turbo-driven engine, which uses its own emissions as a self-generating boost. His central contention is that occurrent cognitive processes count as extended when these processes become transiently coupled with external entities to produce a self-stimulating feedback loop. Clark begins to make this case by drawing attention to bodily gestures, arguing that gestures are not merely expressions of thought but are often constitutive components of occurrent thought processes which function as both systemic outputs and self-stimulated inputs in a subject's cognitive system: "the act of gesturing is part and parcel of a coupled neural-bodily unfolding that is itself usefully seen as an organismically extended process of thought" (Clark, 2008, p. 144). 9
The concern for Clark's appeal to bodily gesture is that such an appeal at best establishes that cognition is embodied, not extended. To be extended, cognitive processes must be partly constituted by entities in the outer environment, not simply by extra-cranial body parts. Clark recognizes this, of course, and proceeds to explain that occurrent cognitive processes can form self-stimulating loops with external entities as well. In offering this explanation, he focuses on processes that are aided by the act of writing: "This kind of cognitively pregnant unfolding need not stop at the boundary of the biological organism. Something very similar may, as frequently remarked, occur when we are busy writing and thinking at the same time. It is not always that fully formed thoughts get committed to paper. Rather, the paper provides a medium in which, this time via some kind of coupled neural-scribbling-reading unfolding, we are enabled to explore ways of thinking that might otherwise be unavailable to us" (Clark, 2008, p. 144).
The mental acts of calculation and planning are good examples of the kind of occurrent cognitive processes that Clark seems to have in mind: processes enhanced through the medium of writing by forming self-stimulating feedback loops with the independent artifacts constitutive of writing (e.g. pen and paper). The marks that I make on a piece of paper when thinking through a math problem, for example, are external outputs of cognition which are then redistributed as stimulating inputs during my occurrent act of calculation. So the written marks are not merely material manifestations of thought but are active drivers of my mathematical thinking operation. Clark's basic stance, to repeat, is that occurrent cognitive processes extend into the environment when deeply and reciprocally interwoven with external entities in a feedback loop of this sort.
My intention in this section is to transform Clark's discussion of self-stimulating loops into a formal argument for P1. The key sub-premise of this argument, which I call the self-stimulating loop argument, can be represented as follows:
SP1 The occurrent states involved in self-stimulating feedback loops are extended cognitive systems.
Notice that SP1 does not on its own entail the veracity of P1. To reach the conclusion that some source intentional states are extended, something like the following additional sub-premise needs to be made explicit: All occurrent states possess source intentionality. This additional sub-premise appears to be on solid footing, for as previously mentioned, it is commonly assumed in the literature that the intentionality of dispositional mental states derives from the source intentionality of their occurrent counterparts. There are various philosophers of mind, however, who reject the assumption that all occurrent states are source intentional. For example, David Woodruff Smith (2011) argues that there are occurrent states of 'pure' consciousness achievable via meditation that are entirely devoid of intentionality. 10 I propose that the key to strengthening the self-stimulating loop argument for P1 is to restrict the scope of the argument to the class of occurrent mental states known as propositional attitudes. Propositional attitudes are types of mental states wherein a subject bears a cognitive relation to a proposition. Examples of such attitudes include (but are not limited to) states of belief, desire, understanding, and imagination. The assumption that all occurrent propositional attitudes (or "OPAs") have source intentionality, combined with the claim that some OPAs extend into the environment via self-stimulating loops, ostensibly entails the veracity of P1. When conceptualized in this way, the self-stimulating loop argument has the following structure:
The self-stimulating loop argument for P1
SP1 The occurrent states involved in self-stimulating feedback loops are extended cognitive systems.
SP2 Some occurrent states involved in self-stimulating feedback loops are propositional attitudes.
SC1 Therefore, some occurrent propositional attitudes are extended.
SP3 All occurrent propositional attitudes possess source intentionality.
P1 Therefore, some source intentional states are extended.
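For readers who want the validity of this sub-argument made fully explicit, here is a minimal first-order sketch (the predicate letters are illustrative only): let L(x) = 'x is an occurrent state involved in a self-stimulating feedback loop', E(x) = 'x is extended', A(x) = 'x is an occurrent propositional attitude', and S(x) = 'x possesses source intentionality'.
\[
\text{SP1: } \forall x\,(L(x) \rightarrow E(x)) \qquad
\text{SP2: } \exists x\,(L(x) \wedge A(x)) \qquad
\text{SP3: } \forall x\,(A(x) \rightarrow S(x))
\]
SP1 and SP2 jointly yield SC1, \(\exists x\,(A(x) \wedge E(x))\), and SC1 together with SP3 yields P1, \(\exists x\,(S(x) \wedge E(x))\).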
Consider first SP2, which I take to be well substantiated. The occurrent states of <planning> and <calculating> that Clark focuses on when presenting the self-stimulating loop argument appear to be propositional attitudes, given that these states are both linguistically represented by verbs which embed 'that-clauses.' Moreover, even if the states of <planning> and <calculating> are not propositional attitudes, it is easy to imagine how one might expand Clark's argument for extended occurrent states so that it includes within its purview standard propositional attitudes such as beliefs, judgments, desires, and fears. Colombetti and Roberts (2015), for instance, illustrate how Clark's argument can be understood to include occurrent judgments within its scope. They consider the fictional case of Eve, a teenage girl who writes in her diary when she is upset with her parents: "The case of Eve, as we imagine it, is one where, as she writes that her parents do not listen to her, do not appreciate her, and so on, she is engaged in unfolding and articulating a specific evaluative judgment, which the act of writing down helps to clarify and structure. This act also feeds back into Eve, influencing her overall evaluative perspective as she continues to be engaged in the activity" (Colombetti & Roberts, 2015, p. 1258). The case of Eve is supposed to be an example of how a standard OPA can meet Clark's 'self-stimulating loop' criteria for mental extension. Specifically, Eve's act of writing in her notebook serves as a self-stimulating feedback loop in her occurrent act of judging her parents. 11
Turn now to SP3, which also strikes me as highly credible. SP3 gains support from the fact that OPAs are paradigmatic examples of source intentional states in the literature, which is to say, all theories of source intentionality (including PIT, tracking theories, and functional role theories) appear to either explicitly or implicitly recognize SP3 to be true. While there is controversy in the philosophy of mind over whether all mental states have intentionality, and even over whether all occurrent states have source intentionality, it is uncontroversial to declare that all OPAs have source intentionality. This being said, the objector might contend that SP3 does not necessarily follow from the fact that OPAs are paradigmatic examples of source intentional states. Just because a genus X is a paradigmatic example of that which has property P does not mean that all species of X instantiate P. For instance, birds are paradigmatic examples of flying animals, but it is not the case that all species of birds can fly (e.g. ostriches). Analogously, the objector might grant that OPAs are paradigmatic examples of source intentionality, but deny that all OPAs are source-intentional. The most natural way for the proponent of PIT to maintain such a position is to allege that some OPAs lack phenomenal consciousness. 12 If OPAs include non-conscious states, then to assume that all OPAs are source intentional is to presuppose that PIT is false, for PIT maintains that source intentionality is phenomenally conscious intentionality. Thus, one might attempt to resist SP3 on the grounds that it begs the question against PIT:
Objection 2 (O2) The self-stimulating loop argument fails because SP3 begs the question against PIT.
The advocate of O2 is correct to suggest that a species X does not necessarily have property P just because the genus to which X belongs is a paradigmatic example of that which instantiates P. However, the fact that OPAs are paradigmatic examples of source intentional states lends a significant degree of credence to SP3, and the burden of proof is surely on those who wish to reject this sub-premise. The proponent of PIT might try to reject SP3 by affirming that some OPAs are non-conscious, but I deny that said proponent has the conceptual resources to make this affirmation.
First, PIT is arguably wedded to the idea that all OPAs are phenomenally conscious in virtue of being motivated by what Kriegel (2013) calls inseparatism, the view that "paradigmatic sensory states in fact exhibit intentionality, which is moreover grounded by their phenomenality, and that paradigmatic cognitive states in fact boast a phenomenality, which moreover grounds their intentionality" (Kriegel, 2013, p. 5). Inseparatism contrasts with a more traditional view of mind in analytic philosophy that Horgan and Tienson (2002) label 'separatism', which holds that sensory mental states are phenomenally conscious but non-intentional, and cognitive mental states are intentional but non-phenomenal. Since PIT is partly motivated by inseparatism, and OPAs are paradigmatic cognitive states, it would seem as if PIT is committed to the notion that all OPAs are phenomenal. Second, the proponent of PIT cannot hold that some OPAs are non-conscious without also claiming that the OPAs in question have either derived intentionality or lack intentionality altogether. But it is wildly implausible that OPAs lack intentionality altogether, and it is even difficult to see how they could have derived intentionality. To say that an OPA has derived intentionality is to say that its intentionality is explainable in terms of some further intentional system. The problem is that one cannot easily understand the intentionality of OPAs in terms of some further intentional system; at least not in the same way that the intentionality of things like highway signs and linguistic objects can be understood in terms of the intentional mental states of the humans responsible for fixing their respective meanings. These reasons suggest that PIT is committed to SP3, which means that SP3 does not beg the question against PIT. 13
The most controversial sub-premise of the self-stimulating loop argument is undoubtedly SP1. The primary way to reject SP1 is to follow Robert Rupert (2004, 2009a, 2009b, 2011) in distinguishing the extended mind thesis from the embedded mind thesis and claim that the self-stimulating loop case is best interpreted through the framework of the embedded model. This gives rise to the following objection:
Objection 3 (O3) SP1 is false because self-stimulating loop cases are best construed through the framework of embedded cognition over the framework of extended cognition.
The embedded mind thesis advocates for an internalist vision of cognition while upholding that cognitive systems are nevertheless deeply interactive with and perhaps even causally reliant upon external entities. Rupert defines embedded cognition thusly: "Cognitive processes depend very heavily, in hitherto unexpected ways, on organismically external props and devices and on the structure of the external environment in which cognition takes place" (Rupert, 2004, p. 393). The thesis of embedded cognition is significantly more conservative than the extended view, which presents an externalist vision of cognition according to which cognitive systems can temporarily extend beyond the body into the outer world. Rupert argues that the embedded model is preferable over the extended model because the former possesses more explanatory power and is less metaphysically profligate than the latter. He thus denies that self-stimulating loop cases are genuine cases of mind extension. While cognition may intimately depend upon external artifacts during loop-related activity, the artifacts are not themselves a part of cognition, according to Rupert. 14
13 Upon first glance, it might seem as if Angela Mendelovici (2018, 2020) defends a view of OPAs which rejects SP3. Mendelovici is one of the leading proponents of PIT in the field and advocates for a view of OPAs, which she calls self-ascriptivism, according to which "propositional attitudes are derived representational states, deriving their contents and their attitude types from our self-ascriptions" (Mendelovici 2020: 1). Since derived representational states lack intentionality altogether on Mendelovici's view, it might seem as if she not only rejects SP3, or the assertion that all OPAs are source intentional, but that she also rejects the more moderate assertion that some OPAs are source intentional. However, Mendelovici's self-ascriptivism only accounts for the alleged contents of propositional attitudes and does not account for the immediate contents of such attitudes. The immediate contents of propositional attitudes are the contents that are directly accessible via introspection. In contrast, the alleged contents are the contents that folk psychology takes propositional attitudes to have, or the contents that we normally attribute to 'that-clauses'. Mendelovici identifies the immediate contents of propositional attitudes with phenomenal contents, meaning that she believes that OPAs contain a phenomenally conscious aspect. She says, "we think in phenomenal tags that we take to stand for more complex and sophisticated contents" (Mendelovici 2018: 154). This filled-in picture of Mendelovici's view of OPAs allows us to see that she does not actually reject SP3 but instead holds that OPAs have both a phenomenal intentional aspect (i.e. immediate contents) and a non-phenomenal, non-intentional aspect (i.e. alleged contents).
Rupert's main contention in favor of the embedded model is that internal cognitive processes and their external functional analogs possess such different causal profiles that one should conceptualize them as distinct natural kinds. 15 Regarding self-stimulating loops in particular, Rupert (2009a, 2009b) avers that outer contributions to the production of intelligent behavior during loop-related activity asymmetrically depend on the contributions of the internal cognitive architecture. For example, the sketchpad in the aforementioned self-stimulating loop case is reliant upon some set of neural activity to help drive cognition, but the relevant neural systems are not in turn reliant upon the sketchpad, given that these systems alone, in the absence of any external scaffolding, are sufficient for the realization of cognitive processes. Asymmetric relations like these between internal cognitive mechanisms and external cognitive artifacts lead Rupert to conclude that there is no compelling theoretical reason to regard self-stimulating loops as instances of extended cognition. Worse still, Rupert (2009a) argues that adopting the extended mind framework in self-stimulating loop cases comes at a significant scientific cost. He stresses that widespread acceptance of self-stimulating loops as extended cognitive systems would inhibit our ability to locate persisting biological subjects for psychological and cognitive scientific research. 16 For all of these reasons, Rupert alleges that external entities implicated in loop-related activity are properly construed as cognitive scaffolds rather than cognitive extensions. Hence O3.
Clark (2008) presents a multifaceted response to O3 in defense of SP1. First, he repudiates the idea that external entities involved in self-stimulating loops should not be regarded as cognitive simply because they do not share the same fine-grained causal profile as their neural counterparts. Clark says this idea is the product of unjustified anthropocentrism and neurocentrism and results from a misinterpretation of the parity principle. The parity principle, according to Clark, does not mandate a fine-grained identity of causal profile but is rather "a call for sameness of opportunity, such that bio-external elements might turn out to be parts of the machinery of cognition even if their contributions are unlike (perhaps deeply complementary to) those of the biological brain" (Clark, 2008, p. 135). Furthermore, Clark denies that relations of asymmetric dependence are relevant to the question of individuating cognitive systems. He supports this denial by noting that the internal neural architecture is composed of various mechanisms, some of which are themselves asymmetrically dependent on others for their role in cognition. Since these internal neural mechanisms clearly qualify as cognitive despite being asymmetrically dependent, Clark avers that external entities implicated in loop-related activity which are similarly asymmetrically dependent should also be eligible to be granted the status of 'cognitive.'
15 Rupert (2004) draws upon psychological research on working memory to make this contention, showing that internal memory processes possess significantly different fine-grained causal profiles than forms of external memory storage. For example, unlike external memory, internal working memory is integral to the successful execution of verbal exchanges, and is associated with particular kinds of interference patterns in paired-associates experiments.
16 For example, with respect to developmental psychology, Rupert maintains that the extended mind thesis "offers developmental psychologists no more reason to be interested in, for example, the series of temporal segments we normally associate with Sally from ages two-to-six than it offers to be interested in, say, Sally, aged two, together with a ball she was bouncing on some particular day, Johnny, aged five, together with the book he was reading on some particular afternoon, and Terry, aged seven, plus the stimulus item he has just been shown by an experimenter" (Rupert 2009a: 15).
Finally, in response to Rupert's assertion that self-stimulating loops are not instances of extended cognition because interpreting them as such comes at a high scientific cost, Clark distinguishes between organism-centered cognition and organism-bound cognition. He maintains that the extended mind thesis does not diminish the scientific concept of a stable, persisting biological subject given that it preserves the vision of cognition as organism-centered even though it denies that cognition is organism-bound. 17 Clark thus disaffirms that conceptualizing self-stimulating loops as extended cognitive systems fails to dovetail with the methodology of cognitive science. Quite to the contrary, he insists that the embedded mind thesis is the one in danger of falling victim to a nonscientific conceptual paradigm, avowing that the embedded model "threatens to repeat for outer circuits and elements the mistake that Dennett (1991) warns us against with regard to inner circuits and elements. It [the embedded mind thesis] depicts such outer resources as doing their work only by parading structure and information in front of some thoughtful inner overseer" (Clark, 2008, p. 153). According to Clark, once we dispense with the scientifically flawed notion of a privileged inner homunculus that directs the flow of all bio-external information coming into the brain, the extended mind interpretation of self-stimulating loop cases becomes significantly more credible.
Suppose now that Clark's response to O3 is successful, meaning that the OPAs involved in loop-related activity are cases of extended cognition. Even if this is true, and one acknowledges that all OPAs have source intentionality, the objector might nevertheless insist that self-stimulating loops fail to demonstrate that some source intentional states are extended. In other words, the objector might concede that all of the premises of the self-stimulating loop argument are true but maintain that the argument is invalid because the premises do not jointly imply P1. The idea, in particular, is that P1 does not follow from the conjunction of SC1 and SP3. According to this final possible objection, one cannot use the extended mind thesis to infer extended intentionality because it is a thesis about representational vehicles and not representational contents. Hence,
Objection 4 (O4)
The self-stimulating loop argument is invalid because the extended mind thesis concerns representational vehicles, not representational contents.
When discussing the concept of mental representation, philosophers of mind often distinguish between the vehicle of representation and the content of representation (Hurley, 1998). The vehicle denotes the physical substrate that does the representing, whereas the content picks out that which is represented (i.e. what the representation is 'about'). The important point in this context is that intentionality is a feature of representational contents, not vehicles, and the extended mind thesis appears to pertain exclusively to vehicles. Indeed, Clark & Chalmers originally classified the extended mind as a version of vehicle externalism. They claim that "under certain circumstances we should see the material vehicles that realize the mind as encompassing not just brain activity, but also that of the body and the material environment" (Clark & Chalmers, 1998, p. 1243). If the extended mind thesis does not also encompass representational contents, then it cannot shed light on the prospect of extended intentionality. So, while the relevant OPAs may qualify as instances of extended cognition, they are arguably not examples of extended source intentionality because only the vehicles of these attitudes extend into the environment during loop-related activity. The source intentionality of OPAs is found in their content, but this content does not extend into the environment if O4 is correct. 18
To counter O4 the defender of P1 must reject the notion that the extended mind thesis pertains exclusively to representational vehicles. Fortunately, a strong case can be made that entities in the material environment can partly constitute both vehicles and content. Holger Lyre (2016) contends that the extended mind thesis entails what he calls active content externalism, given the plausible assumption that content supervenes on vehicles. According to Lyre, the claim that content supervenes on vehicles is a relatively widespread assumption in the philosophy of mind. 19 To say that content supervenes on vehicles is to say that vehicles play a determinative role with respect to content, for determination is the converse concept of supervenience. If vehicles do determine content, and the extended mind thesis is correct in asserting that the vehicles of cognition sometimes include external factors in the environment, then it follows that external factors sometimes partly determine mental content. To illustrate, recall the fictional case of Otto and his notebook. Otto, an Alzheimer's patient, is taken to possess the extended belief that the MOMA is on 53rd Street insofar as his notebook (which contains the directions to the MOMA) is a constitutive component of his cognitive system. Importantly, Otto's extended belief is true only insofar as his notebook contains the correct address of the museum, meaning that if Otto's notebook had included an inaccurate address, then his extended belief would be false. The content of Otto's extended belief state is therefore actively determined by an aspect of the external world; namely, his notebook and the writings therein. Lyre elucidates this point by presenting a twin earth scenario involving Otto: "on twin earth Twotto wants to meet Twinga. But Twotto's notebook mistakenly displays 51st street as MOMA's address. So Twotto won't meet Twinga. This shows [that]….a variation of the external component of the cognitive system, in this case the notebook entry, may change mental content in a behaviorally relevant manner" (Lyre, 2016, p. 7, emphasis added).
18 It is actually unclear whether or not Clark and Chalmers' original discussion of the extended mind is meant to apply to just vehicles. When invoking the fictional case of Otto to argue for extended beliefs, Clark and Chalmers appear to insinuate that the content of Otto's belief state depends partly on external components outside the brain (i.e. on the notebook itself). I elaborate on this point below. 19 Moreover, the assumption has recently been explicitly defended by Gottfried Vosgerau (2018).
While Lyre focuses on the case of Otto and extended dispositional states to support active content externalism, the view is also naturally supported by extended OPAs. Recall the fictional case of Eve, whose occurrent act of judging her parents extends into the environment via the self-stimulating feedback loop that she forms with her notebook. The propositional content of Eve's occurrent judgment is also actively shaped by the written outputs that she jots down onto the page, for these outputs "feed back into Eve, influencing her overall evaluative perspective as she continues to be engaged in the activity" (Colombetti & Roberts, 2015, p. 1258). The cases of Eve and Otto show that O4 is mistaken in suggesting that the extended mind thesis does not pertain to representational contents. By professing that content supervenes on vehicles, the advocate of the self-stimulating loop argument for P1 can uphold that the pertinent OPAs represent instances of extended source intentionality, and not merely extended cognition. Put differently, P1 does follow from the conjunction of SC1 and SP3, assuming the extended mind thesis implies active content externalism.
The aim of this section has been to motivate the self-stimulating loop argument for P1 and defend the argument against anticipated objections. I have shown that P1 is highly plausible given the following philosophical assumptions: (a) some OPAs extend into the environment via self-stimulating feedback loops, (b) all OPAs possess source intentionality, and (c) the extended mind thesis pertains to both vehicles and contents. I turn now to P2 of the extended mind argument against PIT. If it is assumed for the sake of argument that P1 is true (i.e. some source intentional states are extended), and, as has already been established, source intentional states can be extended on PIT only if consciousness can be extended, then one of two things must be the case: either PIT is false or some conscious states are extended. P2 rejects the latter disjunct by affirming that no conscious mental states are extended.
Premise 2: No conscious states are extended
In this section I present and motivate an argument for P2 deriving from Chalmers (2018), which I call the direct access argument against extended consciousness. The argument invokes the idea that cognitive extension requires sensorimotor interaction in addition to functional parity and that consciousness requires direct access to information for global control. If the argument is sound and extended consciousness is impossible, then the extended OPAs involved in self-stimulating feedback loops must be non-conscious, meaning that these propositional attitudes represent instances of source intentionality in the absence of phenomenal consciousness, in which case PIT is false. After unpacking the direct access argument, I briefly consider two possible objections: one based on so-called external circuit cases and the other based on the enactive theory of consciousness. The central problem with these objections, as I explain, is that even if they succeed in undermining the direct access argument (which is highly controversial), they nevertheless fail to undermine P2 under a proper construal of the premise. To effectively counter P2, the objector must show that some currently existing conscious states are extended. In particular, the objector must illustrate that the OPAs featured in self-stimulating loop cases are instances of extended consciousness. However, external circuit cases at best establish the possibility of extended consciousness in high-tech science fiction scenarios, whereas enactivism at best illustrates that perceptual conscious states are extended, not OPAs.
Chalmers' argument against extended consciousness can be represented as follows:
The direct access argument against extended consciousness
QP1 A mental state X is extended only if X involves sensorimotor interaction with external entities.
QP2 A mental state X is conscious only if X has direct access to information for global control.
QC1 Therefore, a mental state X is an extended conscious state only if (i) X features sensorimotor interaction with external entities and (ii) X has direct access to information for global control.
Consider first QP1. The immediate thing to notice about this premise is that it is at odds with the extended mind thesis as previously formulated. Clark and Chalmers (1998), to recall, conceptualize extended mentality in terms of the parity principle. Call this conception the 'original definition of extension':
Original Definition of Extension (ODE)
An external entity X counts as an extended mental state of subject S just in case X plays the right functional role in S's overall cognitive system.
Chalmers (2018) takes issue with ODE on the grounds that it is objectionably uncontroversial or 'too weak to be interesting.' In particular, he concurs with Farkas (2012), who contends that ODE is problematic because it includes external circuit cases within its purview. External circuit cases are instances where parts of the brain are substituted with functionally isomorphic silicon parts located in the outer environment and connected directly to internal neural mechanisms by wiring or radio transmitters. Clark (2009) offers the following example of an external circuit case: "But now imagine a case in which a person (call her Diva) suffers minor brain damage and loses the ability to perform a simple task of arithmetic division using only her neural resources. An external silicon circuit is added that restores the previous functionality. Diva can now divide just as before, only some small part of the work is distributed across the brain and the silicon circuit: a genuinely mental process (division) is supported by a hybrid bio-technological system" (Clark, 2009).
The case of Diva qualifies as an example of extended cognition according to ODE because the external silicon circuit in question satisfies the parity principle. This implication of ODE leads Farkas to conclude that ODE is objectionably weak and needs to be replaced with a more robust version of the extended mind thesis. The extended mind thesis is supposed to be a compelling, controversial hypothesis, not a statement of unquestionable veracity. The problem is that the thesis is rendered utterly uncontroversial when interpreted through the lens of ODE, for even the most ardent opponents of extended cognition (e.g. Adams & Aizawa, 2008; Rupert, 2004) concede that cognitive processes can become extended in high-tech science fiction scenarios like the case of Diva. Chalmers agrees with Farkas that the bar for extended cognition needs to be set higher. He says, "Andy and I could stand our ground and stick with our stipulated definition of the extended mind thesis, so that Adams, Aizawa, Farkas, and Rupert all count as supporters of the thesis. That would be a little akin to the US declaring victory in Vietnam and going home. I think it makes more sense to find a stronger formulation of the extended mind thesis that captures what is really at issue in the debate" (Chalmers, 2018, p. 4).
Chalmers strengthens the extended mind thesis by adding a 'sensorimotor requirement' to the conception of cognitive extension. This requirement affirms that mental states must involve sensorimotor interaction with entities in the external environment to count as extended. Chalmers' revised formulation of the thesis is comprised of both this sensorimotor requirement as well as the original parity principle:
Revised Definition of Extension (RDE)
An external entity X counts as an extended mental state of subject S just in case (a) S interacts with X via perception and action, and (b) X plays the right functional role in S's overall cognitive system.
Chalmers believes that RDE, which is essentially synonymous with QP1, gets the best of both worlds. On the one hand, it accommodates trademark cases of extended mental processes, like the fictional case of Otto (extended dispositional states) and the fictional case of Eve (extended occurrent states). On the other hand, RDE bypasses Farkas' 'too weak to be compelling' criticism of the extended mind. External circuit cases do not involve sensorimotor interaction given that the circuits in question are connected directly to the brain via radio transmitters or wiring. The case of Diva therefore fails to meet the relevant criteria for extended cognition, according to RDE. 20 Crucially, the adoption of RDE over ODE in no way affects the strength of Clark's self-stimulating loop argument for P1. OPAs bound up in self-stimulating loops with external entities satisfy the sensorimotor interaction condition for cognitive extension. For example, consider the previously discussed case of Eve and the self-stimulating loop that she forms with her notepad during the occurrent act of judging her parents. Eve engages with her notepad via perception and action; she looks at the notepad, turns its pages, and writes feverishly inside it. The relationship between Eve and her notepad is markedly different from the relationship between Diva and her external neural prosthetic device. Unlike Diva, Eve qualifies as a cognitive extender from the perspective of RDE because her cognitive artifact necessitates sensorimotor engagement instead of bypassing her perceptual faculties. The upshot of this is that the proponent of the extended mind argument against PIT can invoke RDE to help justify P2 without undercutting the self-stimulating loop argument for P1.
Turn now to QP2, the second key premise of Chalmers' argument against extended consciousness. QP2 states that a mental state is phenomenally conscious only if it has direct access to information for global control. Call this the direct access condition. 21 Chalmers (1996, 1998) originally proposes the direct access condition as a 'pre-experimental bridging principle' that neuroscientists can use to help isolate the neural correlates of consciousness, or the set of mechanisms in the brain jointly sufficient for conscious experience. Discovering the neural correlates of consciousness is a notoriously daunting scientific task since consciousness is not directly observable or measurable. However, Chalmers indicates that consciousness can be indirectly measured by first propounding a functional property associated with conscious experience and then identifying the neural correlates of this functional property. The relevant functional property is regarded as a 'pre-experimental bridging principle' because it must be presupposed prior to any experimental research into the nature of consciousness. Chalmers says that the veracity of this bridging principle is not something that can be determined via scientific experimentation but rather must be established based on phenomenological or conceptual considerations. 22 The direct access condition, according to Chalmers, is strongly supported by phenomenological reflection, for it introspectively seems as if information in the brain is conscious if and only if it is directly accessible to guide different behavioral responses like verbal report and motor action. He elaborates: "A correlation between consciousness and global availability (for short) seems to fit the first-person evidence - that gleaned from our own conscious experience - quite well. When information is present in my consciousness, it is generally reportable, and it can generally be brought to bear in controlling behavior in all sorts of ways. I can talk about it, I can point in the general direction of a stimulus, I can press bars, and so on. Conversely, when we find information that is directly available in this way for report and other aspects of control, it is generally conscious information" (Chalmers, 1998, p. 5).

21 Chalmers (2008) and Clark (2009) consider an alternative version of QP2 which invokes the notion that consciousness requires 'high bandwidth' access to information. Extended consciousness is impossible according to this line of thought because extended processes involving perception and action provide comparatively low-bandwidth access to information. However, as Karina Vold (2015) has contended, perception actually relays external information to the brain at a relatively quick rate and so does not qualify as a low-bandwidth process. Vold explains that the bandwidth of perception is roughly similar to the bandwidth of internal neural mechanisms: "non-neural processes must be constantly reporting information back to the brain…at least as quickly as neural processes can operate" (Vold 2015: 21). In light of this point from Vold, Chalmers goes on to endorse the direct access condition over the high-bandwidth access condition.

22 As Chalmers explains, "Experimental research gives us a lot of information about processing; then we bring in the bridging principles to interpret the experimental results, whatever those results may be. They are the principles by which we make inferences from facts about processing to facts about consciousness, and so they are conceptually prior to the experiments themselves. We cannot actually refine them experimentally (except perhaps by first-person experimentation!), because we have no independent access to the independent variable. Instead, these principles will be based on some combination of (1) conceptual judgments about what counts as a conscious process, and (2) information gleaned from our first-person perspective on our own consciousness" (Chalmers 1998: 3).
Chalmers also emphasizes that the direct access condition is either implicitly or explicitly assumed to be true by many prominent neuroscientists and philosophers of mind working in the field, for the mechanisms that researchers put forward as candidates for the neural correlates of consciousness typically subserve the functional property of global availability. For example, Crick and Koch (1990) famously propose that 40-Hz oscillations in the cerebral cortex constitute a neural correlate of consciousness because these oscillations play a pivotal role in integrating information related to working memory and making it directly accessible for global control. Other notable theories of consciousness which endorse the direct access condition include (but are not limited to) Bernard Baars' (1988) Global workspace theory, Michael Tye's (1995) PANIC theory, and Martha Farah's (1994) High-quality representation theory. The fact that the direct access condition is widely adopted among empirical consciousness researchers and independently substantiated by phenomenological insight lends a high degree of plausibility to the principle, according to Chalmers.

After motivating RDE (i.e. QP1) and advocating for the direct access condition (i.e. QP2), Chalmers proceeds to explain why the conjunction of these two principles entails the impossibility of extended consciousness (i.e. QP3). The fundamental problem is that extended processes cannot supply direct access to information for global control because these processes are mediated by perception and action. Information can only be made directly available for global governance when located in the brain, not in the external environment. This is because information from the external environment must pass through multiple subsystems to reach the internal control system in the head. Chalmers elaborates: "these processes [cognitive processes that are extended via sensorimotor interaction] support information that is only indirectly available for global control: in order to be used in control, it must travel causal pathways from object to eye, from eye to visual cortex, and from visual cortex to the loci of control" (Chalmers, 2018, p. 10). In essence, then, because sensorimotor interaction with the environment is necessary for a conscious mental state to count as extended, but such interaction fails to satisfy the direct access condition, it follows that extended consciousness is impossible.
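The logical shape of this reconstruction can be made explicit. The following is a minimal sketch in Lean 4 that simply checks the validity of the inference; the predicate names (Extended, Conscious, Sensorimotor, DirectAccess) and the bridge premise encoding the claim that sensorimotor-mediated processes lack direct access are illustrative labels of mine, not notation from Chalmers (2018).

```lean
-- Minimal propositional reconstruction of the direct access argument.
-- Predicate names are illustrative placeholders, not the paper's notation.
variable {MentalState : Type}
variable (Extended Conscious Sensorimotor DirectAccess : MentalState → Prop)

/-- QP1: a state is extended only if it involves sensorimotor interaction.
    QP2: a state is conscious only if it has direct access for global control.
    The bridge premise encodes the claim that sensorimotor-mediated processes
    do not supply direct access. The conclusion is that no state is both
    extended and conscious. -/
theorem no_extended_consciousness
    (qp1 : ∀ x, Extended x → Sensorimotor x)
    (qp2 : ∀ x, Conscious x → DirectAccess x)
    (bridge : ∀ x, Sensorimotor x → ¬ DirectAccess x) :
    ∀ x, ¬ (Extended x ∧ Conscious x) :=
  fun x ⟨hE, hC⟩ => bridge x (qp1 x hE) (qp2 x hC)
```

The sketch only confirms that the argument is valid; whether its premises are true is exactly what the remainder of the section interrogates.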
The direct access argument against extended consciousness is philosophically and empirically formidable, but it is not foolproof (as Chalmers himself concedes). There are three possible ways to object to the argument: (a) reject the direct access condition, (b) reject RDE, or (c) reject the idea that mental processes involving sensorimotor interaction with the environment cannot support information that is directly accessible for global control. The soundness of the direct access argument entails the veracity of P2, which is to say, the impossibility of extended consciousness implies that no actual conscious states are extended. However, it is important to recognize that the inverse entailment relation does not hold: the mere possibility of extended consciousness does not entail that some actual conscious states are extended. This is an essential point because it means that an objection to the direct access argument does not necessarily amount to an objection to P2. For example, one might reject RDE on the grounds that external circuit cases illustrate the possibility of extended consciousness. This objection to the direct access argument requires resisting Farkas' case against ODE and Chalmers' contention that RDE is preferable to ODE. Suppose one assumes that sensorimotor interaction is not necessary for extended cognition (i.e. RDE is false) and further stipulates that the brain processes implicated in external circuit cases constitute part of the neural correlates of consciousness. It then becomes natural to regard external circuit cases as instances of extended consciousness, for it seems that substituting the neural correlates of consciousness with functionally isomorphic silicon circuits located in the outer environment would extend conscious states.
There are at least two problems with this objection. First, it is unclear whether consciousness is even preserved in external circuit cases since it is currently unknown whether synthetic consciousness is possible. If it turns out that a cognitive system must be composed of carbon substrate in order to instantiate phenomenal consciousness, then consciousness will not be sustained when carbon-based brain bits are substituted with silicon-based functional duplicates. More to the point though, even if it were granted that external circuit cases demonstrate the possibility of extended consciousness and thereby undercut the direct access argument, such cases would still not compromise P2 given that this premise concerns the actuality of extended consciousness. Thus, even if it is possible for consciousness to extend into the environment in various high-tech science fiction scenarios (e.g. external circuit cases), this provides no reason to believe that any currently existing conscious states are extended.
The most common strategy for defending the actuality of extended consciousness is to endorse enactivism, a theory of perceptual consciousness championed by O'Regan and Noe (2001) and Noe (2004). The enactivist conceptualizes perception as an active process that necessitates sensorimotor interplay with the world. 23 The view stands in contrast to internalist accounts of perception, which hold that perceptual experience is fundamentally passive and relies solely on the instantiation of internal mental representations. Enactivism sees consciousness as a kind of 'doing', and explains the relationship between the perceiving subject and the world in terms of the possession of sensorimotor knowledge. 24 Various proponents of enactivism (e.g. Noe, 2004; Ward, 2012; Pepper, forthcoming) avow that the theory entails extended consciousness. The guiding notion is that because perceptual consciousness implicates active engagement with elements in the external environment, these external elements form part of the physical substrate that realizes conscious experience. This leads to the following possible objection to P2:

Objection 5 (O5) P2 is false because the theory of enactivism is true.
The immediate problem with O5 is that enactivism appears to be at odds with the direct access condition for consciousness. As previously discussed, cognitive processes involving sensorimotor interaction cannot support information directly accessible for global control given that information from the environment must pass through two separate processing stages to be used by internal control mechanisms: a stage of processing by the perceptual system, and a second stage of processing to be globally broadcast. Enactivism is thus arguably false because the direct access condition is true. In response to this rebuttal the enactivist must either reject the direct access condition or reject the proposition that enactivism is incompatible with the direct access condition. Kiverstein and Kirchhoff (2019) appear to pursue the latter strategy in their book Extended Consciousness and Predictive Processing: A Third Wave View. 25

24 As Adrian Downey elaborates, "there is a law-like relation between an organism's movements and its visual stimulation - when an organism moves closer to an object the object looms in the visual field, when it gets further away the object appears smaller, and so on. On sensorimotor enactivism an organism is thought capable of perceiving only when it understands this relation between sensory stimulation and movement" (Downey 2017: 2). The relevant sensorimotor knowledge is varied, as each sensory modality is subject to its own particular law-like relations.

25 Kiverstein and Kirchhoff observe that Chalmers' direct access argument against extended consciousness derives from the particular conception of 'sensorimotor interaction' at play in the Otto notebook case, and specifically, from the role that perception and action play in the retrieval of information from the notebook. They agree that this extended mind-based conception of 'sensorimotor interaction' is incompatible with the direct access condition but allege that the enactivist operates with a more sophisticated version of the concept which is mutually consistent with the direct access condition. The kind of sensorimotor interaction pertinent to enactivism, according to Kiverstein and Kirchhoff, involves what they call 'dynamic entanglement' and 'unique temporal signature.' Dynamic entanglement denotes a reciprocal causal relationship between the agent's sensory inputs and her motor outputs, while unique temporal signature refers to the idea that brain states must unfold over time to give rise to consciousness, and that this temporal unfolding requires environmental causes. They argue that conceptually engineering these two principles into the enactivist's understanding of sensorimotor interaction allows the enactivist to avoid the direct access concern and infer the existence of extended consciousness.
The main problem with O5 is that enactivism pertains to perceptual states and not propositional attitudes. Even if enactivism is the correct account of perceptual experience and entails the actuality of extended consciousness (thereby disproving the direct access argument), the view still does not compromise P2 under a proper construal of the premise. The self-stimulating loop argument for P1 concerns extended propositional attitudes; specifically, it holds that OPAs are source intentional states that extend into the environment when coupled with external entities to form a self-stimulating feedback loop. Given this premise, it follows that the kinds of conscious mental states pertinent to P2 are OPAs and that objecting to P2 requires demonstrating that OPAs implicated in loop-related activity are instances of extended consciousness. Establishing that the extended consciousness thesis applies to some existing class of mental states which are not OPAs does not help the objector dismantle the extended mind argument against PIT. As long as P1 is true, and the relevant OPAs are not instances of extended consciousness, the defender of the argument can maintain that PIT leads to the aforementioned contradiction and is therefore false. The scope of P2 should therefore be restricted in the following manner such that it only targets conscious propositional attitudes and not all conscious mental states:

P2* No conscious propositional attitudes are extended.
This modification of P2 clarifies why an enactivist defense of extended consciousness fails to subvert the premise. The fundamental issue is that enactivism is a theory of perceptual consciousness and perceptual states are not types of propositional attitudes. There are, to be sure, certain versions of representationalism which regard perceptual states as propositional attitudes, but enactivism is traditionally put forward as a competitor to these representationalist theories of mind and denies that perceptual states have propositional content. 26 Thus, even if enactivism implies that perceptual conscious states are extended, it supplies no reason to think that any conscious OPAs extend into the environment.
Conclusion
The extended mind argument against PIT is motivated by what I called the 'bipartite extension intuition' (i.e. BEI), the claim that some source intentional states extend into the environment, but no conscious states do. Put another way, BEI holds that the extended mind thesis is true but the extended consciousness thesis is false, where 'the extended mind thesis' is understood to encompass some OPAs and 'the extended consciousness thesis' is stipulated to pertain exclusively to OPAs. BEI is inconsistent with PIT because the extended mind thesis entails the extended consciousness thesis if PIT is true. This paper has offered a philosophical roadmap for how BEI can be justified in a non-question-begging manner. The spirit of my discussion has not been to prove with certainty that the argument against PIT is sound but rather to demonstrate that the argument is creditable given certain philosophical assumptions about the nature of consciousness and extended cognition. There are undoubtedly other ways to validate P1 and P2 (i.e. BEI) that have not been considered here. It is possible that both Clark's self-stimulating loop argument and Chalmers' direct access argument are flawed but that P1 and P2 are nevertheless true, for the truth of the premises might be grounded in reasons that have nothing to do with the existence of self-stimulating feedback loops or the notion that consciousness requires direct access to information for global control.
More broadly, I think that instead of drawing upon extended mental states and the extended mind literature to argue against PIT, one might alternatively seek to undermine PIT by drawing upon collective mental states and the collective intentionality literature (e.g. Gilbert, 1989;Pettit, 2003;Tuomela, 1992). Consider how a group of scientists can, throughout time, collectively understand some natural phenomenon that no individual scientist can understand. This case appears to pick out an example of distributed cognition which involves the tokening of a collective mental state, namely, a collective attitude of understanding. Moreover, the collective propositional attitude in question seems to be occurrent rather than dispositional since the process of reaching a shared scientific understanding of any natural phenomenon is one that actively unfolds over time as empirical experiments are conducted and the different stages of the scientific method are completed. If one additionally assumes that there is no phenomenally conscious group agent present in this scenario that is the bearer of the collective scientific understanding, then the scenario arguably represents an instance of a non-conscious OPA (i.e. an instance of source intentionality in the absence of phenomenal intentionality). The argument from collective intentionality that I am envisioning here is structurally similar to the extended mind argument against PIT and might be formally devised as follows:
The collective intentionality argument against PIT
P1 Collective mental states exist.
P2 Some collective mental states are OPAs.
P3 All OPAs possess source intentionality.
C1 Therefore, some collective mental states possess source intentionality.
P4 No collective mental states are phenomenally conscious.
C2 Therefore, some source intentional states lack phenomenal consciousness (meaning that PIT is false).
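As with the direct access argument above, the validity of this schema can be checked mechanically. The Lean 4 sketch below uses placeholder predicates of my own (Collective, OPA, SourceIntentional, Conscious), not notation from the text, and simply verifies that C2 follows from P2, P3, and P4 (P1's existence claim is carried by the existential premise P2).

```lean
-- Minimal propositional reconstruction of the collective intentionality argument.
-- Predicate names are illustrative placeholders, not the paper's notation.
variable {State : Type}
variable (Collective OPA SourceIntentional Conscious : State → Prop)

/-- P2: some collective mental states are OPAs. P3: all OPAs possess source
    intentionality. P4: no collective mental states are phenomenally conscious.
    C2: some source intentional state lacks phenomenal consciousness. -/
theorem c2_nonconscious_source_intentionality
    (p2 : ∃ x, Collective x ∧ OPA x)
    (p3 : ∀ x, OPA x → SourceIntentional x)
    (p4 : ∀ x, Collective x → ¬ Conscious x) :
    ∃ x, SourceIntentional x ∧ ¬ Conscious x :=
  p2.elim fun x h => ⟨x, p3 x h.2, p4 x h.1⟩
```

Again, the formal check settles only validity; the philosophical work lies in defending P2 and P4 without begging the question against PIT.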
Determining whether this argument is tenable would mandate a detailed investigation into the collective intentionality and group agency literature. I raise the argument here only to illustrate that the extended mind argument represents one species of a more general argumentative strategy against PIT. This general argumentative strategy consists in identifying a counterexample to PIT by demonstrating that some class of source intentional states is devoid of phenomenal consciousness (and therefore phenomenal intentionality). The extended mind argument avers that certain extended states (i.e. extended OPAs) represent counterexamples to PIT, whereas the collective intentionality argument regards certain collective states (i.e. collective OPAs) as counterexamples. The key to making this general argumentative strategy persuasive is to avoid begging the question against PIT. The challenge is to show that there is a class of non-conscious, source intentional states without presupposing the falsity of PIT. As discussed in sections II and III, the argument presented here does not presuppose the falsity of PIT and so bypasses any question-begging concern. If the extended mind argument against PIT is sound, then the prevailing sentiment that intentionality is grounded in consciousness is mistaken.
Analysis of the $D\bar{D}^*K$ system with QCD sum rules
In this article, we construct the color singlet-singlet-singlet interpolating current with $I\left(J^P\right)=\frac{3}{2}\left(1^-\right)$ to study the $D\bar{D}^*K$ system through QCD sum rules approach. In calculations, we consider the contributions of the vacuum condensates up to dimension-16 and employ the formula $\mu=\sqrt{M_{X/Y/Z}^{2}-\left(2{\mathbb{M}}_{c}\right)^{2}}$ to choose the optimal energy scale of the QCD spectral density. The numerical result $M_Z=4.71_{-0.11}^{+0.19}\,\rm{GeV}$ indicates that there exists a resonance state $Z$ lying above the $D\bar{D}^*K$ threshold to saturate the QCD sum rules. This resonance state $Z$ may be found by focusing on the channel $J/\psi \pi K$ of the decay $B\longrightarrow J/\psi \pi \pi K$ in the future.
Introduction
Since the observation of the X(3872) by the Belle collaboration in 2003 [1], more and more exotic hadrons have been observed and confirmed experimentally, such as the charmonium-like X, Y, Z states, hidden-charm pentaquarks, etc [2,3,4]. Those exotic hadron states, which cannot be interpreted as quark-antiquark mesons or three-quark baryons in the naive quark model [5], are good candidates for multi-quark states [6,7]. The multi-quark states are color-neutral objects because of color confinement, and provide an important platform to explore the low energy behaviors of QCD, as no free particles carrying net color charges have ever been experimentally observed. Compared to the conventional hadrons, the dynamics of the multi-quark states is poorly understood and calls for more work.
Some exotic hadrons can be understood as hadronic molecular states [8], which are analogous to the deuteron as a loosely bound state of the proton and neutron. The most impressive example is the original exotic state, the X(3872), which has been studied as a $D\bar{D}^{*}$ molecular state by many theoretical groups [9,10]. Another impressive example is the $P_c(4380)$ and $P_c(4450)$ pentaquark states observed by the LHCb collaboration in 2015, which are good candidates for the $\bar{D}\Sigma_c^{*}$, $\bar{D}^{*}\Sigma_c$, $\bar{D}^{*}\Sigma_c^{*}$ molecular states [8]. In addition to the meson-meson type and meson-baryon type molecular states, there may also exist meson-meson-meson type molecular states; in other words, there may exist three-meson hadronic molecules.
In Refs. [11,12], the authors explore the possible existence of the three-meson $D\bar{D}^{*}K$ molecular system according to the attractive interactions of the two-body subsystems $DK$, $\bar{D}K$, $D^{*}K$, $\bar{D}^{*}K$ and $D\bar{D}^{*}$ with the Born-Oppenheimer approximation and the fixed center approximation, respectively. In this article, we study the $D\bar{D}^{*}K$ system with QCD sum rules.
The QCD sum rules method is a powerful tool for studying exotic hadrons [13,14,15,16], and has given many successful descriptions; for example, the mass and width of the $Z_c(3900)$ have been successfully reproduced as an axialvector tetraquark state [17,18]. In QCD sum rules, we expand the time-ordered product of the currents into a series of quark and gluon condensates via the operator product expansion method. These quark and gluon condensates parameterize the non-perturbative properties of the QCD vacuum. According to the quark-hadron duality, copious information about the hadronic parameters can be obtained on the phenomenological side [19,20].
In this article, the color singlet-singlet-singlet interpolating current with $I(J^P)=\frac{3}{2}(1^{-})$ is constructed to study the $D\bar{D}^{*}K$ system. In calculations, the contributions of the vacuum condensates are considered up to dimension-16 in the operator product expansion, and the energy-scale formula $\mu=\sqrt{M_{X/Y/Z}^{2}-(2\mathbb{M}_{c})^{2}}$ is used to seek the ideal energy scale of the QCD spectral density.
The rest of this article is arranged as follows: in Sect.2, we derive the QCD sum rules for the mass and pole residue of the DD * K state; in Sect.3, we present the numerical results and discussions; Sect.4 is reserved for our conclusion.
2 QCD sum rules for the $D\bar{D}^{*}K$ state

In QCD sum rules, we consider the two-point correlation function, where the m, n, k are color indexes. The color singlet-singlet-singlet current operator $J_{\mu}(x)$ has the same quantum numbers $I(J^P)=\frac{3}{2}(1^{-})$ as the $D\bar{D}^{*}K$ system. On the phenomenological side, a complete set of intermediate hadronic states, which has the same quantum numbers as the current operator $J_{\mu}(x)$, is inserted into the correlation function $\Pi_{\mu\nu}(p)$ to obtain the hadronic representation [19,20]. We isolate the ground state contribution $Z$ from the pole term and obtain the result, where the pole residue $\lambda_{Z}$ is defined by $\langle 0|J_{\mu}(0)|Z(p)\rangle=\lambda_{Z}\varepsilon_{\mu}$, and $\varepsilon_{\mu}$ is the polarization vector of the vector hexaquark state $Z$. At the quark level, we calculate the correlation function $\Pi_{\mu\nu}(p)$ via the operator product expansion method in perturbative QCD. The u, d, s and c quark fields are contracted with the Wick theorem, and the resulting expression involves $U_{ij}(x)$, $D_{ij}(x)$, $S_{ij}(x)$ and $C_{ij}(x)$, the full u, d, s and c quark propagators, respectively. We give the full quark propagators explicitly (the $P_{ij}(x)$ denotes the $U_{ij}(x)$ or $D_{ij}(x)$), with $t^{n}=\frac{\lambda^{n}}{2}$, where $\lambda^{n}$ is a Gell-Mann matrix [20]. We compute the integrals in the coordinate space for the light quark propagators and in the momentum space for the charm quark propagators, and obtain the QCD spectral density $\rho(s)$ by taking the imaginary part of the correlation function, $\rho(s)=\lim_{\epsilon\to 0}\frac{{\rm Im}\,\Pi(s+i\epsilon)}{\pi}$ [17]. In the operator product expansion, we take into account the contributions of vacuum condensates up to dimension-16, and keep the terms which are linear in the strange quark mass $m_{s}$. We take the truncation $k\leq 1$ for the operators of the order $\mathcal{O}(\alpha_{s}^{k})$ in a consistent way and discard the perturbative corrections. Furthermore, the condensates $\langle\bar{q}q\rangle\langle\frac{\alpha_{s}GG}{\pi}\rangle$, $\langle\bar{q}q\rangle^{2}\langle\frac{\alpha_{s}GG}{\pi}\rangle$ and $\langle\bar{q}q\rangle^{3}\langle\frac{\alpha_{s}GG}{\pi}\rangle$ play a minor role and are neglected. According to the quark-hadron duality, we match the correlation function $\Pi(p^{2})$ obtained on the hadron side and at the quark level below the continuum threshold $s_{0}$, and perform the Borel transform with respect to the variable $P^{2}=-p^{2}$ to obtain the QCD sum rule, in which the subscripts 0, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 16 of the QCD spectral density denote the dimensions of the vacuum condensates and $T^{2}$ is the Borel parameter; the lengthy and complicated expressions are omitted for simplicity. However, interested readers can obtain the explicit expressions of the QCD spectral densities from us by email. We derive Eq. (9) with respect to $\frac{1}{T^{2}}$, and eliminate the pole residue $\lambda_{Z}$ to extract the QCD sum rule for the mass.
For the hadron mass, it is independent of the energy scale because of its observability. However, in the calculations, the perturbative corrections are neglected, the operators of the orders $\mathcal{O}(\alpha_{s}^{k})$ with $k>1$ or of dimensions $n>16$ are discarded, and some higher dimensional vacuum condensates are factorized into lower dimensional ones; therefore, the corresponding energy-scale dependence is modified. We have to take into account the energy-scale dependence of the QCD sum rules.
In Refs. [17,22,23,24], the energy-scale dependence of the QCD sum rules is studied in detail for the hidden-charm tetraquark states and molecular states, and an energy-scale formula $\mu=\sqrt{M_{X/Y/Z}^{2}-(2\mathbb{M}_{c})^{2}}$ is proposed to determine the optimal energy scale. This energy-scale formula enhances the pole contribution remarkably, improves the convergent behavior of the operator product expansion, and works well for the exotic hadron states. In this article, we explore the $D\bar{D}^{*}K$ state $Z$ by constructing the color singlet-singlet-singlet type current based on the color-singlet $q\bar{q}$ substructure. For the two-meson molecular states, the basic constituent is also the color-singlet $q\bar{q}$ substructure [24]. Hence, the previous works can be extended to study the $D\bar{D}^{*}K$ state. We employ the energy-scale formula $\mu=\sqrt{M_{X/Y/Z}^{2}-(2\mathbb{M}_{c})^{2}}$ with the updated value of the effective c-quark mass $\mathbb{M}_{c}=1.85\,\rm{GeV}$ to choose the ideal energy scale of the QCD spectral density. At the present time, no candidate has been observed experimentally for the hexaquark state $Z$ with the symbolic quark content $c\bar{c}d\bar{u}s\bar{u}$. However, in the scenario of four-quark states, the $Z_c(3900)$ and $Z(4430)$ can be tentatively assigned to be the ground state and the first radial excited state of the axialvector four-quark states, respectively [25], while the $X(3915)$ and $X(4500)$ can be tentatively assigned to be the ground state and the first radial excited state of the scalar four-quark states, respectively [26]. By comparison, the energy gap between the ground state and the first radial excited state of the hidden-charm four-quark states is about 0.6 GeV. Here, we suppose the energy gap between the ground state and the first radial excited state of the hidden-charm six-quark states is also about 0.6 GeV, and take the relation $\sqrt{s_{0}}=M_{Z}+(0.4-0.6)\,\rm{GeV}$ as a constraint to obey.
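As a quick numerical illustration of how the energy-scale formula is applied (using the predicted mass quoted in the abstract and $\mathbb{M}_{c}=1.85$ GeV; the snippet is illustrative only and is not part of the original analysis):

```python
import math

M_c = 1.85   # effective c-quark mass in GeV, as quoted in the text
M_Z = 4.71   # central value of the predicted hexaquark mass in GeV

# Energy-scale formula mu = sqrt(M_Z^2 - (2*M_c)^2) used to fix the
# scale of the QCD spectral density.
mu = math.sqrt(M_Z**2 - (2 * M_c)**2)
print(f"optimal energy scale mu ~ {mu:.2f} GeV")  # ~2.91 GeV for these inputs

# Continuum-threshold constraint sqrt(s0) = M_Z + (0.4 - 0.6) GeV.
sqrt_s0_low, sqrt_s0_high = M_Z + 0.4, M_Z + 0.6
print(f"sqrt(s0) window: {sqrt_s0_low:.2f} - {sqrt_s0_high:.2f} GeV")
```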
In Eq. (11), there are two free parameters: the Borel parameter $T^{2}$ and the continuum threshold parameter $s_{0}$. The extracted hadron mass is a function of the Borel parameter $T^{2}$ and the continuum threshold parameter $s_{0}$. To obtain a reliable mass sum rule analysis, we obey two criteria in choosing suitable working ranges for the two free parameters. One criterion is pole dominance on the phenomenological side, which requires the pole contribution (PC) to be about (40-60)%; the PC is defined, as usual, as the ratio of the Borel-weighted spectral integral below $s_{0}$ to the full integral (an illustrative numerical sketch of this criterion is given after this paragraph). The other criterion is the convergence of the operator product expansion. To judge the convergence, we compute the contributions $D(n)$ of the vacuum condensates in the operator product expansion, where $n$ is the dimension of the vacuum condensates. In Fig. 1 (not reproduced here), the predicted mass is seen to lie above the $D\bar{D}^{*}K$ threshold, which indicates that the $Z$ is probably a resonance state. For some exotic resonances, the authors have combined the effective range expansion, unitarity, analyticity and compositeness coefficients to probe their inner structure in Refs. [27,28]. Their studies indicated that the underlying two-particle component (in the present case, the corresponding three-particle component) plays an important or minor role; in other words, there are other hadronic degrees of freedom inside the corresponding resonance. Hence, a resonance state embodies the net effect. Considering the conservation of angular momentum, parity and isospin, we list the possible hadronic decay patterns of the hexaquark state $Z$: $Z\to J/\psi\pi K$, $\eta_{c}\,\rho(770)K$, $D\bar{D}^{*}K$.
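To make the pole-dominance criterion concrete, the sketch below scans the pole contribution over a few Borel parameters. The spectral density used here is a toy placeholder (the paper's actual $\rho(s)$ is not reproduced in the text), the threshold is approximated by the sum of the D, $\bar{D}^{*}$ and K masses, and all numbers illustrate the procedure rather than the authors' results.

```python
import numpy as np

S_MIN = (1.87 + 2.01 + 0.49) ** 2   # approx (m_D + m_D* + m_K)^2 in GeV^2, illustrative

def rho_toy(s):
    """Toy spectral density; a placeholder for the paper's rho(s)."""
    return np.where(s > S_MIN, (s - S_MIN) ** 2, 0.0)

def pole_contribution(T2, s0, s_max=200.0, n=20000):
    """PC = Borel-weighted spectral integral up to s0, divided by the same
    integral extended to (numerically) infinity."""
    s = np.linspace(S_MIN, s_max, n)
    w = rho_toy(s) * np.exp(-s / T2)
    total = np.trapz(w, s)
    pole = np.trapz(np.where(s <= s0, w, 0.0), s)
    return pole / total

s0 = (4.71 + 0.5) ** 2              # sqrt(s0) = M_Z + 0.5 GeV, mid of the quoted window
for T2 in (2.5, 3.0, 3.5):          # Borel parameter in GeV^2, illustrative values
    print(f"T^2 = {T2:.1f} GeV^2 -> PC = {pole_contribution(T2, s0):.2f}")
```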
To search for the X(3872), Belle, BaBar and LHCb have collected numerous data on the decay $B\to J/\psi\pi\pi K$. Thus, the hexaquark state $Z$ may be found by focusing on the easiest channel, $J/\psi\pi K$, in experiment.
Conclusion
In this article, we construct the color singlet-singlet-singlet interpolating current operator with $I(J^P)=\frac{3}{2}(1^{-})$ to study the $D\bar{D}^{*}K$ system through the QCD sum rules approach, taking into account the contributions of the vacuum condensates up to dimension-16 in the operator product expansion. In the numerical calculations, we saturate the hadron side of the QCD sum rule with a hexaquark molecular state, employ the energy-scale formula $\mu=\sqrt{M_{X/Y/Z}^{2}-(2\mathbb{M}_{c})^{2}}$ to take the optimal energy scale of the QCD spectral density, and seek the ideal Borel parameter $T^{2}$ and continuum threshold $s_{0}$ by obeying the two criteria of QCD sum rules for multi-quark states. Finally, we obtain the mass and pole residue of the corresponding hexaquark molecular state $Z$. The predicted mass, $M_Z=4.71^{+0.19}_{-0.11}\,\rm{GeV}$, which lies above the $D\bar{D}^{*}K$ threshold, indicates that the $Z$ is probably a resonance state. This resonance state $Z$ may be found by focusing on the channel $J/\psi\pi K$ of the decay $B\to J/\psi\pi\pi K$ in the future.
"Chemistry",
"Physics",
"Computer Science",
"Art",
"Mathematics"
] |
Differences in Gut Virome Related to Barrett Esophagus and Esophageal Adenocarcinoma
The relationship between viruses (dominated by bacteriophages or phages) and lower gastrointestinal (GI) tract diseases has been investigated, whereas the relationship between gut bacteriophages and upper GI tract diseases, such as esophageal diseases, which mainly include Barrett's esophagus (BE) and esophageal adenocarcinoma (EAC), remains poorly described. This study aimed to reveal the gut bacteriophage community and its behavior in the progression of esophageal diseases. In total, we analyzed the gut phage community of sixteen samples: ten from patients with esophageal diseases (six BE patients and four EAC patients) and six from healthy controls. Differences were found in the community composition of abundant and rare bacteriophages among the three groups. In addition, auxiliary metabolic genes (AMGs) related to bacterial exotoxins and virulence factors such as lipopolysaccharide (LPS) biosynthesis proteins were found to be more abundant in the genomes of rare phages from BE and EAC samples compared to the controls. These results suggest that the community composition of gut phages and the functional traits encoded by them were different in two stages of esophageal diseases. However, the findings from this study need to be validated with larger sample sizes in the future.
Introduction
Barrett's esophagus (BE) is the only known precursor for the development of esophageal adenocarcinoma (EAC), which has a five-year survival rate of less than 20%. The incidence of these diseases is on the rise globally [1,2]. Early diagnosis of patients at risk could prevent the progression of BE to EAC, and effectively reduce the development of EAC. However, as only 0.3-0.5% of BE patients develop EAC, endoscopic biopsy surveillance, while linked to higher survival rates, is only recommended for at-risk patients [3]. In addition, endoscopies are often discomforting, and sometimes lead to inconclusive results [4]. Thus, noninvasive diagnostics with higher accuracy are sought after. The human gut is home to trillions of microorganisms, including bacteria, viruses, fungi, and protozoa. These microorganisms and their human host maintain a symbiotic relationship, in which the host provides a nutrient-rich habitat, and the microbiota supplies key metabolic capabilities, protects against pathogen invasion, and trains the immune system [5]. In addition, an imbalance in gut microbiota, termed dysbiosis, is associated with several human diseases or conditions.

This study aimed to investigate the alteration of gut phages in different stages of esophageal diseases. For this purpose, we (1) determined the composition of the isolated bacteriophage community in BE patients, EAC patients, and healthy controls (CT); (2) predicted the bacterial host ranges of the gut phages in all three groups; (3) identified the metabolic pathways encoded by these phages.
Sample Collection
Sixteen samples were selected from the German BarrettNET registry, including six BE patients, four EAC patients, and six CT, for virome analysis. The clinical data are shown in Table S1, and additional information can be found in a previous study [42]. Stool samples were collected using Stool Collection Tubes with Stool DNA Stabilizer (STRATEC Molecular GmbH, Berlin, Germany). The sampling procedure was conducted mostly at home or in the clinic if the patients were on outpatient visits. Samples were shipped to the clinic human sample biobank and stored at −80 °C until further virome DNA extraction.
Virome DNA Extraction
The stool samples were vortexed vigorously for 4 h at 4 °C, then centrifuged at 4,000× g for 30 min to collect the supernatant. The supernatant was passed through 0.22 µm filters (PES Membrane, Lot No. ROCB29300, Merck Millipore, Co., Cork, Ireland) to remove bacterial-associated particles, and the volume was subsequently concentrated to less than 50 µL by Amicon® Ultra Centrifugal Filters (10 kDa, Lot No. R9EA18187, Merck Millipore, Co., Cork, Ireland). Then 1/5 volume of chloroform was mixed with the samples and centrifuged at 14,000× g for 3 min, retaining the upper phase, followed by a DNase I (1 U/µL, Lot No. 1158858, Invitrogen, Carlsbad, CA, USA) treatment for 1 h at 37 °C to remove non-phage DNA. DNase I was inactivated by adding EDTA (0.1 M). Subsequently, lysis buffer (700 µL KOH stock (0.43 g/10 mL), 430 µL DTT stock (0.8 g/10 mL), and 370 µL H2O, pH = 12) was added to the reaction and incubated at room temperature for 10 min, followed by 2 h incubation at −80 °C and 5 min at 55 °C. Lysed VLPs were then treated for 30 min at 55 °C with Proteinase K (20 mg/mL, Lot No. 1112907, Invitrogen, Carlsbad, CA, USA) to digest the remaining viral capsids and extract the virome DNA. AMPure beads (Agencourt, Beckman Coulter, Brea, CA, USA) were added to the extracted DNA and incubated for 15 min at room temperature. DNA was eluted from the beads with 35 µL Tris buffer (10 mM, pH = 9.8) and stored at −80 °C until it was sent for sequencing. Sequencing was performed on an Illumina HiSeq-PE150 platform.
Bioinformatic Analysis
On average, 9,358,935 ± 169,389 reads per sample were generated. Raw reads were processed with fastp (v0.20.1) [43] to remove adaptors and low-quality bases. Remaining reads were deduplicated using dedupe.sh from the bbmap suite (v38.76) (https://sourceforge.net/projects/bbmap/; accessed on 29 January 2020). Then the obtained reads were assembled into contigs using metaSPAdes (v3.14.0) [44] with default parameters, retaining only contigs longer than 1 kb. Redundant contigs were removed by dedupe.sh. Remaining contigs were used to predict viral sequences by the combination of VirSorter (v1.0.6) [45], CAT (v5.0.4) [46] and DeepVirFinder (v1.0) [47]. Contigs predicted as category 1 and 2 by VirSorter, or predicted as viruses by CAT, were classified as viruses. Contigs were also classified as viruses if they were predicted as category 3 by VirSorter or could not be classified taxonomically by CAT but were predicted as a virus by DeepVirFinder with q value < 0.01. Predicted viral contigs were clustered using CD-HIT [48] if they shared >95% identity over 80% of the contig length, and the longest contig in each cluster was retained as a representative for downstream analysis.
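For orientation, a minimal sketch of the first steps of this pipeline is given below. The file names, output directories, and command-line flags are illustrative (they are commonly used invocations of these tools, not necessarily the authors' exact parameters), and the virus-prediction step is only indicated by a comment because its exact invocation is not given in the text.

```python
import subprocess

def run(cmd):
    """Run one pipeline step and fail loudly if it errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Adapter/quality trimming of paired-end reads with fastp.
run(["fastp", "-i", "raw_R1.fastq.gz", "-I", "raw_R2.fastq.gz",
     "-o", "trim_R1.fastq.gz", "-O", "trim_R2.fastq.gz"])

# 2) Read deduplication with dedupe.sh (bbmap suite) would run here;
#    its paired-read options are omitted in this sketch.

# 3) Assembly with metaSPAdes using default parameters.
run(["metaspades.py", "-1", "trim_R1.fastq.gz", "-2", "trim_R2.fastq.gz", "-o", "asm"])

# 4) Keep only contigs >= 1 kb (simple FASTA length filter).
def filter_fasta(path_in, path_out, min_len=1000):
    with open(path_in) as fin, open(path_out, "w") as fout:
        header, seq = None, []
        def flush():
            if header and sum(len(chunk) for chunk in seq) >= min_len:
                fout.write(header + "\n" + "".join(seq) + "\n")
        for line in fin:
            if line.startswith(">"):
                flush()
                header, seq = line.strip(), []
            else:
                seq.append(line.strip())
        flush()

filter_fasta("asm/contigs.fasta", "contigs_1kb.fasta")

# 5) Virus prediction (VirSorter + CAT + DeepVirFinder) goes here.

# 6) Cluster predicted viral contigs at 95% identity over 80% of the length
#    (cd-hit-est with the conventional flags for these thresholds).
run(["cd-hit-est", "-i", "viral_contigs.fasta", "-o", "viral_contigs_nr.fasta",
     "-c", "0.95", "-aS", "0.8"])
```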
For each representative viral contig, ORFs were predicted using Prodigal (v2.6.3) [49] and provided to vConTACT (v2.0) [50] for taxonomy annotation. For contigs that could not be assigned a taxonomy by vConTACT, CAT annotations were used. Otherwise, order- and family-level taxonomic annotations were predicted using the Demovir script (https://github.com/feargalr/Demovir; accessed on 27 July 2019) with default parameters and database. To calculate the relative abundances of viruses in each sample, clean reads from each sample were mapped to viral contigs using bbmap.sh from the bbmap suite (v38.76). CoverM (v0.4.0) (https://github.com/wwood/CoverM; accessed on 20 February 2020) was used to estimate contig coverage. featureCounts (v2.0.0) [51] was then used to estimate the number of reads that mapped to each gene. Viral proteins predicted in the previous step were fed into VIBRANT (v1.2.1) [52] to identify lytic and lysogenic phages, and their function was annotated using protein mode with default parameters. VIBRANT annotates viral proteins by searching them against the KEGG [53], VOGDB and PFAM databases, which include function annotation of protein sequences and AMGs. The virus (phage)-bacteria (host) interactions were predicted by VirHostMatcher-Net, which is a method based on the combination of features: virus-virus similarity, virus-host alignment-free similarity, virus-host shared CRISPR spacers and virus-host alignment-based matches [54]. Bacterial hosts were predicted for contigs with a length greater than 10 kb and a score higher than 95% according to VirHostMatcher-Net.
Statistical Analysis
Alpha diversity of the phage community was measured using qiime2 (https://qiime2.org; accessed on 29 January 2020). Principal Coordinates Analysis (PCoA) based on Bray-Curtis dissimilarities was performed using R (v3.2, package vegan, The R Foundation, Vienna, Austria, 2016). Permutational Multivariate Analysis of Variance (PERMANOVA) was used to test for significant differences. All other statistical analyses were conducted with the tools described above.
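The ordination step can be illustrated with a short Python sketch (the study itself used the R vegan package; this is only an equivalent classical-MDS computation on a toy abundance table with invented sample and contig values).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy relative-abundance table: rows = samples, columns = viral contigs.
rng = np.random.default_rng(0)
abundance = rng.random((6, 40))
abundance /= abundance.sum(axis=1, keepdims=True)

# Bray-Curtis dissimilarity matrix between samples.
D = squareform(pdist(abundance, metric="braycurtis"))

# Classical PCoA: double-center the squared distance matrix, eigendecompose it,
# and scale the eigenvectors by the square roots of the positive eigenvalues.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
positive = eigvals > 1e-12
coords = eigvecs[:, positive] * np.sqrt(eigvals[positive])

print("PCoA axis 1-2 coordinates:\n", coords[:, :2].round(3))
```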
Gut Bacteriophage Community Structure Differed for BE and EAC Compared to Their Healthy Counterparts
On average, 43 ± 2% of all reads generated through sequencing were from viruses. In total, 854 ± 50, 1136 ± 19, and 920 ± 33 viral contigs were obtained from sequences identified as viruses for CT, BE, and EAC, respectively. On average, over 95% of the sequences from these contigs were assigned to phages. The order Caudovirales, which included Herelleviridae, Myoviridae, Podoviridae, Siphoviridae, and unclassified Caudovirales, comprised the most abundant phages, accounting for more than 50% of total sequences in all three groups (Figure 1a, Figure S1). Among those phage families, the relative abundance of Herelleviridae was lower than 1% in all three groups, and the relative abundance of Myoviridae (1.12% in EAC) showed great variation within each group (p > 0.05). Some viral contigs were assigned to other phage or viral families, including Inoviridae, Microviridae, Tectiviridae, Herpesvirales, Marseilleviridae, and Pithoviridae, with a relative abundance of less than 1%. Meanwhile, large differences in specific viral taxa between individuals were observed within the same group, which may be attributable to multiple factors such as age, gender, diet, or drug usage (Table S1). We next determined the dominant phage replication cycle (lytic versus lysogenic cycle). On average, EAC samples had more temperate phages (lysogenic cycle) than BE and CT (p > 0.05): 11.97% ± 2.43% in CT, 13.47% ± 1.15% in BE, and 19.13% ± 4.90% in EAC (Figure S2).

Figure 1. (b) The percentage of predicted bacterial hosts in CT, BE, and EAC. The inner circle represents bacterial hosts at the phylum level, and the outer circle represents bacterial hosts at the class level. "Low quality" represents bacterial hosts predicted from contigs with a length lower than 10 kb and a score lower than 95%; (c) viral alpha diversity, including richness (Ace) and diversity (Shannon), in samples from CT, BE, and EAC; (d) PCoA plot of the viral community composition based on the Bray-Curtis distances in CT, BE, and EAC samples. CT represents stool samples from healthy controls; BE represents stool samples from Barrett esophagus patients; EAC represents stool samples from esophageal adenocarcinoma patients. Error bars indicate the average ± SE. Statistical significance was determined by Kruskal-Wallis with Dunn's post hoc test; an asterisk indicates p < 0.05.

We next predicted the bacterial host range of the viral contigs from the different groups in the study (Figure 1b). We observed that the bacterial hosts mainly spanned the phyla Actinobacteria, Bacteroidetes, Firmicutes, and Proteobacteria, which were common across all three groups. In addition, we found that less than 0.1% of the phages were predicted to infect Fusobacteria, Spirochaetes, and Synergistetes. When the predicted bacterial hosts were further compared at the class level, their relative abundance showed more obvious variation among the different groups, but these results were not statistically significant. For Actinobacteria, the relative abundance in CT (1.33% ± 0.28%) and BE (1.77% ± 0.21%) was higher than in EAC (0.37% ± 0.11%) (p > 0.05). For Flavobacteriia, the relative abundance in CT (5.02% ± 1.45%) and EAC (5.38% ± 1.45%) was higher than in BE (1.14% ± 0.16%) (p > 0.05). Notably, the classes Betaproteobacteria, Deltaproteobacteria, and Gammaproteobacteria were more abundant in CT compared with BE and EAC.
Moreover, the relative abundance of Bacteroidia (13.08% ± 2.34% in CT, 4.38% ± 0.45% in BE, 1.25% ± 0.22% in EAC), Bacilli (5.97% ± 1.51% in CT, 1.33% ± 0.14% in BE, <0.1% in EAC), and Erysipelotrichia (1.86% ± 0.54% in CT, 0.65% ± 0.093% in BE, <0.1% in EAC) were lower in BE and EAC compared to CT, while the relative abundance of Clostridia (15.06% ± 0.52% in CT, 18.04% ± 0.90% in BE, 29.20% ± 5.60% in EAC) was higher in BE and EAC compared to CT. However, there was no significant difference (Jonckheere trend test, p > 0.05). Furthermore, the remaining classes had a lower relative abundance (0.0001%-0.31%) across the three groups.
We further examined how the changes in phage community composition affected the overall diversity. For the alpha diversity, a significant difference in phage diversity (Shannon) was found among the three groups (p = 0.036), while no significant difference was observed in phage richness (Ace) (p > 0.05) (Figure 1c). Furthermore, the alpha diversity differed in BE and EAC compared to CT samples (p > 0.05). Specifically, in both BE (1136.17 ± 19.48) and EAC (920.50 ± 33.87), the richness (Ace) was higher compared with that in CT (854.00 ± 50.73). However, only in BE (6.50 ± 0.11) was the diversity (Shannon) higher compared with that in CT (4.53 ± 0.15). Furthermore, BE had a higher level of richness (Ace) and diversity (Shannon) than EAC. In addition, no significant difference was detected (p > 0.05) in beta diversity (PCoA) among the three groups (Figure 1d).
Abundant and Rare Phage Communities in the Gut May Contribute to the Progression of Esophageal Carcinogenesis
We used a sorting approach commonly applied in ecological study that classifies microbes into three groups based on their abundance [55,56], aiming to explore the role of less abundant microbes in different ecosystems. Using this approach, the contribution of rare, less abundant, bacterial Operational Taxonomic Units (OTUs) to some of the key ecological functions was revealed in the environment [57], which was previously overlooked. We believe this approach can be beneficial for studying phages in the gut. To this end, we divided phage contigs into abundant phages (relative abundance was more than 1% in total viral contigs), moderate phages (relative abundance was more than 0.1% and less than 1% in total viral contigs), and rare phages (relative abundance was less than 0.1% in total viral contigs). At these three relative abundance levels, members of the order Caudovirales (Myoviridae, Siphoviridae, and Podoviridae) showed the highest relative abundance in all three groups (Figure 2a). Subsequently, we observed that abundant phages presented significantly higher relative abundance (79.54% ± 2.28% in CT, 54.28% ± 2.19% in BE, and 72.25% ± 4.06% in EAC) when compared with moderate (14.79% ± 1.83% in CT, 34.38% ± 1.68% in BE, and 21.19% ± 3.57% in EAC) and rare phages (4.51% ± 0.52% in CT, 11.34% ± 0.85% in BE, and 6.56% ± 0.52% in EAC) in all three groups (abundant vs. moderate p < 0.001, abundant vs rare p < 0.001) (Figure 2b,c), while the highest number of contigs was from rare phages (788 ± 48 in CT, 994 ± 18 in BE, and 836 ± 28 in EAC), exceeding abundant (13 ± 1 in CT, 17 ± 1 in BE, and 11 ± 1 in EAC) and moderate (54 ± 8 in CT, 126 ± 7 in BE, and 74 ± 15 in EAC) phages in all three groups (Figure 2b,c). The highest relative abundance of abundant phages and the highest number of contigs of rare phages may suggest their different behaviors in relation to the gut bacterial community and esophageal diseases. Moreover, a significant difference was observed in beta-diversity on abundant (p = 0.004) and rare phages (p = 0.003) ( Figure S3), which may imply that these two groups of phages showed higher sensitivity to the changes in the upper GI tract through esophageal disease progression. In addition, we found that the abundance of temperate phages that displayed a lysogenic replication cycle increased with the development of esophageal diseases. This may suggest a higher occurrence of HGT in these samples.
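As a concrete illustration of this binning rule (the thresholds are taken from the text; the table and its values are invented for the example):

```python
import pandas as pd

# Toy table of viral contigs with relative abundances expressed as fractions.
contigs = pd.DataFrame({
    "contig": ["c1", "c2", "c3", "c4"],
    "relative_abundance": [0.031, 0.004, 0.0007, 0.012],
})

def abundance_class(rel_abund: float) -> str:
    """Classify a contig as abundant (>1%), moderate (0.1-1%), or rare (<0.1%)."""
    if rel_abund > 0.01:
        return "abundant"
    if rel_abund > 0.001:
        return "moderate"
    return "rare"

contigs["class"] = contigs["relative_abundance"].apply(abundance_class)
print(contigs)
```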
To further evaluate the importance of rare phages in HGT, we compared these three groups of phages by the number of bacterial hosts they infect. On the class level, we observed small differences between phage groups from different health conditions: rare phages infected 18 different bacterial classes whereas abundant phages infected 14 (Figure 1c). However, when bacterial hosts were compared on the genus level, both diversity and abundance showed large differences, 84 for rare versus 46 for abundant phages (Table S2). In particular, contigs belonging to rare phages showed similar characteristics regarding the number of hosts they infect across the three groups, showing a broader bacterial host range compared to moderate and abundant phages. For example, the contigs from rare phages were able to infect 6 or 7 different bacterial hosts at the genus level (Table S3), which was relatively higher than the number of bacterial hosts predicted for the contigs from abundant and moderate phages. The broader bacterial host range and higher number of contigs (Figure 2b, Tables S2 and S3) of rare phages could potentially lead to storing more AMGs in their genomes and, in turn, expand the frequency of HGT between gut bacteria.

Figure 2. Composition of the rare, moderate, and abundant gut viruses in CT, BE and EAC samples. Rare, moderate, and abundant viruses were categorized at the viral contig level. Abundant viruses represent viral contigs whose relative abundance was more than 1% of total contigs, moderate viruses represent viral contigs whose relative abundance was more than 0.1% and less than 1% of total contigs, and rare viruses represent viral contigs whose relative abundance was less than 0.1% of total contigs. (a) The relative abundance of viral families; (b) number of contigs generated in each viral contig category (rare, moderate, and abundant) on the left, and their relative abundance on the right; (c) negative correlation between the number of contigs from rare, moderate, and abundant phages and their relative abundance. CT represents stool samples from healthy controls; BE represents stool samples from Barrett esophagus patients; EAC represents stool samples from esophageal adenocarcinoma patients. Statistical significance was determined by two-way analysis of variance (ANOVA) with Tukey's post hoc test; an asterisk indicates p < 0.05.
AMGs Found in Rare Bacteriophages Showed an Increase in Esophageal Diseases
After annotation of the viral contigs, viruses were found to be involved in most of the microbial functions related to metabolism, cellular processes, genetic information processing, environmental information processing, organismal systems, and human disease (Figures 3a and S4). Significant differences were found among the three groups for genes related to the metabolism of cofactors and vitamins (p = 0.0083) and genes related to the prokaryotic defense system (p = 0.0202) (Figure 3a). Genes involved in the metabolism of cofactors and vitamins were found to be most abundant in CT phages, whereas genes related to the prokaryotic defense system were more abundant in EAC phages, suggesting a stronger arms race between phages and bacteria in this disease (Figure 3a). Notably, AMGs encoding bacterial toxins were found to be more abundant in the genomes of rare bacteriophages, including the spyA gene, tccC gene, entB gene and entD gene, which are involved in microbial cellular processes. The spyA gene, which encodes a C3 family ADP-ribosyltransferase (a bacterial exotoxin) [58], showed a slightly higher level of relative abundance in BE and EAC (p > 0.05) compared with the other three AMGs (Figure 3b). Moreover, the spyA gene level was relatively higher in BE (0.00040 ± 0.00011) and EAC (0.0027 ± 0.0012) compared with CT (0.00031 ± 0.000012) (p > 0.05). Other AMGs related to LPS biosynthesis proteins were also found in the genomes of rare phages, including the lpxD gene, kdsC gene and gmnB gene, which are involved in microbial metabolism (Figure 3b). The lpxD gene was present only in BE, with a relative abundance of 0.00031 ± 0.000113. The kdsC gene was present in BE (0.000089 ± 0.000036) and CT (0.0000024 ± 0.00000097). The gmnB gene was relatively more abundant in EAC (0.00064 ± 0.00029) and BE (0.00024 ± 0.000094) compared with CT (0.00015 ± 0.000044) (p > 0.05). The higher abundance of these genes in phages from BE and EAC compared to CT may have resulted from the increase of pathogenic bacteria, mainly Gram-negatives, in the esophageal diseases, leading to a higher chance of obtaining AMGs related to LPS biosynthesis proteins encoded by phages. We next explored the appearance of these genes in the Gut Phages Database (GPD), containing 142,809 non-redundant, globally distributed phage genomes. We found many phages encoding these genes in GPD with one exception, tccC, showing that these AMGs are ubiquitous in the human gut (Figure S5). Toxin complex (Tc) is a multisubunit toxin consisting of three components (TcA, B, and C) encoded by pathogenic bacteria infecting both insects and humans. TcAs that make functional pores combine with TcB-TcC subunits to create active chimeric holotoxins. Tc toxins are encoded by human pathogens like Yersinia pestis, Y. pseudotuberculosis, and Morganella morganii and are believed to significantly contribute to these bacteria's pathogenicity. Yet, their role in EAC remains to be revealed [59]. The increase of these genes in phages from BE and EAC may contribute to the severity of these diseases through the exchange of genes involved in bacterial exotoxin production and LPS biosynthesis in esophageal carcinogenesis. This warrants further investigation.
Discussion
Barrett's esophagus (BE) is a condition caused by the metaplastic replacement of the normal squamous epithelium by columnar epithelium. BE is closely associated with the development of esophageal adenocarcinoma (EAC), a disease in which cancerous cells develop in the tissues of the esophagus, with a high mortality rate [42]. It has recently been shown that gut dysbiosis can activate oncogenic signaling pathways, leading to the production of tumor-promoting metabolites, and further influence esophageal mucosal inflammation and tumorigenesis [60]. For example, gut bacteria regulate bile acid (BA) metabolism. Under stimulation such as a high-fat diet, the gut bacteria change and the level of BA increases accordingly [61]. The reflux of BA into the esophagus causes esophageal damage, leading to BE and subsequent EAC. In an animal experiment simulating BA reflux, overexpression of the inflammatory cytokines IL-6 and TNF-α was found [62]. This indicated that gut bacterial alterations could indirectly induce esophageal mucosal inflammation and carcinogenesis [62][63][64]. Despite a wealth of data on the role of gut bacteria in GI tract disease, we have only recently recognized the association of gut viruses with some GI tract diseases, including CRC, in which the diversity of the gut viruses is significantly increased in stool samples from CRC patients, suggesting a disease-specific signature that can be used to differentiate CRC samples from controls [37]. The CRC-associated virome includes primarily temperate bacteriophages belonging to the Siphoviridae and Myoviridae families [65]. The impact of phages on gut homeostasis is not restricted to their interactions with gut bacteria, as phages can directly interact with the human host. In vitro studies have demonstrated that phages can cross the epithelial cell layer through transcytosis, thereby stimulating the underlying immune cells [22,[66][67][68][69]. For example, the interaction between E. coli phages and the immune system has been associated with Type 1 diabetes autoimmunity [36]. It has been reported that phages can activate IFN-γ production by CD4+ T cells via the nucleotide-sensing receptor TLR9, which accelerates intestinal inflammation and colitis, leading to a systemic inflammatory response [70]. The consistent disease-specific signature of gut viruses [27,37] suggests a potential association between gut viruses and human disease.
Studies that investigated the esophageal virome using metagenomic data of whole microbial communities, rather than profiling isolated viral communities, have identified a range of phages, including Streptococcus, Campylobacter, Lactococcus, and γ-Proteobacteria phages [71]. These studies, and those that only explored the bacterial community of the esophagus, have mainly used biopsy samples for virome and bacterial analysis [10,72,73]. Although biopsies can directly reflect the disease-associated microbial signature at the lesion, the sampling procedure is invasive, time-consuming, costly, and may induce complications [74]. Moreover, biopsy samples often yield limited microbial material, with a lower probability of successful sequencing and downstream analysis [75]. Thus, an amplification step (e.g., whole genome amplification) is necessary, which might introduce biases into study results. In contrast, stool samples collected by non-invasive methods usually supply sufficient material for research purposes [76].
Here we explored the phage community composition of stool samples from BE and EAC patients and healthy controls (CT) across stages of esophageal disease. Our in-depth gut virome analysis during esophageal carcinogenesis provided some evidence of gut phage community changes between different stages of esophageal diseases. Consistent with previous studies that have explored gut viruses, mainly in lower GI tract diseases such as IBD and CRC [27,65], phages from the order Caudovirales were the most dominant phages in the samples from esophageal diseases. Compared with CT, alpha diversity changed as the esophageal diseases progressed, and a relatively higher alpha diversity was observed in BE samples compared to CT and EAC. This was not reflected in the beta diversity, as no significant differences were observed among the three groups. Using a common sorting approach in microbial ecology, we identified disease-associated differences in the diversity and abundance of rare phages, suggesting a potential link between these phages and esophageal diseases. In addition, consistent with previous studies on diseases like IBD [77] and CRC [65], we observed changes in the proportion of lytic versus lysogenic replication cycles of phages, with more temperate phages observed in esophageal carcinogenesis. These results further support earlier studies that reported the dominance of virulent phages (lytic cycle) in the healthy human gut being replaced by temperate phages in Crohn's disease and ulcerative colitis [23,24]. Furthermore, the relatively higher percentage of temperate phages in samples from esophageal diseases may imply a larger influence on bacterial physiology through phage-mediated HGT in those groups. Although we did not study the bacterial community of these samples, the community structure of the predicted bacterial hosts for the phages identified in this study suggests a complex relationship between the bacterial and bacteriophage communities in esophageal diseases. Earlier studies on lower GI tract diseases such as CRC have observed that the effect of phages results from their interactions with the whole bacterial community rather than with the bacterial taxa directly contributing to disease severity [65]. However, there was no direct correlation between bacterial diversity and phage diversity [27,37].
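As an illustration of the kind of diversity comparison discussed above (not the authors' actual pipeline, and with hypothetical numbers), the following Python sketch computes Shannon alpha diversity per sample and Bray-Curtis dissimilarities between samples from a contig-by-sample abundance table.

import numpy as np
from scipy.spatial.distance import pdist, squareform

def shannon(counts: np.ndarray) -> float:
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# toy table: rows = viral contigs, columns = samples (hypothetical values)
table = np.array([[120, 80, 5],
                  [ 40, 60, 90],
                  [  1, 10, 70]], dtype=float)

alpha = [shannon(table[:, j]) for j in range(table.shape[1])]
beta = squareform(pdist(table.T, metric="braycurtis"))  # samples as rows for pdist
print("Shannon alpha diversity:", np.round(alpha, 3))
print("Bray-Curtis distance matrix:\n", np.round(beta, 3))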
In addition, we found several AMGs in the genomes of the rare phages, further emphasizing the potential role of phages in regulating bacterial physiology by supplying their hosts with beneficial genes. Specifically, a slightly higher abundance of spyA (p > 0.05) was observed in BE and EAC, potentially contributing to the production of bacterial exotoxins, which disrupt cytoskeletal structures and promote colonization by pathogenic bacteria [58]. A relatively higher abundance of AMGs related to LPS biosynthesis proteins was also found in BE and EAC, which may indicate the dominance of Gram-negative bacteria and the potential inflammatory effects of phage-bacteria interactions. Phages that carry these AMGs can introduce them into the genomes of gut bacteria via integration, which may contribute to the severity of esophageal diseases through lysogenic conversion. This could further induce gut inflammation through expression of the phage-derived virulence genes and worsen esophageal disease. Intestinal permeability caused by phage-mediated changes in the gut microbiota could also lead to systemic inflammatory responses [78]. Given the high inter-individual variability of the microbiome and the limited number of samples analyzed, it is difficult to identify significant differences in viral community structure between groups in the current study. Thus, our findings should be pursued further with a larger sample size.
Conclusions
In summary, this study provides further evidence of a potential relationship between gut phages and esophageal diseases. Interestingly, distinct gut phage community structures were identified in two different stages of esophageal disease, and these differences were mainly found among abundant and rare bacteriophages. Notably, rare phages, and the HGT they mediate, were found to be more closely related to esophageal diseases. Specifically, the rare phages contributed to the enrichment of AMGs related to bacterial exotoxins and LPS biosynthesis proteins, and possibly to the upregulation of these genes. These, in turn, may contribute to changes in gut bacterial composition and to inflammation, which lead to the development of esophageal diseases, as previously suggested [6]. However, given the small sample size of our study, the potential diagnostic importance of the AMGs and the disease-specific viral signature identified here should be experimentally validated in further studies.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10 .3390/microorganisms9081701/s1, Figure S1: Gene-sharing taxonomic network of viral sequences in this study, including viral RefSeq viruses v85. Figure S2: The proportion of lytic/lysogenic replication cycles predicted for the viral contigs from three groups. Figure S3: PCoA plot of the viral community composition based on the Bray-Curtis distances in CT, BE, and EAC samples. Figure S4: The relative abundance of different functional traits in viral sequences. Figure S5: The number of phages that contained the identified AMGs of this study in the Gut Phage Database (GPD). Table S1: Clinical information of individuals from three groups. Table S2: The relative abundance of bacterial host at genus level for abundant, moderate, and rare bacteriophages. Table S3: The percentage of contig relative abundance in different number of predicted bacterial genus types for abundant, moderate, and rare bacteriophages. | 6,578.6 | 2021-08-01T00:00:00.000 | [
"Medicine",
"Biology"
] |
Reconfigurable photonic temporal differentiator based on a dual-drive Mach-Zehnder modulator
We propose and demonstrate a reconfigurable photonic temporal differentiator based on a dual drive Mach-Zehnder modulator (DDMZM). The differentiator can be reconfigured to different differentiation types by simply adjusting the bias voltage of DDMZM. Both simulations and experiments are carried out to verify the proposed scheme. In the experiment, a pair of polarity-reversed field differentiation and a pair of polarity-reversed intensity differentiation are successfully generated. The differentiation accuracy and conversion efficiency versus the time delay are also investigated. ©2016 Optical Society of America OCIS codes: (060.5625) Radio frequency photonics; (070.1170) Analog optical signal processing; (200.4740) Optical processing. References and links 1. Y. Han, Z. Li, and J. Yao, “A microwave bandpass differentiator implemented based on a nonuniformly-spaced photonic microwave delay-line filter,” J. Lightwave Technol. 29(22), 3470–3475 (2011). 2. N. Q. Ngo, S. F. Yu, S. C. Tjin, and C. H. Kam, “A new theoretical basis of higher-derivative optical differentiators,” Opt. Commun. 230(1-3), 115–129 (2004). 3. J. Capmany and D. Novak, “Microwave photonics combines two worlds,” Nat. Photonics 1(6), 319–330 (2007). 4. R. Ashrafi and J. Azaña, “Figure of merit for photonic differentiators,” Opt. Express 20(3), 2626–2639 (2012). 5. S. Pan and J. Yao, “Optical generation of polarityand shape-switchable ultrawideband pulses using a chirped intensity modulator and a first-order asymmetric Mach-Zehnder interferometer,” Opt. Lett. 34(9), 1312–1314 (2009). 6. J. Niu, K. Xu, X. Sun, Q. Lv, J. Dai, J. Wu, and J. Lin, “Instantaneous microwave frequency measurement using a photonic differentiator and an opto-electric hybrid implementation,” Asia-Pacific Microwave Photonics Conference 2010 (APMP2010), Hong Kong, China, 26–28 April 2010. 7. X. Li, J. Dong, Y. Yu, and X. Zhang, “A tunable microwave photonic filter based on an all-optical differentiator,” IEEE Photonics Technol. Lett. 23(5), 308–310 (2011). 8. F. Zeng and J. Yao, “Ultrawideband impulse radio signal generation using a high-speed electrooptic phase modulator and a fiber-Bragg-grating-based frequency discriminator,” IEEE Photonics Technol. Lett. 18(19), 2062–2064 (2006). 9. P. Li, H. Chen, X. Wang, H. Yu, M. Chen, and S. Xie, “Photonic generation and transmission of 2-Gbit/s powerefficient IR-UWB signals employing an electro-optic phase modulator,” IEEE Photonics Technol. Lett. 25(2), 144–146 (2013). 10. J. Xu, X. Zhang, J. Dong, D. Liu, and D. Huang, “High-speed all-optical differentiator based on a semiconductor optical amplifier and an optical filter,” Opt. Lett. 32(13), 1872–1874 (2007). 11. F. Wang, J. Dong, E. Xu, and X. Zhang, “All-optical UWB generation and modulation using SOA-XPM effect and DWDM-based multi-channel frequency discrimination,” Opt. Express 18(24), 24588–24594 (2010). 12. V. Moreno, M. Rius, J. Mora, M. A. Muriel, and J. Capmany, “Integrable high order UWB pulse photonic generator based on cross phase modulation in a SOA-MZI,” Opt. Express 21(19), 22911–22917 (2013). 13. J. Xu, X. Zhang, J. Dong, D. Liu, and D. Huang, “All-optical differentiator based on cross-gain modulation in semiconductor optical amplifier,” Opt. Lett. 32(20), 3029–3031 (2007). 14. Q. Wang and J. Yao, “Switchable optical UWB monocycle and doublet generation using a reconfigurable photonic microwave delay-line filter,” Opt. Express 15(22), 14667–14672 (2007). 15. M. Bolea, J. Mora, B. Ortega, and J. 
Capmany, “Optical UWB pulse generator using an N tap microwave photonic filter and phase inversion adaptable to different pulse modulation formats,” Opt. Express 17(7), 5023–5032 (2009). 16. F. Li, Y. Park, and J. Azaña, “Linear characterization of optical pulses with durations ranging from the picosecond to the nanosecond regime using ultrafast photonic differentiation,” J. Lightwave Technol. 27(21), 4623–4633 (2009). 17. F. Liu, T. Wang, L. Qiang, T. Ye, Z. Zhang, M. Qiu, and Y. Su, “Compact optical temporal differentiator based on silicon microring resonator,” Opt. Express 16(20), 15880–15886 (2008). 18. L. M. Rivas, K. Singh, A. Carballar, and J. Azaña, “Arbitrary-order ultrabroadband all-optical differentiators based on fiber Bragg gratings,” IEEE Photonics Technol. Lett. 19(16), 1209–1211 (2007). 19. M. Li, D. Janner, J. Yao, and V. Pruneri, “Arbitrary-order all-fiber temporal differentiator based on a fiber Bragg grating: design and experimental demonstration,” Opt. Express 17(22), 19798–19807 (2009). 20. M. A. Preciado, X. Shu, P. Harper, and K. Sugden, “Experimental demonstration of an optical differentiator based on a fiber Bragg grating in transmission,” Opt. Lett. 38(6), 917–919 (2013). 21. M. Kulishov and J. Azaña, “Long-period fiber gratings as ultrafast optical differentiators,” Opt. Lett. 30(20), 2700–2702 (2005). 22. Z. Chen, L. Yan, W. Pan, B. Luo, X. Zou, A. Yi, Y. Guo, and H. Jiang, “Reconfigurable optical intensity differentiator utilizing DGD element,” IEEE Photonics Technol. Lett. 25(14), 1369–1372 (2013). 23. J. Dong, A. Zheng, D. Gao, L. Lei, D. Huang, and X. Zhang, “Compact, flexible and versatile photonic differentiator using silicon Mach-Zehnder interferometers,” Opt. Express 21(6), 7014–7024 (2013). 24. A. Zheng, J. Dong, L. Lei, T. Yang, and X. Zhang, “Diversity of photonic differentiators based on flexible demodulation of phase signals,” Chin. Phys. B 23(3), 033201 (2014). 25. J. Dong, Y. Yu, Y. Zhang, B. Luo, T. Yang, and X. Zhang, “Arbitrary-order bandwidth-tunable temporal differentiator using a programmable optical pulse shaper,” IEEE Photonics J. 3(6), 996–1003 (2011).
Introduction
Temporal differentiation has attracted great interest due to its wide applications in system control, electrocardiogram (ECG) signal monitoring, radar signal analysis and modern communications [1,2]. Conventional temporal differentiators implemented in the electrical domain exhibit low processing speed and narrow bandwidth [2]. Photonic processing of radio frequency (RF) signals is a very promising technique owing to its intrinsic advantages, such as large bandwidth, low loss, and immunity to electromagnetic interference (EMI) [3]. As a branch of this field, temporal differentiation realized by optical approaches offers ultrafast processing and broad bandwidth, and has been extensively investigated [1]. Temporal differentiators can be classified into two categories: intensity differentiators and field differentiators [4]. The optical intensity differentiator differentiates the intensity profile of the input waveform and is usually applied in microwave photonics, for example in ultra-wideband (UWB) signal generation [5], microwave photonic frequency measurement [6], and microwave photonic filtering [7]. Optical intensity differentiators can be realized by using phase modulation [8,9] or cross-phase modulation (XPM) [10][11][12] together with frequency discriminators, cross-gain modulation in a semiconductor optical amplifier (SOA) [13], or superposition of delayed optical pulses [14,15]. Meanwhile, the field differentiator differentiates the optical complex field (including both amplitude and phase) and can be used for pulse reshaping, ultrashort pulse generation, and the generation of odd-symmetric Hermite-Gaussian (OS-HG) pulses [16]. Many approaches have been proposed to realize field differentiation, such as using microring resonators [17], apodized fiber Bragg gratings (FBGs) [18][19][20], and long-period fiber gratings (LPGs) [21].
Besides the ultrafast processing capability of photonic differentiators, reconfigurability, i.e., the ability to switch the differentiator from one differentiation type to another, is also very attractive because it increases the flexibility of the differentiator to meet the requirements of dynamic applications. Several approaches have been proposed to achieve reconfigurable differentiators. Researchers in Yan's group proposed using a Mach-Zehnder modulator (MZM) and a differential-group-delay (DGD) element to realize a reconfigurable differentiator [22]. By changing the polarization state of the signal entering the polarizer, three differentiation types, namely a pair of polarity-reversed intensity differentiations and a positive field differentiation, are achieved. However, adjusting and aligning the polarization state makes the system complex, and only three differentiation types are achieved. A silicon-based Mach-Zehnder interferometer (MZI) has also been proposed to realize a pair of polarity-reversed intensity differentiations and a positive field differentiation [23]. The reconfigurability is realized by controlling the deviation of the optical wavelength from the MZI transmission dip. Similarly, only three differentiation types are achieved. It has also been proposed to generate a pair of polarity-reversed intensity and field differentiations by using an electro-optic phase modulator (EOPM) and two cascaded delay interferometers (DIs) [24]. Four differentiation types can be realized by adjusting the resonant frequencies of the two DIs. However, precisely adjusting the two DIs increases the complexity of the system.
In this paper, a novel reconfigurable optical differentiator is proposed and experimentally demonstrated. The scheme is based on a single dual-drive Mach-Zehnder modulator (DDMZM). The input RF signal is first equally divided into two parts, and a relative time delay between the two RF signals is introduced. The two RF signals are then applied to the two arms of the DDMZM, respectively. The input continuous-wave (CW) light is also equally split into two parts by the Y-branch coupler in the DDMZM and then phase modulated by the two RF signals in the two arms of the DDMZM, respectively. After combining at the output of the DDMZM, the two phase-modulated signals interfere and are converted into optical intensity signals. Our scheme can be reconfigured to any first-order differentiation type by electrically switching the differentiation type. When the bias voltage of the DDMZM is adjusted, all four differentiation types, namely a pair of polarity-reversed intensity differentiators and a pair of polarity-reversed field differentiators, are theoretically and experimentally demonstrated. Compared with previously reported schemes [22][23][24], no precise polarization adjustment or wavelength alignment is needed in our scheme; thus, our scheme is quite simple. The scheme also shows a good extinction ratio (ER) for all the differentiated waveforms. The differentiation accuracy and the conversion efficiency are also investigated. The operation principle is illustrated in Fig. 1(a): the CW light is equally split into two parts by the Y-branch coupler of the DDMZM, and the two optical beams propagate along the upper and lower arms of the DDMZM, respectively. The electrical signal is also equally divided into two parts, and a certain time delay is introduced in one of them. The two electrical signals are then applied to the two RF ports of the DDMZM, respectively, so that phase modulation occurs in both the upper and lower arms of the DDMZM. The bias voltage of the DDMZM is used to adjust the phase difference between the two arms. At the output of the DDMZM, the two phase-modulated signals interfere and are converted into optical intensity signals.
Operation principle and simulations
The optical field injected into the DDMZM is assumed to be
E_in(t) = E_0 exp[j(ω_0 t + φ_0)],    (1)
where E_0, ω_0 and φ_0 are the amplitude, angular frequency and initial phase of the CW light, respectively. When the CW light is injected into the DDMZM, the optical power is equally divided into two parts, which are launched into the upper and lower arms of the DDMZM, respectively. According to Eq. (1), when the RF signals are applied to the DDMZM, the optical fields in the upper and lower arms of the DDMZM can be expressed as
E_1(t) = (E_0/2) exp{j[ω_0 t + φ_0 + π s(t)/V_π]}    (2)
and
E_2(t) = (E_0/2) exp{j[ω_0 t + φ_0 + π s(t−τ)/V_π + π V_b/V_π]},    (3)
respectively, where s(t) is the electrical signal applied to the RF port of the DDMZM, V_π is the half-wave voltage of the DDMZM, V_b is the bias voltage applied to the DDMZM, and τ is the time delay of the RF signal applied to the lower arm. The derivation of the four differentiation types is presented as follows.
Positive field differentiation
When V_b = V_π, the optical power at the output of the DDMZM, obtained by combining Eq. (2) and Eq. (3), is
P_out(t) = |E_1(t) + E_2(t)|² = (E_0²/2){1 − cos[π(s(t) − s(t−τ))/V_π]}.    (4)
When π[s(t) − s(t−τ)]/(2V_π) is sufficiently small, Eq. (4) can be approximated as
P_out(t) ≈ (E_0²π²/4V_π²)[s(t) − s(t−τ)]².    (5)
If τ is sufficiently small, Eq. (5) can be further approximated as
P_out(t) ≈ (E_0²π²τ²/4V_π²)[ds(t)/dt]².    (6)
From Eq. (6), it can be concluded that the output optical power corresponds to the first-order positive field differentiation of the input signal. The generation of the positive field differentiation is illustrated in Fig. 1(b1). When the optical phase difference caused by the bias voltage is π and the two data sequences have a small relative time delay τ, constructive and destructive interference occurs between the two phase-modulated signals and positive field differentiation is generated. If the maximum phase shift induced by the input data is π, the differentiated signal reaches its maximum amplitude.
Negative field differentiation
When the bias voltage is adjusted so that V_b = 0, the output optical power can be expressed as
P_out(t) = (E_0²/2){1 + cos[π(s(t) − s(t−τ))/V_π]}.    (7)
As in Eq. (4), when π[s(t) − s(t−τ)]/(2V_π) can be considered a small quantity, Eq. (7) can be approximated as
P_out(t) ≈ E_0² − (E_0²π²/4V_π²)[s(t) − s(t−τ)]².    (8)
From Eq. (8), it can be observed that if τ is sufficiently small, the output optical power can be expressed as
P_out(t) ≈ E_0² − (E_0²π²τ²/4V_π²)[ds(t)/dt]².    (9)
From Eq. (9), it can be concluded that the output waveform is the negative field differentiation of the input signal. The generation of negative field differentiation is illustrated in Fig. 1(b2). When the phase shift caused by the bias voltage is 0, negative field differentiation is obtained after optical interference. A maximum data-induced phase shift of π ensures the maximum amplitude of the differentiated signal, the same as for the positive field differentiation.
Positive intensity differentiation
When the bias voltage is adjusted so that V_b = V_π/2, the output optical power becomes
P_out(t) = (E_0²/2){1 + sin[π(s(t) − s(t−τ))/V_π]}.    (10)
Expanding the sine term for a small argument gives Eq. (11), in which the last term in the braces is a second-order small quantity and can therefore be neglected. Eq. (11) then simplifies to
P_out(t) ≈ (E_0²/2) + (E_0²πτ/2V_π)·ds(t)/dt.    (12)
In Eq. (12), the first term is a constant and the second term is proportional to the first-order derivative of the input signal. When the phase shift caused by the input electrical data is π/2, the differentiated signal reaches its maximum amplitude.
Negative intensity differentiation
When the bias voltage is adjusted so that V_b = −V_π/2, similarly to the positive intensity differentiation, the output intensity signal can be expressed as
P_out(t) = (E_0²/2){1 − sin[π(s(t) − s(t−τ))/V_π]}.    (13)
Equation (13) can be approximated as
P_out(t) ≈ (E_0²/2) − (E_0²πτ/2V_π)·ds(t)/dt.    (14)
From Eq. (14), it can be observed that the output waveform is the negative intensity differentiation of the input signal. The generation of negative intensity differentiation is illustrated in Fig. 1(b4). Therefore, by adjusting the bias voltage of the DDMZM, a pair of polarity-reversed intensity differentiations and a pair of polarity-reversed field differentiations can be obtained. In order to verify our analysis, simulations are carried out and the simulated results are shown in Fig. 2.
In the simulation, a Gaussian pulse with a full width at half maximum (FWHM) of 166 ps and a super-Gaussian pulse with an FWHM of 65 ps are used as input pulses, respectively. The amplitudes of both the input Gaussian and super-Gaussian pulses are 1 V. The time delay between the two signals is set to 10 ps. The half-wave voltage (V_π) of both arms of the DDMZM is 3.5 V. Figure 2(a1) shows the input Gaussian pulse. At first, the bias voltage of the DDMZM is set to 3.5 V, which means that the phase difference between the two arms of the DDMZM is π. The optical waveform at the output of the DDMZM is shown as the black solid curve with rectangular markers in Fig. 2(a2). It can be observed that the output waveform is the positive field differentiation of the input signal. The ideal positive field differentiation is also shown as the red dashed curve in Fig. 2(a2); the simulated positive field differentiation agrees very well with the ideal result. Then, the bias voltage is set to 0, which means that the phase difference between the two arms is 0. The output waveform and the ideal negative field differentiation are shown in Fig. 2(a3). The achieved optical waveform agrees very well with the ideal negative field differentiation. Thus, when the bias voltage of the DDMZM is 0, a negative field differentiation is obtained. Thirdly, the bias voltage is set to 1.75 V, corresponding to a phase difference of π/2 between the two arms. The output waveform is shown in Fig. 2(a4), and positive intensity differentiation is achieved. The ideal positive intensity differentiation is also shown in Fig. 2(a4); the simulation result agrees very well with the ideal intensity differentiation. When the bias voltage is set to −1.75 V, corresponding to a phase shift of −π/2, the simulated output waveform is shown in Fig. 2(a5) and a negative intensity differentiation is achieved. The ideal negative intensity differentiation is also plotted in Fig. 2(a5), and the simulation result agrees very well with it. Therefore, in simulation, all four differentiation types agree with the corresponding ideal differentiations.
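The four bias cases can be checked numerically. The following Python sketch is a minimal model of an ideal, lossless DDMZM driven as described above (Gaussian pulse, V_π = 3.5 V, τ = 10 ps); it is an illustration rather than the authors' simulation code, amplitudes are in arbitrary units, and the constant optical background is subtracted so that the sign of the differentiated envelope is visible.

import numpy as np

Vpi = 3.5
Vb_list = {"pos. field": 3.5, "neg. field": 0.0,
           "pos. intensity": 1.75, "neg. intensity": -1.75}
tau = 10e-12                                   # RF time delay between the two arms
t = np.linspace(-1e-9, 1e-9, 20001)
fwhm = 166e-12
s = np.exp(-4 * np.log(2) * (t / fwhm) ** 2)   # 1 V Gaussian drive pulse
s_d = np.interp(t - tau, t, s)                 # delayed copy applied to the lower arm

for name, Vb in Vb_list.items():
    e_out = np.exp(1j * np.pi * s / Vpi) + np.exp(1j * np.pi * (s_d + Vb) / Vpi)
    p_out = np.abs(e_out) ** 2                 # detected power (arbitrary units)
    p_ac = p_out - p_out[0]                    # remove the constant background
    print(f"{name:15s} peak-to-peak output: {p_ac.max() - p_ac.min():.4f}")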
Meanwhile, the differentiations of a super-Gaussian pulse with an FWHM of 65 ps are also simulated, and the results are shown in Figs. 2(b1)-(b5). The four simulated differentiation types again agree well with the corresponding ideal differentiations. Thus, in simulation, it can be concluded that the proposed scheme can generate four different types of differentiation by simply adjusting the bias voltage of the DDMZM. To verify our analysis and theoretical simulation, the experiment illustrated in Fig. 3 is performed. A continuous wave (CW) light emitted from a laser diode (LD, Alnair TLG-200) is injected into a DDMZM (Fujitsu FTM7937EZ611) via a polarization controller (PC). The electrical pulse emitted from a bit pattern generator (BPG, SHF 44E) is equally divided into two parts by a radio frequency (RF) splitter. The two signals are then simultaneously amplified by a dual broadband amplifier (DBA, Centellax OA4SMM4). One signal is delayed by a certain time relative to the other signal by an RF delay line. The two signals are then applied to the two RF ports of the DDMZM, respectively. After being modulated by the DDMZM, the output optical signal is power adjusted by an erbium-doped fiber amplifier (EDFA) and a variable optical attenuator (VOA). A digital communication analyzer (DCA, Agilent Infiniium DCA-J 86100C) is used to measure the waveforms of the generated optical pulses. In the experiment, the relative time delay between the two arms is 12 ps. Figure 4 shows the experimental results and the simulated ideal results. Figure 4(a) shows the input electrical pulse (black solid curve) in the experiment and the fitted waveform (red dotted curve), respectively. It can be noted that the data sequence is "1011100110010001101110011001000 1101110".
Experimental results and discussion
At first, when the bias voltage of the DDMZM is set to −3.6 V and the electrical voltage that controls the gain of the DBA (v_g) is set to −0.6 V, the output waveform is shown as the black solid curve in Fig. 4(b). The ideal positive field differentiation is shown as the red dotted curve in Fig. 4(b) for comparison. It can be observed that the positive field differentiation is successfully generated. Then, the bias voltage is adjusted to 1.6 V and the generated waveform is shown as the black solid curve in Fig. 4(c); the simulated waveform is shown as the red dotted curve in Fig. 4(c). Thus, the negative field differentiation is successfully generated. The calculated average errors of the positive and negative field differentiations are 8.0% and 11.2%, respectively. The average error is defined as the mean absolute deviation of the measured differentiation power from the ideal differentiation power over a certain period of time [25]. The extinction ratio (ER) of the positive field differentiation is 17.0 dB, and the ER of the negative field differentiation is 19.1 dB. When the bias voltage of the DDMZM and v_g are set to −2.3 V and −0.1 V, respectively, the output waveform is shown as the black solid curve in Fig. 4(d). The ideal positive intensity differentiation is shown as the red dotted curve in Fig. 4(d). It can be observed that correct positive intensity differentiation is successfully achieved. Compared with the simulated results, some small humps can be observed in the experimental results. The humps are generated by the non-ideal electrical pulse: in Fig. 4(a) it can be observed that the top of the measured electrical pulse is uneven, and after differentiation the uneven top is converted into small humps in the generated waveform. When the bias voltage is adjusted to −5.8 V, the generated waveform is shown as the black solid curve in Fig. 4(e). The ideal negative intensity differentiation is shown as the red dotted curve in Fig. 4(e).
It can be observed that the negative intensity differentiation is successfully generated. Similar to the positive intensity differentiation, there are also some humps in the generated waveform. The calculated average errors of the positive and negative intensity differentiations are 10.3% and 9.9%, respectively. The ER of the positive intensity differentiation waveform is 23.4 dB, and the ER of the negative intensity differentiation waveform is 23.3 dB. Thus, it can be concluded that by adjusting the bias voltage of the DDMZM, a pair of polarity-reversed field differentiations and a pair of polarity-reversed intensity differentiations are successfully achieved; the reconfiguration from one differentiation type to another is realized simply by adjusting the bias voltage. Comparing the ideal and experimental results, it can be observed that the pulse width achieved in the experiment is larger than that in the simulation. This is because there is a tradeoff between the differentiation accuracy and the conversion efficiency. In the experiment, a smaller time delay results in a more accurate differentiation, but the amplitude of the generated waveform is also smaller. Therefore, the time delay should be neither too large nor too small; in the experiment, it is set to 12 ps to balance the pulse amplitude and the differentiation accuracy. The influence of the time delay on the differentiation accuracy and efficiency is also investigated, and the measured and simulated results are shown in Fig. 5. We take an 8th-order super-Gaussian pulse with an FWHM of 372 ps as the input pulse, shown as the black curve in Fig. 5(a). When the time delay is 11.6 ps, the measured negative intensity differentiated waveform is shown as the green curve in Fig. 5(a); its amplitude is quite small. When the relative time delay is increased to 25.2 ps, 66.7 ps, and 125.9 ps, the measured differentiated waveforms are shown as the yellow, purple, and red curves, respectively. It can be observed that the amplitude of the differentiated waveform increases as the relative time delay increases. The measured normalized amplitudes of the differentiated waveforms for the different time delays are shown as the rhombi in Fig. 5(b). The normalized amplitude is defined as the ratio of the amplitude of the differentiated waveform to the amplitude of the input pulse. The predicted result is also presented as the purple dashed curve for comparison; the measured and predicted results agree well with each other. As the time delay increases, the normalized amplitude also increases, and if the time delay is sufficiently large, the normalized amplitude of the differentiated waveform tends to its maximum value. The measured (rectangles) and predicted (red solid trace) average errors are also presented in Fig. 5(b).
It can be observed that as the time delay increases, the average error of the differentiated waveform also increases. However, the measured average error also becomes larger when the relative time delay is very small. This is caused by noise in the differentiated waveform: when the time delay is too small, the amplitude of the differentiated waveform is also very small and the signal-to-noise ratio (SNR) is degraded, and the higher relative noise level increases the average error. Thus, as the relative time delay increases, both the differentiation efficiency and the error increase. Therefore, there is a tradeoff between the differentiation efficiency and accuracy.
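The tradeoff can be illustrated numerically. The short Python sketch below is an idealized model, not the measured data (measurement noise, which dominated the error at the smallest delays, is not included): it compares the finite difference s(t) − s(t − τ) of an 8th-order super-Gaussian pulse with the ideal derivative for the delays used above.

import numpy as np

t = np.linspace(-2e-9, 2e-9, 40001)
fwhm = 372e-12
s = np.exp(-np.log(2) * (2 * t / fwhm) ** 16)          # 8th-order super-Gaussian, FWHM 372 ps
ds = np.gradient(s, t)
ideal = ds / np.abs(ds).max()                           # normalized ideal derivative

for tau in (11.6e-12, 25.2e-12, 66.7e-12, 125.9e-12):
    diff = s - np.interp(t - tau, t, s)                 # finite-difference "differentiated" waveform
    amp = np.abs(diff).max()                            # grows (and saturates) with tau
    err = np.mean(np.abs(diff / amp - ideal))           # average error vs the ideal shape
    print(f"tau = {tau*1e12:6.1f} ps  norm. amplitude = {amp:.3f}  avg. error = {err:.4f}")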
In our scheme, it can be noted that an RF delay line is used to introduce a relative time delay between the two phase-modulated optical signals of the two arms. Therefore, it is not an all-optical approach, and the operation bandwidth is limited by the RF delay line. This disadvantage can be overcome by using an optical delay. For example, by designing the DDMZM with one arm longer than the other, a relative time delay can be introduced in the optical domain to overcome the bandwidth limitation set by the RF delay line. In this way, an all-optical differentiator can be achieved.
Conclusion
A reconfigurable optical differentiator based on a DDMZM is proposed and demonstrated. The reconfigurability is realized simply by adjusting the bias voltage of the DDMZM. Both the simulation and experimental results demonstrate the generation of a pair of polarity-reversed intensity differentiations and a pair of polarity-reversed field differentiations. The differentiation accuracy and conversion efficiency versus the time delay are also simulated and analyzed. The results show that there is a tradeoff between the differentiation accuracy and the conversion efficiency.
Fig. 1.
Fig. 1. Operation principle of the proposed scheme. (a) is the schematic diagram of the proposed scheme. (b1), (b2), (b3) and (b4) represent the generation of the positive field differentiation, negative field differentiation, positive intensity differentiation, and negative intensity differentiation, respectively.
the input signal. The combination of the two terms is the positive intensity differentiation of the input signal. The positive intensity differentiation can be illustrated as in Fig. 1(b3). However, there is a small difference from the field differentiation.
Fig. 4.
Fig. 4. Experimental and simulated results. (a) shows the measured (black solid curve) and fitted (red dotted curve) waveforms of the input signal; (b) shows the measured (black solid curve) and simulated (red dotted curve) waveforms of the positive field differentiation; (c) shows the measured (black solid curve) and simulated (red dotted curve) waveforms of the negative field differentiation; (d) shows the measured (black solid curve) and simulated (red dotted curve) waveforms of the positive intensity differentiation; (e) shows the measured (black solid curve) and simulated (red dotted curve) waveforms of the negative intensity differentiation.
Fig. 5.
Fig. 5. The influence of the time delay on the differentiation accuracy and efficiency. (a) shows the measured input waveforms and differentiated waveforms for different time delays. (b) shows the measured (rhombi) and simulated (dashed curve) amplitudes of the differentiated waveforms for different time delays, and the measured (rectangles) and simulated (solid curve) average errors for different time delays. | 5,828.2 | 2016-05-30T00:00:00.000 | [
"Physics"
] |
Error-Backpropagation-Based Background Calibration of TI-ADC for Adaptively Equalized Digital Communication Receivers
A novel background calibration technique for Time-Interleaved Analog-to-Digital Converters (TI-ADCs) is presented in this paper. This technique is applicable to equalized digital communication receivers. As shown in the literature, in a digital receiver it is possible to treat the TI-ADC errors as part of the communication channel and take advantage of the adaptive equalizer to compensate them. Therefore calibration becomes an integral part of channel equalization. No special purpose analog or digital calibration blocks or algorithms are required. However, there is a large class of receivers where the equalization technique cannot be directly applied because other signal processing blocks are located between the TI-ADC and the equalizer. The technique presented here generalizes earlier works to this class of receivers. The error backpropagation algorithm, traditionally used in machine learning, is applied to the error computed at the receiver slicer and used to adapt an auxiliary equalizer adjacent to the TI-ADC, called the Compensation Equalizer (CE). Simulations using a dual polarization optical coherent receiver model demonstrate accurate and robust mismatch compensation across different application scenarios. Several Quadrature Amplitude Modulation (QAM) schemes are tested in simulations and experimentally. Measurements on an emulation platform which includes an 8 bit, 4 GS/s TI-ADC prototype chip fabricated in 130nm CMOS technology, show an almost ideal mitigation of the impact of the mismatches on the receiver performance when 64-QAM and 256-QAM schemes are tested.
Although the CE solves the compensation problem, there is still a problem with the adaptation of the CE, because slicer error components associated with different interleaves are also combined by the signal pre-processing blocks. Thus, the slicer error is not directly applicable to adapt the CE. To solve the adaptation problem, in this work we propose to adapt the CE using a post-processed version of the error at the slicer of the receiver. The post-processing is based on the backpropagation algorithm [43], widely used in machine learning applications [44]. Its main characteristic is that, in a multi-stage processing chain where several cascaded blocks have adaptive parameters, it is able to determine the contribution to the error generated by each one of these blocks and their associated parameters for all the stages. Backpropagation is used in combination with the Stochastic Gradient Descent (SGD) algorithm to adjust the coefficients of the CE in order to minimize the slicer Mean Squared Error (MSE). The use of the CE in combination with the backpropagation algorithm results in robust, fast-converging background calibration. As we shall show, this proposal is not limited to the compensation of individual TI-ADCs (which is the case for most calibration techniques), but extends to the entire receiver Analog Front End (AFE), enabling the compensation of impairments such as time skew, quadrature, and amplitude errors between the in-phase and quadrature components of the signal in a receiver based on Phase Modulation (PM) or Quadrature Amplitude Modulation (QAM). Because ultrafast adaptation is usually not needed, the backpropagation algorithm can be implemented in a highly subsampled hardware block which does not require parallel processing. Therefore, the implementation complexity of the proposed technique is low, as will be discussed in detail. Although the technique presented here is general and can be used in digital receivers for different applications, the primary example in this paper is a receiver for coherent optical communications. State-of-the-art coherent optical receivers operate at symbol rates around 96 Giga-Baud (GBd) and require ADC sampling rates close to 150 GS/s and bandwidths of about 50 GHz. In the near future symbol rates will increase to 128-150 GBd or higher, requiring bandwidths in the range of 65-75 GHz and sampling rates in the 200-250 GS/s range. High-order QAM schemes (e.g., 64-QAM, 256-QAM and higher) will be deployed to increase spectral efficiency [45]. High-order modulation schemes increase the resolution and overall performance requirements on the ADC. The benefits of the proposed technique are experimentally verified using 64-QAM and 256-QAM schemes. The rest of this paper is organized as follows. Section II lists the requirements of calibration techniques suitable for digital receivers. These requirements set this application apart from more generic applications. Section II also compares the technique proposed here with other state-of-the-art techniques in the light of said requirements. Section III presents a discrete-time model of the TI-ADC system in a Dual-Polarization (DP) optical coherent receiver. The error-backpropagation-based adaptive CE is introduced in Section IV.
Simulation results are discussed in Section V, while the experimental evaluation is presented in Section VI. The hardware complexity of the proposed scheme is discussed in Section VII and conclusions are drawn in Section VIII. The technique proposed in this paper meets all the above requirements. In the following we compare it to some of the state-of-the-art calibration techniques presented in the technical literature, with focus on communications receiver applications. The following is a broad categorization of the most important techniques described in the literature and their comparison with the one proposed in this paper: • Group 1 (G1): Techniques based on the autocorrelation of the quantized signal [21], [22], [23], [24], [25]. These techniques are well suited to estimating the sampling time errors, but do not provide information on frequency response mismatches or mismatches affecting different TI-ADCs in the AFE of a QAM receiver. This group of techniques satisfies Criterion 1 but not Criteria 2, 3, and 4.
• Group 4 (G4): Techniques based on dither injection [33], [34], [35], [36]. Dither injection techniques are based on the addition of a known signal to the sample that is being quantized in order to estimate the calibration parameters of an individual TI-ADC. This group may meet Criteria 1 and 4 but not 2 and 3. Table 1 summarizes the comparison of the above techniques with the one proposed in this paper on the basis of Criteria 1 through 4 listed above.
It is important to notice that communications applications of TI-ADCs enjoy an important advantage over more general applications. This advantage is the availability of the global optimality criterion referred to as Criterion 3 above, in other words, the maximization of the SNR at the slicer. In general, applications other than digital communications receivers lack a global criterion such as the slicer SNR, whose optimization can be exploited to compensate the impairments of the TI-ADC, or more generally, of the AFE. Therefore, TI-ADCs
where ω is the angular frequency and L is the fiber length. The polarization effects are modeled by the Jones matrix
J(ω) = [ U(ω)  W(ω) ; −W*(ω)  U*(ω) ],
where * denotes complex conjugation. Matrix J(ω) is unitary (i.e., det(J(ω)) = |U(ω)|² + |W(ω)|² = 1, ∀ω) and models the effects of the PMD. Chromatic dispersion is modeled by a frequency response with quadratic phase proportional to β₂Lω², where β₂ is related to the dispersion parameter D = (2πc/λ²)β₂, with c and λ being the speed of light and the wavelength, respectively. (Parameter D is expressed in ps/(nm·km), representing the differential delay, or time spreading in ps, for a source with a spectral width of 1 nm traveling over 1 km of fiber. It depends on the fiber type, and in the absence of equalization it would limit the error-free bit rate or the transmission distance.) We highlight that a coherent receiver can compensate for PMD and CD impairments without noise enhancement or signal-to-noise ratio penalty [46]. The transmitted symbols â_k^(H) and â_k^(V) are assumed to be independent and identically distributed, such that E{â_m â_k*} = δ_{m−k}, with δ_k being the discrete-time impulse function. We also define ŝ^(H)(t) and ŝ^(V)(t) as the complex signals at the receiver input for polarizations H and V, respectively. Then, the noise-free complex electrical signals provided by the optical demodulator can be expressed in terms of these signals.
The four electrical signals s^(i)(t), i = 1, ..., 4, correspond to the in-phase and quadrature components of the H and V polarizations, respectively. Every path gain/attenuation is modeled by a gain factor whose deviation from unity is the gain error.
The quantizer is modeled as additive white noise with uniform distribution, since the resolution of the ADC is considered sufficiently high. The digitized high-frequency samples can be written as in (36) (see Appendix), where q^(i)[n] is the quantization noise. Errors and mismatches of the TI-ADC can be compensated by using digital finite impulse response (FIR) filters applied to each interleaved branch. In the case of a communication receiver, the digitized signal could be applied to a time-varying equalizer immediately following the TI-ADC (see [1] for more details). The practical implementation of this periodically time-varying equalizer is briefly addressed in Section IV-A, and in more detail in [3].
Similarly to what was done in previous works [1], [2], [3], [26], in the backpropagation-based architecture introduced in this paper we propose to adaptively compensate the TI-ADC mismatch, after the mitigation of the offset, using a filter with an M-periodic time-varying impulse response g_m^(i)[l], with m = n mod M and l = 0, ..., L_g − 1, where L_g is the number of taps of the compensation filters and w^(i)[n] is the DC-offset-free signal obtained by subtracting the estimated DC offset from the digitized samples.
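A minimal sketch of such an M-periodic time-varying FIR is shown below (names and tap values are hypothetical): the set of L_g taps applied at time n is selected by the interleave index n mod M.

import numpy as np

def apply_ce(w: np.ndarray, g: np.ndarray) -> np.ndarray:
    """w: offset-free input samples; g: taps of shape (M, Lg), one FIR per interleave."""
    M, Lg = g.shape
    y = np.zeros_like(w)
    for n in range(Lg - 1, len(w)):
        # taps g[n % M] are applied to w[n], w[n-1], ..., w[n-Lg+1]
        y[n] = g[n % M] @ w[n - Lg + 1:n + 1][::-1]
    return y

w = np.random.default_rng(2).standard_normal(16)
g = np.zeros((4, 3)); g[:, 0] = [1.00, 0.98, 1.02, 0.99]   # hypothetical per-interleave gains
print(apply_ce(w, g)[:8])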
The adaptation algorithm of the CE as proposed in [1] or [2] cannot be implemented in coherent optical communication receivers. This is because of the presence of several signal pre-processing blocks placed between the CE and the slicers, such as the BCD or the MIMO FFE that compensates the channel response, where l = 0, ..., L_g − 1 and n_0 is an arbitrary time index. The structure of the CE shown in Fig. 4 can be extended to include the compensation of the quadrature error of the optical demodulator; this will be addressed in a future work. Although the receiver DSP for wireline and wireless may include other algorithms, the technique presented here can be applied to them with minor modifications.
In high-speed optical communication applications, the use of parallel implementations is mandatory. Typically, a parallelism factor on the order of 128 or higher is adopted. Furthermore, given the number of interleaves M of the TI-ADC, the parallelism factor P can be selected to be a multiple of M, i.e., P = q × M with q an integer. In this way, the different time-multiplexed taps are located in fixed positions of the parallel implementation, and we do not incur significant additional complexity when compared to a filter with just one set of coefficients (see [2] for more details). The complexity of the resulting filter is similar to that of the I/Q-skew compensation filter already present in current coherent receivers [4]. Therefore, the typical skew correction filter can be replaced by the CE without adding significant penalties in area or power, since the CE is also able to correct time skew.
The filter coefficients of the impulse response in (14) are adapted using the slicer error at the output of the receiver DSP block. Let u_k^(i) be the equalized signal at the input of the slicer (see Fig. 4). The latter is a quantization device that makes the symbol decisions ã_k^(i). As usual (e.g., see [48]), in the analysis we assume that there are no decision errors, and thus we use the decisions in place of the transmitted symbols a_k^(i). Since the slicer operates at the 1/T sampling rate, a subsampling by T/T_s is needed after the receiver DSP block. Then, the total squared error at the slicer at time instant k, E_k, is defined as the sum of the squared errors u_k^(i) − a_k^(i) over the signal components. Let E{E_k} be the MSE at the slicer, with E{·} denoting the expectation operator. In this work we iteratively adapt the real coefficients of the CE defined by (14) using the Least Mean Squares (LMS) algorithm in order to minimize the MSE at the slicer, where g_{m,p}^(i) is the L_g-dimensional coefficient vector at the p-th iteration, β is the adaptation step, and ∇_{g^(i)} denotes the gradient with respect to the CE coefficients. Expressing this gradient in terms of the error backpropagated through the T/2 receiver DSP block (see Fig. 5), as in (17), we can derive an all-digital compensation scheme using an adaptive CE with coefficients updated as in (26), where µ = αβ is the adaptation step size. Moreover, it is possible to estimate the DC offsets in the input samples, using the backpropagated error defined in (24), as in (27). Since channel impairments change slowly over time, the coefficient updates given by (26) and (27) do not need to operate at full rate, and subsampling can be applied. The latter allows implementation complexity to be significantly reduced. Additional complexity reduction is enabled by: 1) strobing the algorithms once they have converged, and/or 2) implementing them in firmware in an embedded processor, typically available in coherent optical transceivers. Practical aspects of the hardware implementation will be discussed in Section VII.
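To make the adaptation mechanism concrete, the toy Python sketch below illustrates the idea on a drastically simplified chain; all names, filters and step sizes are hypothetical, and the real receiver's BCD, MIMO FFE, timing recovery and oversampling are omitted. The slicer error is backpropagated through a fixed downstream FIR "DSP" block (each CE parameter sees the error weighted by the DSP tap it passed through) and used to LMS-adapt one gain per interleave of a 2-way interleaved ADC.

import numpy as np

rng = np.random.default_rng(1)
M, mu, N = 2, 2e-3, 30000
a = rng.choice([-1.0, 1.0], size=N)                  # transmitted symbols (BPSK, one sample per symbol)
r = np.convolve(a, [1.0, 0.3])[:N]                   # mild ISI channel seen by the ADC
gains = np.array([1.0, 0.8])                         # gain mismatch between the two interleaves
x = r * gains[np.arange(N) % M]                      # "TI-ADC" output samples
f = np.array([1.0, -0.3, 0.09])                      # fixed downstream DSP block (approx. channel inverse)
g = np.ones(M)                                       # CE: one gain per interleave, to be learned

for n in range(len(f), N):
    idx = n - np.arange(len(f))                      # sample times feeding the DSP output at time n
    y = g[idx % M] * x[idx]                          # CE output for those samples
    u = f @ y                                        # equalized signal at the slicer input
    e = u - a[n]                                     # slicer error (correct decisions assumed)
    for d, m in enumerate(idx % M):                  # backpropagate e through f and update the CE
        g[m] -= mu * e * f[d] * x[idx[d]]

print("learned CE gains:", np.round(g, 3))           # expected to approach roughly [1, 1/0.8]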
In this section we address the convergence properties of the proposed calibration algorithm, using the traditional LMS algorithm as a reference. In particular, it is well known that the convergence of the latter is not affected by local minima of the error surface where the gradient algorithm could get trapped, because the error surface is quadratic. The proposed system is equivalent to a traditional LMS adaptive filter [50], with the exception that the gradient is not calculated directly from the slicer error but from the error backpropagated through the receiver DSP block; except for the leakage factor (1 − µζ), the adaptation is equivalent to that of the traditional LMS.
In the following, we summarize the proposed error-backpropagation-based background calibration algorithm.
Since the technique operates in the background, steps 2 through 10 run continuously to enable tracking of parameter variations caused by temperature, voltage, aging, etc. This happens even after the algorithm reaches convergence. However, since ultrafast compensation is usually not needed, steps 7 to 10 can be implemented in a highly subsampled hardware block. Consequently, the implementation complexity and power can be reduced.
In this section the proposed backpropagation-based mismatch compensation technique is tested using computer simulations. The simulation setup is shown in Fig. 6. The simulated parameters are summarized in Table 2.
An improvement in BER of one order of magnitude can be achieved with the proposed technique. In particular, notice that the serious impact on the receiver performance of the I/Q time skew values of Table 2 is essentially eliminated by the proposed CE with L_g = 7 taps.
BER histograms for the receiver with and without the CE in the presence of the combined effects are shown in Fig. 9. Results of 4000 cases with random gain errors, sampling phase errors, I/Q time skews, BW mismatches, and DC offsets as defined in Table 2 are presented. Fig. 9 also depicts the performance of the CE with L_g = 13 taps. Without the CE, a severe degradation of the receiver performance as a consequence of the combined effects of the TI-ADC mismatches is observed. However, note that the CE is able to compensate the impact of all combined impairments, improving the BER in some cases by almost 100 times. Moreover, note that a slight performance improvement can be achieved by increasing the number of taps L_g from 7 to 13.
In multi-gigabit transceivers, the impairments of the AFE and TI-ADCs change very slowly over time, as mentioned in Section IV-C. Hence, decimation can be applied, since the coefficient updates given by (26) and (27) do not need to be made at full rate. In ultra-high-speed transceiver implementations (e.g., for optical coherent communication), block processing and frequency-domain equalization based on the Fast Fourier Transform (FFT) are widely used [4]. Therefore, we propose to update the CE by performing block decimation over the error samples. The procedure is detailed as follows. Let N be the block size in samples to be used for implementing the EBP, and define D_B as the block decimation factor. In this way, the CE is updated using only one block of N consecutive samples of the oversampled slicer error (25) every D_B blocks. FIGURE 9. Histogram of the BER for 4000 random cases with combined impairments as defined in Table 2. Reference BER of ∼ 1 × 10⁻³. Top: without CE. Middle: CE w/ L_g = 7 taps. Bottom: CE w/ L_g = 13 taps.
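The block-decimated bookkeeping can be expressed in a few lines; the Python sketch below is purely illustrative (block size and decimation factor are hypothetical) and only shows which error blocks are consumed by the update logic.

def ce_update_schedule(total_blocks: int, d_b: int):
    """Indices of the error blocks actually used for adaptation (1 out of every D_B)."""
    return [b for b in range(total_blocks) if b % d_b == 0]

N, D_B, total = 256, 64, 1024
used = ce_update_schedule(total, D_B)
print(f"{len(used)} of {total} blocks used -> {len(used) * N} error samples "
      f"instead of {total * N} (update activity reduced by ~{D_B}x)")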
We demonstrate the benefits of our proposal using a digital communication link whose receiver DSP is implemented on a host computer. Multiple Pseudo-Random Binary Sequences (PRBSs) with configurable length and seed are generated in the FPGA. The amplitude of the symbols and the Additive White Gaussian Noise (AWGN) can be set through the coefficients G_S and G_N, respectively; we are therefore able to evaluate different SNR scenarios. The symbols with added noise are sent to a commercial 16-bit Digital-to-Analog Converter (DAC) board [56] using an LVDS interface. The DAC synthesizes the samples at 1/T = 1 GS/s. This sampling rate is adopted due to limitations on the FPGA and DAC clocks. The communication channel is modeled as a low-pass filter with a −3 dB cut-off frequency of 650 MHz [57]. Figure 12 shows the measured eye diagrams at the input and output of the channel with Binary Phase Shift Keying (BPSK) modulation. Notice that significant ISI is added by the channel. Although not explicitly shown, the impact of the ISI is even more significant for the higher-order modulations used in the experiments, such as 8-PAM/64-QAM and 16-PAM/256-QAM. This ISI is an important part of the experiment since it enables the verification of the backpropagation technique, as discussed later in this section. On the receiver side, the signal is acquired by the TI-ADC described in [58], operating at a sampling rate of 2 GS/s (i.e., an oversampling ratio of T/T_s = 2 is used in the DSP blocks). The clocks for both the DAC and the ADC are generated from a single 10 MHz clock reference. More details of the experimental platform as well as the fabricated TI-ADC can be found in [58], [59].
As explained before, the available experimental setup has one TI-ADC. Therefore, a suitable signal for the coherent receiver has to be assembled by combining four independent measurements. This is done by collecting one set of samples for each of the four signal components. The comparison of the BER curves for the receiver with and without the proposed technique is shown in Fig. 14.
The performance of the receiver is severely affected when the TI-ADC mismatch is not mitigated. A sampling phase error of 4% has been set for Fig. 14(a), whereas 1% is set for Fig. 14(b), although the mismatch in this case is much smaller than in the previous case. (Mismatches exercised in our experiments are, as a percentage of the symbol rate or the sampling period of the receiver AFE, comparable to those observed in more advanced technology nodes and recently reported coherent optical transceivers; see reference [60]. In other words, a more advanced technology enables at the same time: (1) higher sampling rates (96 GS/s in [60]), and (2) smaller impairments when measured in absolute units such as picoseconds. The net result is that the relative impact on the receiver performance in our experiments is comparable to that experienced in more advanced technology nodes and state-of-the-art coherent transceivers.) After enabling the proposed technique, the performance of the receiver is restored to almost replicate the case without mismatch. This result indicates that our proposal is able to nearly eliminate the receiver penalty introduced by the mismatches of the TI-ADC.
The spectrum comparison for a 972 MHz sinusoidal input is shown in Fig. 15. Samples from one of the emulated channels are collected before and after running the technique on the communication setup of Fig. 11. Since the CE would not adapt properly with a sinusoidal signal, for this experiment the CE is frozen after being exercised with pseudo-random 64-QAM signals. Mismatches of ±4 % of T in the sampling phase and ±5 % of gain with respect to unity are applied. The input tone is identified with a marker, and spurs from mismatches in the TI-ADC are marked with a ×. Notice that the spurs caused by the mismatches among the interleaved channels seriously degrade both the Signal-to-Noise-plus-Distortion Ratio (SNDR) and the Spurious-Free Dynamic Range (SFDR), to 19.4 dBFS and 21.9 dBFS, respectively. After applying the proposed technique, the performance of the TI-ADC is boosted to 39 dBFS and 46.6 dBFS for SNDR and SFDR, respectively.
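The SNDR and SFDR figures quoted above can be estimated from a windowed FFT of the captured tone; the following sketch shows one common way to do so, with the window choice and the number of bins attributed to the tone being assumptions rather than the authors' exact procedure.

```python
import numpy as np

def sndr_sfdr(x, n_fft=4096):
    """Estimate SNDR and SFDR (in dB) of a sinusoidal capture from its spectrum."""
    X = np.abs(np.fft.rfft(x[:n_fft] * np.hanning(n_fft))) ** 2
    X[0] = 0.0                                   # ignore DC
    k0 = int(np.argmax(X))
    lo, hi = max(k0 - 2, 0), k0 + 3
    sig = X[lo:hi].sum()                         # tone power (including leakage bins)
    rest = X.copy()
    rest[lo:hi] = 0.0                            # everything else: noise + distortion
    sndr = 10 * np.log10(sig / rest.sum())
    sfdr = 10 * np.log10(sig / rest.max())       # relative to the largest spur
    return sndr, sfdr
```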
A comparison to other calibration techniques is reported in Table 3. The technique proposed in this paper is the only one that meets all four criteria established in Section II. As stated there, these criteria are important for digital communication receivers. In addition, our technique is the one that provides the largest improvement when the calibrated high-frequency SNDR (HF SNDR) is compared with the uncalibrated one. The best way to compare the performance of calibration techniques in the context of a digital communication receiver application is on the basis of the receiver BERs before and after calibration (see Fig. 14). In the publications referenced in Table 3, this data is not provided.
In these architectures, the parallelism factor P can be chosen to be a multiple of the ADC parallelism factor M (see Fig. 16). We highlight that the resulting filter is equivalent in complexity to the I/Q-skew compensation filter already present in current coherent receivers [4]. Since the proposed scheme also corrects skew, the classical skew correction filter can be replaced by the proposed CE without incurring significant additional area or power.
Note: the focus of our paper is the calibration technique proposed, and not the absolute performance of the ADC. The latter may be limited by effects that cannot be calibrated, such as random jitter, kT/C noise, and thermal noise.
A straightforward implementation of error backpropagation must include a processing stage for each DSP block located between the ADCs and the slicers. Typically these blocks comprise the BCD, FFE, TR interpolators, and the FCR. All these blocks can be mathematically modeled as a sub-case of the generic receiver DSP block used in Section IV-C and the Appendix. The EBP block is algorithmically equivalent to its corresponding DSP block, with the only difference that the coefficients are transposed. Therefore, in the worst case, the EBP complexity would be similar to that of the receiver DSP block. Since doubling power and area consumption is not acceptable for commercial applications, important simplifications must be provided.
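For a single FFE stage, the transposed-coefficient idea can be sketched as follows: the error-backpropagation stage filters the slicer error with the time-reversed, conjugated taps of the forward filter (the adjoint operation). This is only an illustrative sketch; in the receiver, analogous adjoint stages for the BCD, TR interpolators, and FCR would be chained in reverse order.

```python
import numpy as np

def ffe(x, c):
    """Feed-forward equalizer: forward filtering of x with taps c."""
    return np.convolve(x, c, mode="full")[:len(x)]

def ebp_through_ffe(e, c):
    """Backpropagate the slicer error e through the FFE.

    The EBP stage has the same structure as the FFE, but the coefficients are
    transposed (time-reversed and conjugated), i.e., it applies the adjoint of
    the forward filtering operation.
    """
    c_t = np.conj(c[::-1])
    z = np.convolve(e, c_t, mode="full")
    return z[len(c) - 1 : len(c) - 1 + len(e)]
```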
Considering that AFE and TI-ADC impairments change very slowly over time in multi-gigabit optical coherent transceivers, the coefficient updates given by (26) and (27) do not need to be computed at full rate and can therefore be decimated.
Note: a serial implementation typically requires that hardware such as multipliers be reused with variable numerical values of the coefficients, whereas in a parallel implementation the hardware can be optimized for fixed coefficient values. This results in a somewhat higher power per operation in a serial implementation. Nevertheless, the drastic power reduction achieved through decimation greatly outweighs this effect.
Figure: equivalent discrete-time model of the analog front-end and TI-ADC system with impairments for the signal component given by (35) (i.e., without DC offsets and quantization noise), for signal s^(i)(t) with i = 1, 2, 3, 4.
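A behavioural stand-in for such a model can be sketched as follows; per-channel bandwidth mismatch and quantization are omitted, and the function name ti_adc as well as the mismatch values are assumptions used only for illustration.

```python
import numpy as np

def ti_adc(s, t, M=4, gains=None, offsets=None, skews=None, Ts=1.0):
    """Behavioural model of an M-channel TI-ADC with per-channel gain,
    DC-offset, and sampling-time (skew) errors.

    s : callable returning the continuous-time input s(t)
    t : nominal sampling instants n*Ts
    """
    gains = np.ones(M) if gains is None else np.asarray(gains, dtype=float)
    offsets = np.zeros(M) if offsets is None else np.asarray(offsets, dtype=float)
    skews = np.zeros(M) if skews is None else np.asarray(skews, dtype=float)  # in Ts
    y = np.empty_like(t, dtype=float)
    for n, tn in enumerate(t):
        m = n % M                                    # interleave index
        y[n] = gains[m] * s(tn + skews[m] * Ts) + offsets[m]
    return y

# Example: 972 MHz tone sampled at 2 GS/s with small gain/skew mismatches
fs = 2e9
n = np.arange(4096)
tone = lambda t: np.sin(2 * np.pi * 972e6 * t)
y = ti_adc(tone, n / fs, M=4, gains=[1.0, 1.05, 0.95, 1.02],
           skews=[0.0, 0.04, -0.04, 0.02], Ts=1 / fs)
```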
Next we review the model of the TI-ADC with impairments used in this paper (see Fig. 3). The model captures the effects of the sampling time errors δ_m through the total impulse response defined for the m-th interleaved channel. It can then be shown that the digitized high-frequency signal can be expressed as in (34). We highlight that the impact of both the AFE impairments and the M-channel TI-ADC mismatches is included in (35). Finally, the digitized high-frequency sequence is obtained by replacing (35) in (32).
| 5,892.6 | 2022-01-01T00:00:00.000 | [ "Computer Science" ] |
Relationship between tax avoidance and institutional ownership over business cost of debt
Abstract Beginning with classical theories of finance, such as the capital structure theory, the trade-off theory of capital structure, and the pecking order theory, the literature shows a negative correlation between tax avoidance and institutional ownership, on the one hand, and the business cost of debt, on the other. However, the impact of tax avoidance and institutional ownership on corporate debt policy in Vietnam is an under-researched topic. The aim of this study is to identify the effect of these factors on business borrowing policy, using data on 207 companies listed on the Ho Chi Minh City Stock Exchange (HOSE) in Vietnam from 2008 to 2016. The study employs the model proposed by Lim (2009), estimated with the Feasible Generalized Least Squares (FGLS) method to correct for defects in the panel specification. The results show no conclusive empirical evidence of a relationship between the business cost of debt and either tax avoidance or institutional ownership. This result contrasts with the conclusions of previous studies and can be explained by the characteristics of the funding market in Vietnam, where financial organizations often focus on business results and management efficiency in making lending decisions, a characteristic that shows no sign of changing soon.
ABOUT THE AUTHORS
Nguyen Minh Ha is the rector of Ho Chi Minh City Open University, Vietnam. His research interests include issues in finance and economics. He has published in various international journals. Pham Minh Vuong is a lecturer at Ho Chi Minh City Open University, Vietnam. His research interests are accounting and finance issues. Tran Thi Phuong Trang graduated from Ho Chi Minh City Open University, Vietnam. She currently works for a commercial organisation in Vietnam.
PUBLIC INTEREST STATEMENT
The impact of tax avoidance and institutional ownership on corporate debt policy in Vietnam is an under-researched topic. The paper's aim is to identify the effect of tax avoidance and institutional ownership on the cost of debt for listed firms in Vietnam. The study uses the model proposed by Lim (2009) for this purpose.
There is no conclusive empirical evidence of a relationship between the business's cost of debt and tax avoidance or institutional ownership. In contrast to previous studies, the findings can be explained by the characteristics of the funding market in Vietnam, where financial organizations often focus on business results and management efficiency in making lending decisions.
Introduction
Businesses take advantage of the current tax regime to lower their tax payments by reducing their taxable income (Noor et al., 2009), thereby increasing current profits as well as the company's after-tax value (Chung et al., 2002; Noor et al., 2009; Salehi et al., 2019). However, tax avoidance can reduce company value in cases where costs are directly related to a firm's tax planning, such as adaptation costs and agency costs (Fuadah & Kalsum, 2021; Wang, 2010). According to Graham and Tucker (2005), savings from tax avoidance can be considered in making financial plans, as they are a form of funding that reduces a business's dependence on external borrowing. In addition, tax avoidance increases financial flexibility, thereby increasing credit quality, reducing bankruptcy risk (Lim, 2009), and lowering a business's average cost of capital (Monila, 2005).
Tax avoidance behaviour can also reflect the subjective actions of a business's managers for personal purposes. This means that tax avoidance can increase information asymmetry within businesses. Chung et al. (2002) show that increasing the ownership ratio of institutional shareholders can improve the quality of corporate governance and limit profit manipulation through the choice of accounting method. Some empirical research (Utkir, 2012) confirms the controlling effect of institutional shareholders in the Malaysian market, but other literature (Lim, 2009; Sunarto & Widjaja, 2021) shows the opposite in the Korean and Indonesian markets. There is also research showing that at well-managed companies, measured by the degree of institutional ownership, tax avoidance has a favourable impact on corporate value (Desai & Dharmapala, 2009). In sum, no consensus has been reached in the debate over managerial opportunism and tax avoidance.
Overall, tax avoidance can help businesses reduce the cost of debt by temporarily taking advantage of savings on the amount paid to the state. It can, however, also involve agency costs arising from the separation between management and ownership; as a result, tax avoidance may serve the personal needs of managers. To date, no studies have examined the relationship between tax avoidance and institutional ownership with respect to the debt policy of businesses in Vietnam. Therefore, we examine this relationship at companies listed on the HOSE in Vietnam.
Literature review and hypotheses development
Since 1986, with the landmark reform usually referred to as Doi Moi, the Vietnamese market has emerged from the socialist centrally planned model. Over the following decades, substantial changes have helped Vietnam attract vast amounts of investment from around the globe. However, a gap exists between Vietnam's development process and the international community's understanding of the country, because the volume of research on the Vietnamese economy is limited. One such aspect is the mechanism of the Vietnamese financial market. Hanlon and Heitzman (2009) define tax avoidance as the reduction in tax per currency unit of pre-tax accounting profit. Tax evasion, by contrast, is defined as the transfer of value from the government to shareholders (Desai & Dharmapala, 2009). The difference between taxable income and accounting income is affected by many different factors in two main systems: financial accounting standards and tax rules. Financial accounting standards adhere to certain fundamental principles set by GAAP (Generally Accepted Accounting Principles), which help to describe financial transactions and to provide useful information for relevant stakeholders. Tax rules, however, are determined by political conditions, as legislators enact tax laws to increase the state's income from taxes, encouraging or discouraging certain activities in the economy.
Many studies have explored why some businesses avoid taxes more than others. Researchers approach this question from different perspectives. For example, some arguments are based on business characteristics, the field of operations, size, or the age of the business. Others explain it based on ownership structure and organizational characteristics (Desai & Dharmapala, 2004, 2009, 2011; Graham & Tucker, 2005). An increase in tax avoidance leads to two perceptions of the consequences of such actions: first, tax avoidance is interpreted as increasing other tax incentives; second, it involves agency costs, which managers can use as a tool to cover up opportunistic behaviour. In the first perspective, according to Graham and Tucker (2005), dodging taxes is the act of taking advantage of tax incentives, such as using debt. This view suggests that tax avoidance can be an alternative to external borrowing, so it should have a negative relationship with the cost of debt, and this relationship can be stronger with a higher level of institutional ownership (Lim, 2009). The second perspective emphasizes the correlation between tax avoidance and agency costs, as tax avoidance can be a cover for actions that divert real profits to managers.
Tax avoidance is the act in which businesses take advantage of legal provisions to minimize the amount of tax paid, whereas tax evasion involves providing false information to limit the amount of tax paid (Sandmo, 2005). Because tax evasion is illegal, taxpayers who try to reduce taxes in this way are concerned about the possibility of their actions being discovered. Tax avoidance, in contrast, is a legally sanctioned activity in which taxpayers use tax provisions to reduce their tax liability, for example by converting labour income into capital income to take advantage of lower tax rates.
The theory of the company's capital structure introduced by Modigliani and Miller in 1958 can be summarized as two cases, with and without the effect of taxes, related to firm value and the cost of capital. In the case of no taxation, the value of a company with debt is equal to the value of the firm without debt, while the average cost of capital is constant regardless of changes in the capital structure. If taxes are owed, the value of a company employing debt is equal to the value of the debtless company plus the present value of a tax shield (a reduction in taxable income attained through allowable deductions such as charitable donations, amortization and depreciation). With respect to the cost of capital, if taxes are owed, this theory holds that the required return on equity increases with increasing use of financial leverage. The benefit from the tax shield helps to reduce the Weighted Average Cost of Capital (WACC); however, the rising required rate of return on equity, as the use of financial leverage increases, raises equity risk. The capital structure theory of Modigliani and Miller (1958) is reviewed in this study to explain why businesses do not use the maximum amount of debt to gain benefits from the tax shield. Kraus and Litzenberger (1973) propose the trade-off theory of capital structure to explain why businesses are often financed in part by both debt and equity. In this theory, they suggest that debt financing is costly, most notably because of the cost of financial distress; therefore, businesses cannot finance themselves entirely with loans. For every additional percentage increase in debt, the benefit of the tax shield increases, but so does the expected cost of financial distress. When the present benefit from the tax shield no longer exceeds the cost of financial distress, additional borrowing no longer benefits the business. Because of this, companies seek to optimize their total business value based on this equilibrium principle, determining how much debt and how much equity are optimal for their capital structure. Some authors rely on capital structure theory in studying the relationship between tax avoidance and debt costs (Bhojraj & Sengupta, 2003; Desai & Dharmapala, 2009; Graham & Tucker, 2005; Lim, 2009; Nguyen Minh et al., 2021). Jensen and Meckling (1976) introduce the concept of agency costs as the aggregated costs of an organized contract. Because of the distinction between ownership and management at companies, managers often have a better understanding of the true value of assets and of current and potential risks, causing information asymmetry. In addition, decentralization in business can have consequences: managers directly run business activities, so they can take actions to maximize their personal benefits, and because of asymmetric information they can make decisions that harm the interests of investors. In this study, agency costs arising from information asymmetry explain how managers implement tax avoidance for personal gain, leading to the risk that it will reduce business creditworthiness and increase the cost of debt.
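As a numerical illustration of the tax-shield argument (with figures that are purely assumed and not taken from the study, and with the perpetual-debt simplification that values the tax shield as t_c × D):

```python
# Illustrative Modigliani-Miller computation with corporate taxes
V_U = 100.0      # value of the unlevered (debt-free) firm
D = 40.0         # market value of debt
t_c = 0.20       # assumed corporate tax rate
V_L = V_U + t_c * D          # levered value = unlevered value + PV of tax shield
E = V_L - D                  # implied equity value

r_d, r_e = 0.08, 0.14        # assumed costs of debt and (levered) equity
wacc = (E / V_L) * r_e + (D / V_L) * r_d * (1 - t_c)
print(V_L, round(wacc, 4))   # 108.0 and a WACC lowered by the debt tax shield
```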
The pecking order theory (Myers & Majluf, 1984) argues that firms prefer internal sources of funding. Myers and Majluf (1984) showed the priority, in descending order, of corporate funding as follows: (1) retained earnings, (2) direct borrowing, (3) convertible debt, (4) ordinary shares, (5) non-convertible preference shares, and (6) convertible preference shares. This order helps to explain why businesses consider tax avoidance an internal resource that can be used to minimize external funding.
According to Bhojraj and Sengupta (2003), the business cost of debt can be affected by its characteristics, such as bankruptcy risk, agency costs, and information asymmetry. Therefore, businesses often avoid tax to increase their financial surplus and improve their credit quality, thereby reducing the cost of debt. Graham and Tucker (2005) study 44 enterprises with tax avoidance behaviour in the period 1975-2000, with similar results. They show that businesses often use tax avoidance to replace debt usage and reduce the cost of borrowing. Based on these research results, we propose the following hypothesis:
H1: Tax avoidance is negatively correlated with the cost of debt.
Desai and Dharmapala (2009) construct a model that shows the correlation between tax avoidance and profit-distorting behaviour. To hide tax avoidance behaviour from tax authorities, managers can take actions that limit shareholder control. Ashbaugh-Skaife et al. (2006) explain the cost of debt based on agency theory. Accordingly, creditors might be disadvantaged by information asymmetry caused by the behaviour of managers or shareholders who take advantage by transferring lenders' assets to themselves. In addition, institutional ownership is negatively correlated with tax avoidance, as explained by Desai and Dharmapala (2009). Institutional shareholders can use their dominance to limit managerial tax evasion behaviour, while also limiting information asymmetry. This is also the result in Chung et al. (2002), whose research suggests that institutional ownership can indirectly influence the cost of debt, because a higher proportion of institutional ownership is believed to lower agency costs and the cost of debt. Bhojraj and Sengupta (2003) and Nguyen Minh and Hiep (2019) provide empirical evidence on the direct effect of institutional ownership on the cost of debt at a firm. Companies with high institutional ownership often have lower debt costs due to higher credibility. Hereby, we propose our second hypothesis as follows:
H2: Institutional ownership is negatively correlated with the cost of debt.
Research model
To test H1 and H2, we use the model proposed by Lim (2009). Table 1 shows a brief summary of the variables in the research model. The expected signs of the correlations with the independent variables are shown in Table 2, together with relevant previous studies on the same subject.
Data collection
Secondary data, including the financial statements of listed companies, was collected from vietstock.com. The study period is from 2008 to 2016. To ensure uniformity in the data, we omitted businesses with special financial characteristics, namely finance and insurance businesses, banks, real estate companies, companies whose financial information was not disclosed during the study period, and businesses with negative income tax due. The final panel data set comprises 207 enterprises and a total of 1,863 observations. Tax rates are an important input for determining the implied revenue from the tax paid. However, from 2008 to 2016, the Vietnamese corporate income tax rate experienced many changes, as detailed in Table 2.
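The construction of the book-tax difference can be illustrated as follows; the column names and figures are hypothetical, and the sketch backs out an implied taxable figure from the tax actually paid and the statutory rate, which may differ in detail from the study's exact variable definitions.

```python
import pandas as pd

# Hypothetical firm-year records
df = pd.DataFrame({
    "pretax_book_income": [120.0, 80.0],    # reported figure from the income statement
    "current_tax_expense": [18.0, 22.0],    # tax paid/payable for the year
    "statutory_rate": [0.25, 0.25],         # CIT rate applicable in that year
})

# Implied taxable income backed out from the tax actually paid
df["implied_taxable_income"] = df["current_tax_expense"] / df["statutory_rate"]

# Book-tax difference: a common proxy for tax avoidance
df["BTD"] = df["pretax_book_income"] - df["implied_taxable_income"]
print(df[["implied_taxable_income", "BTD"]])
```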
Statistical description of the data
In Table 4, the cost of debt (COD) averages 0.054%; the difference between reported revenue and the implied revenue derived from the tax paid and the corresponding tax rate (BTD) averages VND 265.649 billion, with a maximum of VND 6,608.416 billion and a minimum of VND −31.608 billion; the average business total accrual (TA) is VND 28.838 billion, with a maximum of VND 27,860 billion.
Correlation and the variance inflation factor (VIF)
We examined the correlation coefficients between the variables in the proposed model to test for the likelihood of multicollinearity (Table 4).
According to these test results, none of the variables have a VIF greater than 5 (Table 5), no pairs of variables have excessively high correlation, and the correlation coefficients are generally less than 0.5. Only the pair SIZE and BTD is highly correlated, with a coefficient of 0.679 (Table 6). This indicates that the model has a low likelihood of multicollinearity.
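A VIF screen of this kind can be reproduced with standard tools; the sketch below assumes the regressors are held in a pandas DataFrame with columns such as BTD, TA, INST, and SIZE, which are illustrative names.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X: pd.DataFrame) -> pd.Series:
    """VIF for each regressor; values above 5 would flag multicollinearity."""
    Xc = np.column_stack([np.ones(len(X)), X.values])   # prepend a constant term
    return pd.Series(
        [variance_inflation_factor(Xc, i + 1) for i in range(X.shape[1])],
        index=X.columns,
    )

# Usage: vif_table(df[["BTD", "TA", "INST", "SIZE", "LEVERAGE", "CFO", "AGE"]])
```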
Regression results
To determine which model to use, we carried out OLS (Ordinary Least Squares), REM (Random Effects Model), and FEM (Fixed Effects Model) estimations. The result of the Breusch and Pagan Lagrangian test was Prob > chi2 = 0.0000, indicating that the REM model is better than OLS (Table 7). The result of the Hausman test, Prob > chi2 = 0.165, shows that REM is a better fit for our proposed research model than FEM (Table 8). However, the Wald test result (Prob > chi2 = 0.0000) shows evidence of heteroskedasticity (Table 9). This defect in the model is addressed using the FGLS (Feasible Generalized Least Squares) method. The regression results are reported in Table 10.
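The FGLS correction for groupwise heteroskedasticity can be illustrated with a simple two-step reweighting; this is only a sketch of the idea, not the study's actual estimator, which was presumably run in a dedicated statistics package.

```python
import numpy as np

def fgls_panel(y, X, firm_id):
    """Two-step FGLS for groupwise (per-firm) heteroskedasticity.

    y, X    : stacked panel data (X should already include a constant column)
    firm_id : array mapping each observation to its firm
    """
    # Step 1: OLS to obtain residuals
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_ols

    # Step 2: estimate one residual variance per firm, then reweight and re-estimate
    firm_id = np.asarray(firm_id)
    sig2 = {f: resid[firm_id == f].var() for f in np.unique(firm_id)}
    w = 1.0 / np.sqrt(np.array([sig2[f] for f in firm_id]))
    beta_fgls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return beta_fgls
```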
Tax avoidance and the cost of debt
Business book-tax difference (BTD) and total accruals (TA) are used to measure tax avoidance behaviour. The regression results in Table 10 indicate the absence of an analytical basis for confirming a correlation between the tax avoidance measures and the business cost of debt (COD). Therefore, there is not enough evidence to support H1. This result is inconsistent with the results of Desai and Dharmapala (2009), who state that tax avoidance is a funding source for business activities and reduces external borrowing. In their view, it helps businesses reduce COD in two ways: first, by reducing debt financing; second, because the use of less debt helps businesses improve their credit rating in the eyes of creditors such as commercial banks, thereby reducing the cost of using debt. However, in Vietnam, the use of capital from tax avoidance does not appear to improve the credit rating of these businesses. The result also contrasts with Bhojraj and Sengupta (2003), Monila (2005), and Lim (2009), who suggest that tax avoidance increases business credit quality and lowers a business's average cost of debt. The construction of these variables is explained in Section 3.1 of this paper.
Institutional ownership and the cost of debt
The regression result is insufficient to confirm H2 (P > |z| = 0.850), which indicates that institutional ownership (INST) is not a factor in reducing the cost of borrowing in Vietnam. This outcome contrasts with that of Desai and Dharmapala (2004), who found that greater institutional ownership is associated with lower agency costs and greater transparency, thereby improving credit ratings and reducing COD. This suggests that banks and credit institutions in the Vietnamese market do not consider the ownership structure of businesses in their evaluation of creditworthiness. Again, this result reflects the characteristics of those institutions' funding decisions.
In addition, we do not find any correlation between the control variables (TA, AGE, LEVERAGE, CFO, SIZE) and COD for the firms in the sample.
Summary and conclusion
The study focuses on identifying the role of specific business factors and the degree of impact of each factor on debt policy, using REM with FGLS estimation in our proposed research model. The result is that tax avoidance and institutional ownership have no statistically significant effect on a firm's COD. This confirms the approach of banks and credit institutions in Vietnam to customer evaluation. More specifically, these lenders generally do not view the use of tax avoidance capital and ownership structure as indicators of creditworthiness. Instead, these organizations often focus on business results and management efficiency in making lending decisions, and this characteristic of the Vietnamese lending industry shows no sign of changing soon. The study period extends only to 2016; however, given the stability of the lending industry in Vietnam, it is arguable that extending the research period would not yield significantly different results. Reverse causality, with the cost of debt driving tax avoidance, is excluded from the paper, as previous research demonstrates the opposite direction of influence (Graham & Tucker, 2005; Lim, 2009). The paper only points out some specific internal characteristics of the enterprise and considers the impact of these characteristics on the cost of debt. In fact, many other factors can affect the debt policy of an enterprise, including external factors such as macro policies on interest rates and inflation. We suggest that further studies include such external factors.
| 4,397.4 | 2022-01-20T00:00:00.000 | [ "Business", "Economics" ] |
Can We Take the Religion out of Religious Decision-Making? The Case of Quaker Business Method
In this paper, we explore the philosophical and theological issues that arise when a 'religious' process of decision-making, which is normally taken to require specific theological commitments both for its successful use and for its coherent explanation, is transferred into 'secular' contexts in which such theological commitments are not shared. First, presenting our example (Quaker business method), we show how such a move provokes new theoretical - and theological - questions, as well as questions for management and organisation studies. Second, we propose a response, grounded in a particular religious tradition but admitting of wider application, to one core question - namely that of the relationship between religious truth-claims and 'good' decisions.
Quaker business method is a process that has attracted interest in management and organisation studies as well as in theology, and one that has from time to time been adopted and adapted for use in non-Quaker organisations (see Allen 2016; Burton 2017; Michaelis 2010). Notable features of the 'Quaker business method' include the lack of voting, of confrontational debate and of other practices that emphasise differences of opinion; an intentional shared search for unity, which is not understood as compromise or consensus; contemporaneous agreement of minutes (especially minutes of decision) in the meetings at which the decisions are taken, with the expectation of shared ownership of the decisions thereafter; and the use of reflective silence in meetings. Our focus in this paper, however, is not on the details of Quaker business method itself, but rather on the more general issues raised by its use beyond Quaker organisations. (By 'Quaker organisations' we mean, for these purposes, organisations that explicitly claim Quaker identities and/or claim to shape their ways of working according to Quaker beliefs.) The first key point to note is that, in Quaker accounts of Quaker business method, it is explained in theological terms (see Anderson 2006; Grace 2000, 2006). According to standard Quaker accounts, a decision-making meeting is, like a Quaker meeting for worship, a process of collectively seeking to discern the will of God for the group in relation to the decision at hand. Thus, the British Quaker 'book of discipline' that serves as the authoritative account of Quaker faith and practice in Britain states concerning Quaker decision-making: In our meetings for worship we seek through the stillness to know God's will for ourselves and for the gathered group. Our meetings for church affairs [sc. decision-making meetings] are also meetings for worship based on silence, and they carry the same expectation that God's guidance can be discerned if we are truly listening… (Britain Yearly Meeting 1994: 3.02).
The same passage goes on to use the 'belief that God's will can be recognised through the discipline of silent waiting' to distinguish Quaker decision-making from a 'secular idea' (consensus). Quakers often insist that the secular idea of consensus is based on negotiation between pre-established positions - requiring mutual compromise so that the decision taken is agreeable to all present, or at least objectionable to none (Anderson 2006). The Quaker process, by contrast, prioritises not compromise between different points of view but the collective search for a right way forward, understood as the will of God.
Quaker business method, then, both for authoritative collectively-agreed accounts and for a significant body of Quaker scholars, is not contingently but necessarily connected to specific theological beliefs held by Quakers. These beliefs - minimally, from the text above, that God exists, that God has a will, and that individuals and groups can perceive the will of God - explain, in the sense of justifying or providing a good enough reason for, specific features of the method (for example, in the quotation given above, the use of silence), and provide key connections in an account of how and why the whole method works.
(A note on the book of discipline: it is approved by the national Quaker body and periodically revised, and includes both straightforwardly descriptive sections (for example, on the history of Quakerism) and straightforwardly normative sections (for example, on the requirements for the solemnisation of marriages). The sections discussed here, like much of the book, fall somewhere in between, as accounts of 'best practice' with an advisory character, explicitly open to revision in the light of experience. They are the nearest available approximation to an authoritative and normative account of Quaker business method.)
A corollary would seem to be that an explanation of Quaker business method - either an account of why specific features are there, or an account of how the whole method makes sense - that omitted reference to these beliefs would fail, either in coherence or in accuracy. It would be either an incoherent explanation or an incomplete explanation of Quaker business method.
The second key point to note is that - within this type of account of Quaker business method - beliefs are important not only at the level of explanation and justification, but at the level of practice. Consider, for example, the following description, synthesised from a range of sources, of how certain key features of Quaker business method relate to the search for divine guidance, or the attempt to discern the will of God (for relevant sources see Eccles 2009; Grace 2000, 2006; Mace 2012; Muers 2015).
Since the aim in Quaker decision-making is not to rely on the participants' views and opinions, but rather to learn the will of God for the group, the participants should not attempt to advocate for a specific point of view determined in advance, but rather to speak - and to listen - in ways that contribute to the shared search. The task of the 'chair' (the clerk) is to recognise and record the group's discernment - so he or she should offer suggested formulations of the decision and invite the group's further consideration, only moving on when it is clear that a right conclusion has been reached. Framing the meeting, and the various contributions, with reflective silence establishes the basic attitude of dependence on divine guidance, and enables prayerful attention to the matter at hand, to one another and to God.
It is apparent from this description, not only that Quaker business method as a practice is given theological explanations, but also that, if the explanations are correct, the method requires some specific set of beliefs and commitments on the part of those who participate. It looks as if - on Quaker terms - Quaker business method cannot be used by those who do not believe that God exists, that it is meaningful to talk about God having a will, or that individuals or groups can perceive the will of God. Indeed, given certain other characteristic Quaker beliefs and attitudes - most notably an emphasis on individual conviction, along with a suspicion of formal theology and of clerical or other religious hierarchy - it would be extremely surprising to find a theologically-framed method that was supposed to work, as it were, ex opere operato without the participants' beliefs and attitudes being relevant. To understand the particular weight that these theological explanations of Quaker business method carry, we need to remember that Quaker business method is not only 'distinctive', but also - so it is widely claimed - unusual and (at least occasionally) surprising in the results it produces. Thus, for example, Robson's account of the decision by British Quakers in 2009 to approve the solemnisation of marriages for same-sex couples (before this was legal in the UK) highlights the rapid movement from controversy and uncertainty to clear approval, by a large national gathering, of a step more decisive and radical than any of the 'leaders' had expected (Robson 2013: 169-188). On the standard Quaker description of the business method, which ties it closely to Quaker meetings for worship, its surprising effectiveness is connected to its reliance on divine guidance. While there are, to the best of our knowledge, no examples of Quaker business method (or to be precise, its success or surprising results) being used directly in apologetics - as an argument for the existence of God, or for the availability of divine guidance - there are texts in which it appears as a paradigmatic case of Quaker experience of God (Eccles 2009).
This being so, the question for those interested in the study of management and organisation (rather than narrowly in Quaker studies) is: what sense can be made of the successful use of processes taken from, and very closely linked to, Quaker business method in secular, or at least not-entirely-Quaker, contexts? That such successful uses do occur is increasingly widely attested (see for example Burton 2017; Lewis 2009; Michaelis 2010). Various parts of Quaker business method, including many that are listed in Quaker books of discipline and elsewhere in brief definitive accounts of the method, have been adopted outside Quaker structures. A considerable number of organisations that are 'Quaker-connected' but largely run by non-Quakers - and by people with no religious beliefs or affiliations - claim plausibly either to use Quaker business method or to shape their decision-making practices according to Quaker principles. For example, Michaelis (2010) documents the similarities of the decision-making processes to the Quaker business method in non-Quaker organisations as diverse as Churches Together, the Green Party, and the Scottish Legal Aid Board. Furthermore, examples of for-profit organisations founded on religious principles continue to shape their decision-making processes according to processes consistent with the Quaker approach. The Scott Bader Commonwealth - a UK chemical manufacturing company established by a Quaker in 1921 - retains references to 'decision making… by unity rather than by a formal vote' in its constitution (Scott Bader 2010: 27). Since a Quaker business meeting itself - as opposed to the various things that might be written about it - involves more or less no explicitly theological language, and since, as indicated earlier, there are many 'components' of Quaker business method that, while unusual, can easily be learned and used independently, there are few obvious barriers to this transferability (see, for example, on 'transferring' the use of silence into non-Quaker organisations, Brigham and Kavanagh 2015; Lewis 2009; and for the development of an entire approach to corporate governance based on voteless participatory decision-making, Saxena and Jagota 2016).
Let us assume, for the purpose of argument, that at least some of these accounts of the successful transfer of Quaker business method to non-Quaker contexts are accurate. That is, let us assume that at least some people who do not hold the relevant beliefs have used the key practices and processes that comprise Quaker business method - including reflective silence, contemporaneous minute-taking, voteless decision-making, and eschewal of partisan advocacy - and have found that it works. How should we interpret the use of Quaker business method by non-Quakers - or more precisely, by people who do not share the beliefs taken by Quakers to be essential both to the adequate explanation and to the practice of Quaker business method?
One obvious response is to treat the 'transferability' of Quaker business method as evidence - perhaps, conclusive evidence - that Quaker explanations of why and how their business method works are misguided. This could be argued in at least two ways. On the one hand, staying within broadly the same theological paradigm as the explanations themselves, the 'transferability' of Quaker business method into non-Quaker contexts might be taken to demonstrate that intention does not matter so much after all - that the will of God can be discerned through this method regardless of the attitudes of the participants (what I referred to above as ex opere operato). On the other hand, this 'transferability' might be taken to demonstrate that theological accounts of what is going on - at the level of explanation or justification - are either wrong or redundant. It is (we might say) not divine guidance - and the practice of collective seeking for divine guidance - that makes Quaker business method work; any explanations that rely on divine guidance are mistaken at best and mystifying at worst, and adequate explanations should rather be sought in the fields of (for example) psychology or organisational studies, employing a robust methodological agnosticism and focusing strictly on the commonalities between Quaker and non-Quaker organisations that employ the method.
An interesting feature of these two responses to the transferability of Quaker business method - two ways of using it as evidence against Quaker explanations of the method - is that they are, to a certain extent, mirror-images of one another. The first preserves theological claims at the level of explanation (this really is a process that reveals the will of God) and discounts them at the level of practice (nobody needs to know or believe that for it to work). The second discounts theological claims at the level of explanation (the success of this process has nothing at all to do with God) but leaves open, at least theoretically, the possibility of their relevance in practice (it might perhaps work even better if some people believe they are seeking the will of God, even though they are mistaken). Neither looks as if it can be easily reconciled with the Quaker explanations of Quaker business method, discussed above; the second, indeed, looks prima facie as if it is incompatible with any religious commitment - although, as we shall see, the picture is somewhat more complicated. It looks as if Quakers committed to the standard Quaker view, presented with evidence that Quaker business method 'works' beyond Quaker organisations and regardless of participant belief, will need either to deny the evidence or - perhaps more plausibly - to deny that what is being practised is Quaker business method at all, claiming that both its similarities to the latter and its success are, absent the theological foundations, mere coincidence.
We suggest, however, that this is not the only way to go. In the next section, we present and defend an alternative account of Quaker business method that can accommodate the 'transferability' of the processes into secular organisations, give a robust account of the method's strengths, and provide the best fit with Quaker theology and tradition. It is beyond the scope of this article to argue directly for the truth of this alternative account; our aim is merely to demonstrate that it might plausibly be accepted by people who are committed to certain core theological claims, while also making it possible fully to acknowledge, and indeed to enter into dialogue about, the successful use of a religious decision-making practice in secular contexts and by non-believers. In order to do so, we need to look more closely into the question of what it actually means to seek the will of God.
Seeking the will of God: Truth in Quaker Business Method
In what follows, we explore and critique two - superficially appealing, but theologically and practically deficient - models of what it means to seek the will of God, broadly fitting correspondence- and coherence-based accounts of religious truth. We then expound a third model that takes seriously the Quaker emphasis on the practical and experimental character of human appropriations of divine truth.
The first assumption that readers are likely to make, when they see a reference to 'seeking the will of God' in relation to a collective decision, is that the 'will of God' corresponds to some pre-determined 'right' decision. On this view - which fits well with, although it does not require, some strong account of divine foreknowledge - Quaker business method is taken to be a fairly reliable process for discovering a 'right' decision that exists out there before the process begins. The search for the right decision is analogous to the attempt to make truthful claims about God on a 'correspondence' theory of truth - there is a reality in God (in this case, the 'will of God' for a particular situation) to which a decision or a statement corresponds more or less adequately, and against which its rightness or truth is properly tested.
There are various problems with this account, both from a theological point of view and from the point of view of the empirical study of decision-making processes. Theologically, it tends to reinscribe an account of divine will and activity as inscrutable and arbitrary. Why, one might ask, would God hide God's will such that it can only or mainly be found through these idiosyncratic methods? From the point of view of the empirical study of decision-making, this view also tends to short-circuit any investigation of the specifics of Quaker business method - particularly their similarities to processes and practices that have been developed independently of theological frameworks. Quaker decision-making, after all, is not altogether unlike consensus processes used in a wide range of organisations. Its attractiveness beyond Quaker organisations is linked to features that have recognisable analogues elsewhere - for example, the lack of hierarchy, the avoidance of group polarisation, the careful attention to the views and experiences of minorities, and indeed the commitment to taking as much time as is required to find the right decision.
It might, of course, be possible to qualify or add to the account in order to soften some of these objections. For example, the belief that the will of God is 'waiting to be discovered' is perfectly compatible with the belief that the will of God in any given situation is reflective of the character of God, as revealed and experienced within particular communities. The will of God, when we find it, will fit with what we already know, or what we have done before - even if it extends it in new and surprising ways. Moreover, when we start to give content to the character of God - as loving, as desiring reconciliation, healing and justice - it becomes even clearer that the will of God and the 'best decision', as at least in part perceivable from secular premises, are unlikely to be fundamentally opposed.
Nonetheless, the idea of seeking the will of God as 'trying to find the pre-existent right answer' retains its structural weaknesses as an account of Quaker business method. It frustrates attempts to make sense of why and how the process works - while also being liable to produce over-attachment to unimportant details of the process. If the process cannot be interpreted rationally, there is no basis on which to make judgements about which parts of it matter most, or how it might legitimately be altered.
As a secondary point, it is worth noting that this account of seeking the will of God also requires us to present the conclusion of a decision-making process primarily as a finding of fact, rather than as an action -'we have discovered that x is the will of God', rather than 'we now do x'. Although it is not absolutely necessary for an account of decision-making to maintain the distinction between (in Aristotelian terms) the theoretical and the practical syllogism, it has been noted that the minutes of decision in Quaker business meetings are in practice often framed as actions, that is, as present-tense collective illocutionary acts (we accept, we ask, we commit ourselves; see on this Muers 2015:194).
Looking at contemporary Quaker discussions of Quaker business method, however, it is not obvious that the 'guess the right answer' correspondence-theory account of the will of God is the only or the dominant option. There is a significant 'minority report' within which unity or agreement within the group, reached through a right process, just is the will of God. God does not actually have a 'will', Quakers might say, about what colour the carpet should be in the Quaker meeting house; there is no pre-existing right answer that everyone is trying to work out or guess. The will of God is, rather, that the process - which, as noted above, has manifest advantages for the individuals and the group - is used well, resulting in a decision that 'works' for the group. To continue the account developed above, the search for the right decision here is somewhat analogous to the attempt to make truthful claims about God on a 'coherence' theory of truth. Decisions are tested, not against their correspondence to some reality in God but against their fit within a whole - in this case, the beliefs, needs and practices of the group, which as a coherent whole reflects the will of God. This approach seems to give a much better account of the transferability of Quaker business method into secular contexts. What it risks losing, however, is an important aspect of what participants in Quaker decision-making are actually doing. In at least some cases, they are not simply trying to find unity among the group; they are trying to discern the will of God on a specific issue. Moreover, as we have seen, this intention - to try to discern the will of God - does materially affect how they behave; if they all started believing that there was no will of God apart from 'everybody coming to agreement and owning the decision', the process, at least on Quaker terms, would not work in the same way. Lest it be thought that the transferability of Quaker business method into non-Quaker contexts undercuts this objection, we could even restate it in non-religious terms. For at least some decision-making processes to work well, those involved have to be committed, not only to finding agreement within the group, but also to some wider contexts or criteria by which truth or success can be judged and to which the decision-makers are accountable. A good decision is - at least some of the time - not simply a decision that everybody likes, or that works well for this group of people. In other words, while, as we have seen, Quaker literature uses the idea of 'seeking the will of God' to differentiate Quaker business method from a consensus process, it is also possible and indeed important to differentiate 'good decisions' from consensus. The gap between 'good decisions' and consensus can be seen in the (admittedly limited) literature on 'good decisions'. Often, a 'good' decision is defined from a utilitarian perspective, inclusive of both benefit-cost analysis and risk analysis (see Broadman et al. 2000; Dietz et al. 2001). Focusing on environmental decision-making, Dietz (2003) broadens the criteria for a 'good' decision as one that encompasses a range of factors, such as (i) human and environmental well-being, (ii) competence about facts and values, (iii) fairness in process and outcome, (iv) reliance upon human strengths, (v) a chance for learning, and (vi) process efficiency (see also Renn et al. 1995).
Our preferred account of Quaker decision-making and its transferability holds together practical considerations - what process enables agreement to be reached, or a decision to be made? - with a robust commitment to the core idea of seeking truth, which differentiates the process from a search for consensus or compromise. Its further strength is that it takes as its starting point the distinctive historic Quaker understanding of truth - which is neither straightforwardly 'correspondence'-based nor straightforwardly 'coherence'-based. To simplify a complex story, Truth (capitalised) is used in Quaker writings to refer both to the God who is known and responded to, and to the way of life of the person who knows and responds to God (Britain Yearly Meeting 1994: 19.34 preamble); and the way of life that is Truth is characterised both by truthful speech and by right action (see the examples in Britain Yearly Meeting 1994: 19.34-19.38).
The key claim for this paper is that in Quaker tradition, knowing truth, on the one hand, and living 'truthfully' or rightly, on the other, are inseparable and reciprocally related (Rediehs 2015; Muers 2015: 80ff). On the one hand, knowledge of the truth - we might say, correct belief - is not a simple prerequisite for right action; on the other hand, correct belief or knowledge is not an ultimate goal. Rather, knowledge of the truth and right action are mutually reinforcing and mutually generating. This approach has been described elsewhere, using Quaker terminology, as 'experimental' knowing - claims to knowledge are based on experience that arises from right action, and they serve in turn as the basis for further 'experiments' (Muers 2015: 15). The truth about a person or a situation is how they are known when they are responded to rightly - and right response has both cognitive and enacted dimensions. Truths about God are also drawn into this reciprocal relation of knowledge and action; to follow divine guidance, or to do the will of God, is to engage with the world in certain ways that in turn generate new understanding of the will of God.
In Laura Rediehs' helpful account (2015), Gandhian 'nonviolence' serves as a valuable analogue for Quaker truth and demonstrates the close relationship between knowledge and action. To be nonviolent is both to perceive or know the other in a particular way (as valued, as inviolable) and to behave towards him or her in a particular way, and, crucially, the two moves are mutually required and mutually reinforcing, with a stance of active nonviolence leading to and enabling truthful perception. Likewise and very similarly, in Quaker tradition, 'answering that of God in everyone' entails both perceiving others in a particular way (as persons in and through whose lives God is revealed) and acting towards them accordingly. Nonviolence, in both Quaker and Gandhian tradition, means recognising and respecting things and persons as they are - but this recognition and respect (what we might call nonviolent knowledge) is only acquired through nonviolent relationships.
Similarly, the close relationship between knowledge and action extends, in the Quaker tradition, to management and organization. Quakers were instrumental in the development and creation of many industries in the eighteenth and nineteenth centuries, such as banking, chocolate, and iron (Burton and Hope 2018). The UK Quakers & Business group recently published a guide, 'Good business: ethics at work', to emphasise a Quaker approach to business grounded in the discipline of respecting others and seeking T/truth, both discerned and enacted - whether employees, customers, suppliers, or the communities within which the business operates (Quakers and Business 2014).
Within this framework, general or broad theological claims (such as the claim 'in business meetings we seek and find the will of God') are not primarily a description of a state of affairs. Rather, they are directions for both how to live truthfully and how to discern truth. Although they may imply claims about states of affairs (for example - God exists, God has a will that can be found), these claims cannot be evaluated independently of the practices that they generate. Formally, this approach to seeking and doing truth can look as if it results in circularity (assuming the truth-claim that supports the practice that is then used to justify the truth-claim). Substantively, however, since the claims are vague and the practice open-ended and exploratory - and indeed susceptible to failure - the circle is virtuous. In a process somewhat analogous to the testing of a scientific hypothesis (an analogy often referred to in discussions of early Quaker thought) a belief gives rise to an experimental practice that further refines or challenges the belief. A further important point about Quaker business method that is grasped more clearly in this account than in either of the foregoing is that the conclusion of a decision-making process is not a finding of fact, but an action - an action that is, nonetheless, inseparable from a process of 'seeking truth', and that is made intelligible by a specific understanding of the facts of the matter. The other aspect of Rediehs' account of Quaker truth that is particularly relevant to the transferability of Quaker business method is that truth, in Quaker terms, is both divine and 'worldly'. There is not a special category of theological or religious truth. Knowing the truth about particular things in the world is only possible by acting towards them in accordance with the will of God - which entails respecting and recognising them as they are. The commitment to truth that begins with the accurate representation of ordinary facts is thus in direct continuity with the theologically-loaded commitment to capital-T Truth, the commitment that (for example) sustains well-known Quaker prophetic practices of 'speaking truth to power'.
Furthermore, it is a key principle of Quaker theological anthropology - and one that itself gives rise to specific 'experimental' practices of nonviolent relation to the other - that everyone has access to divine truth, regardless of their institutional or theological affiliations (Muers 2015). Everyone is in principle able, therefore, to orient herself towards the world, in terms of both action and perception, so that she sees it in the right way. This being so, Quakers should expect to see overlaps, continuities and close analogies between their decision-making processes and those of others - just as they should (and, from their writings, do) expect to see examples of right action and true judgement emerging in the lives of individuals.
The claim that everyone has access to divine truth does, it should be noted, raise significant questions at the intersections of Quaker practice and management practice - for example, in relation to the place and use of expertise. In the sphere of theology and worship - so, in relation to the knowledge of God and the practice of seeking God's will - Quakers have traditionally eschewed or de-centred 'expertise' (for example, by not appointing clergy). In decision-making processes on various complex issues, room has to be found for various kinds of expertise, and the tension between acknowledging distinctive expertise and prioritising the shared search for truth in which all can and should participate is an area of ongoing research in Quaker business method (Burton et al. Forthcoming). In the conversation with management studies more generally, however, the Quaker approach clearly calls into question the emphasis, in more positivist approaches to management, on the distinctive or unique value of the 'scientific' manager's technical expertise. Quaker business method, and its associated beliefs and practices, force us to pay attention to the 'expertise' of everyone - to the full range of experiences, value-laden perceptions, activities and contexts through which an organisation can learn and enact truth.
This last observation should also make it clear that the truth - or even the Truth - being discussed here is not perceived or known as if 'from nowhere'. Indeed, the claim to have a 'view from nowhere' would itself be significantly, and perhaps disastrously, untruthful - liable to result once again in a failure to respond rightly to the other as a knower and doer of truth. Again to cut short what would need to be a far longer discussion, the Quaker account of truth we propose is clearly realist, but in a way that recognises the situated, perspectival and partial character of all knowledge of the truth.
Summary and Conclusions
We have argued that Quaker decision-making can be interpreted in a way that allows 'secular' appropriations of the process to be recognised and evaluated, by Quakers, on a continuum with Quaker uses of the process. They can be evaluated as experiments in living truthfully, in order to discern truth and act in accordance with the truth. Handling Quaker business method in this way neither requires 'secular' practitioners to accept theological claims, nor necessitates a reductionist reading of theology. The focus on truth discerned and truth enacted opens up a space in which God-talk can interact with management-talk without impugning the integrity of either discourse. Indeed, looking to the focus of this special issue, our paper shows that to bring together 'God and Management' in a philosophical context is to open up the question of how the pursuit of truth - arguably central to the philosophical endeavour as well as to many conceptions of the religious life - relates to the everyday pursuit of 'good decisions' in management contexts.
Thus, while our account is specific to Quakers, and relies on a distinctive Quaker understanding of the relationship between truth and practice, it does suggest at the formal level that evaluations of the relationship between religious practices and their secular appropriations in management might benefit from detailed scrutiny of the core underlying assumptions - for example, about the nature of truth or about the criteria for 'good decisions'. Such scrutiny can also enable us better to perceive the key challenges posed by Quaker (and other religiously-based) decision-making processes to secular management contexts: how are decision-making processes enabling or inhibiting attention to the truth of a situation, and what are the criteria for success?
Any conversation about truth in management contexts would need at some point to answer questions about the location and operation of power within organisations. We suggest that this does not need to derail the conversation. Indeed, a fruitful area for further research would be the relationship between (expressed and enacted) understandings of divine power within religiously-based decision-making processes, on the one hand, and theories of power in organisational decision-making, on the other. Once again the link with nonviolence, and particularly with Gandhian nonviolence (as suggested by Rediehs), would be worth exploring. In a management world in which metaphors of warfare tend to dominate, it is easy to forget, or to fail to draw on, the considerable body of thought that understands nonviolence as a mode of power - albeit one that is inseparable from ethical commitments and from the pursuit of truth. A fuller discussion of this dimension of the relationship of 'God and Management', however, must await future research.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 7,778.4 | 2018-06-25T00:00:00.000 | [
"Philosophy",
"Business"
] |
USER REQUIREMENT FOR WEB-BASED PROCEEDINGS REPOSITORY
Conferences are a popular platform for academics to exchange knowledge and engage with one another. They typically produce collections of papers, known as conference proceedings, which are often published online as open-access resources to facilitate more comprehensive access and dissemination of knowledge. Consequently, there has been a significant increase in the number of articles published in online conference proceedings. However, the absence of an organised management system has made it increasingly challenging for users to locate and retrieve specific proceedings. To address this issue, this study proposes implementing a digital repository system to simplify the sharing and distribution of conference proceedings articles. The primary goal of this project is to provide a sustainable solution for storing and accessing conference proceedings, characterised by its reliability, efficiency, effectiveness, and user-friendliness. Such an enhancement is anticipated to boost research productivity and quality. In this article, we present the system requirements for the repository. Subsequently, based on these requirements, we have developed a prototype called the Web-based Digital Repository for Proceedings Articles (WDR). We conducted a field study to assess the prototype's usability, and the evaluation results indicate that WDR is both practical and user-friendly.
INTRODUCTION
Conferences are crucial gatherings for professionals, researchers, and experts to exchange knowledge and foster innovation across various fields. Academic conferences, in particular, play a pivotal role in scholarly communication, allowing researchers to present their work, receive feedback, and publish their findings (Lopez de Leon & McQuillin, 2020). Similarly, conferences in industries like technology, healthcare, and finance drive discussions that shape the future of these sectors.
The formal output of these events, known as conference proceedings, is instrumental in preserving knowledge and extending the impact of a conference. These proceedings act as repositories for research findings and insights, ensuring accessibility and the enduring relevance of the contributions made during the event. For academics, conference proceedings often serve as precursors to formal journal publication and are crucial references in their research. Within academic circles, citing conference proceedings is not just a scholarly convention; it is a way to acknowledge the sources of information, foster scholarly dialogue, and preserve intellectual lineage. As time passes, these documents evolve into historical records, offering valuable insights into the progression of ideas and trends within specific fields (Lui, 2004).
Despite the growing number of conference proceedings, their management lacks systematic organisation, leading to inconsistencies in publication approaches (Wang et al., 2023). These proceedings are often presented statically, without a dedicated database or search functionality, which poses significant challenges for users seeking valuable articles. To address these issues, this study proposes creating a web-based digital repository to streamline the upload, storage, and retrieval of conference proceedings articles. This repository aims to provide a comprehensive solution for managing these papers while offering technical insights and serving as a reference point for developers.
DIGITAL REPOSITORY
A digital repository serves as a dedicated system or platform designed to comprehensively collect, store, manage, and preserve a wide range of digital assets and content types, spanning documents, images, videos, research papers, datasets, and various forms of digital media (Jamaluddin & Ishak, 2011; Denison, 2007). The core mission of a digital repository is to ensure long-term access to these digital materials, facilitating their discoverability and accessibility for users (Butterfield et al., 2022). Digital repositories find utility across diverse contexts, including archiving academic and research materials, preserving cultural heritage, storing data from scientific experiments, and more. Their intrinsic value lies in their ability to effectively organise and safeguard digital information, ensuring its availability for current and future generations.
A web-based digital repository represents a specialised category within this digital repository landscape (Sakri & Ishak, 2023). Distinguished by its accessibility and management through web-based technologies and the internet, these repositories empower users to access their content conveniently through web browsers, rendering them widely accessible and user-friendly. Web-based digital repositories often incorporate intuitive interfaces, allowing users to seamlessly search, browse, and retrieve digital assets online. Institutions such as academic organisations, libraries, museums, and research institutions commonly leverage web-based digital repositories to extend the reach of their digital collections to a global audience. A web-based digital repository that houses digital articles or publications offers many advantages (Dijk & Moti, 2023; Lappalainen & Narayanan, 2023). Firstly, it significantly enhances the accessibility of these articles by making them readily available online. This accessibility enables researchers, scholars, and students to access conference proceedings articles from any location with an internet connection, thereby promoting the broader dissemination of knowledge. Moreover, web-based repositories frequently feature advanced search and retrieval functionalities, streamlining the process of locating specific articles. This efficiency saves valuable time and effort for researchers seeking relevant content. Additionally, these repositories centralise the storage of conference proceedings articles, simplifying the management and organisation of substantial volumes of content. Centralised storage mitigates duplication and ensures consistency in archiving practices, enhancing overall efficiency (Ghedi et al., 2016).
Web-based repositories also facilitate proper citation and referencing of conference proceedings articles, enhancing research credibility and academic rigour. Furthermore, researchers can collaborate more effectively as they access and share these articles with colleagues and peers globally, fostering a sense of community within the academic and research community. They also provide valuable analytics and usage data, enabling publishers and institutions to monitor the popularity and impact of specific articles. As substantiated by numerous studies (Lappalainen & Narayanan, 2023; Gibbons, 2009; Tansley et al., 2003; Mgonzo & Yonah, 2014), standard features of a web-based digital repository designed for housing conference proceedings articles typically encompass online access, efficient search tools, document viewing, detailed metadata, user-friendly navigation, advanced search filters, version control, preservation strategies, customisation options, usage analytics, DOI integration for citable links, robust security measures, user authentication for access control, content submission interfaces, notification systems, and support for open-access publishing models. Web-based digital repositories are pivotal tools for efficiently managing and making conference proceedings articles accessible. They address the challenges of organising and accessing these critical components of scholarly communication and knowledge dissemination. These repositories are crucial in advancing research, enhancing collaboration, and preserving the intellectual heritage of various fields and industries.
METHODOLOGY
The research design adopted for this study employs an exploratory approach. The primary aim of this approach is to thoroughly investigate the feasibility and requisite conditions for establishing a web-based digital repository exclusively dedicated to housing conference proceedings articles. This methodological choice facilitates a comprehensive examination of the subject matter, offering invaluable insights into the multifaceted challenges and opportunities inherent in developing and maintaining such a repository (Swaraj, 2019).
The initial phase of this study entails the systematic collection of data through an exhaustive review of extant literature. This literature review encompasses various sources, including digital, institutional, and web-based repositories. Its principal objective is to elicit discerning insights into prevailing best practices, defining characteristics, and core functionalities pertinent to repositories of this nature. Subsequently, the amassed data is subjected to rigorous analysis, resulting in the synthesis of pivotal findings. These findings, in turn, serve as the foundational bedrock upon which the essential attributes and advantages of digital repositories are predicated. The repository design and development phase is underpinned by a methodical approach that harnesses the visualisation capabilities of the Unified Modelling Language (UML) to meticulously plan the system architecture, user interfaces, and interaction flows. This UML-based design ensures a user-centric, intuitive, and efficient web-based digital repository. Concurrently, PHP is the primary programming language, capitalising on its versatility in web application development. At the same time, MySQL serves as the relational database management system (RDBMS), ensuring efficient storage and retrieval of conference proceedings articles, metadata, and user information. This strategic combination of UML, PHP, and MySQL expedites the translation of design concepts into functional components. It establishes a robust, user-friendly foundation aligning with the repository's objectives and user requirements.
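To make the described PHP/MySQL design more concrete, the sketch below models a minimal version of the same idea in Python with SQLite. The table and column names are hypothetical and only illustrate how proceedings and article metadata could be stored and queried; this is not the actual WDR schema or implementation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the MySQL database used by WDR
conn.executescript("""
CREATE TABLE proceeding (
    id INTEGER PRIMARY KEY,
    name TEXT, conference TEXT, year INTEGER, isbn_issn TEXT, status INTEGER
);
CREATE TABLE article (
    id INTEGER PRIMARY KEY,
    proceeding_id INTEGER REFERENCES proceeding(id),
    title TEXT, authors TEXT, keywords TEXT, downloads INTEGER DEFAULT 0
);
""")

def search_articles(term):
    """Search articles by title, author, or keyword (cf. requirement WDR_06_02 in Table 1)."""
    like = f"%{term}%"
    return conn.execute(
        "SELECT title, authors FROM article "
        "WHERE title LIKE ? OR authors LIKE ? OR keywords LIKE ?",
        (like, like, like),
    ).fetchall()
```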
DESIGN AND DEVELOPMENT OF WDR
This section delves into designing and developing a web-based digital repository for conference proceedings articles. A two-step process is employed in the requirement-gathering phase to comprehend the requirements for this development. The first step involves comprehensively analysing documents and information from internet resources. Relevant content is meticulously sourced using targeted keywords such as "digital repository," "proceedings articles repository," "proceedings articles," and "web-based digital repository for proceedings articles." The gathered information is subject to thorough analysis and documentation, forming the foundation for constructing the requirements for developing the web-based digital repository for proceedings articles. Table 1 enumerates the specifications of the system's requirements. These requirements are categorised into three primary segments: 'Configure,' 'Register,' and 'Authenticate.' Each requirement is assigned a priority level indicated by M (Mandatory), O (Optional), or D (Desirable).
Table 1. The WDR system requirements

WDR_01 Registration
  WDR_01_01 (M): The system must allow the organiser to register.
WDR_02 Login / Logout
  WDR_02_01 (D): The system must allow the admin and the organiser to log in and out.
  WDR_02_02 Login
    WDR_02_02_01 (M): The system must allow the user to log in based on the role, which is admin or organiser.
    WDR_02_02_02 (M): The system must display a page where the admin and the organiser can enter their login information, such as username and password.
    WDR_02_02_03 (M): The system must verify the username and password of the user.
    WDR_02_02_04: If the user forgets their password, the system must allow them to retrieve it by entering their email and username.
WDR_04_03 Delete Proceeding
  WDR_04_03_01 (M): The system must allow the organiser to delete the proceeding information.
  WDR_04_03_02 (O): The system must delete the articles permanently from the system.
WDR_04_04 Update Proceeding Status
  WDR_04_04_01: The system should allow the admin and organiser to update the status of proceedings by turning them on or off for regular users.
WDR_04_05 View Proceeding
  WDR_04_05_01 (M): The system must be able to list the proceedings.
  WDR_04_05_02: The system must allow the admin, organiser, and normal user to view the proceeding, including the proceeding information.
WDR_04_06 Edit Proceeding
  WDR_04_06_01: The system must allow the organiser to edit the conference's name, date, venue, URL, proceeding name, editor, year, month, volume, issue, ISBN/ISSN, publisher, and published information.
WDR_04_07 Add Article
  WDR_04_07_01 (M): The system must allow the organiser to add a new article.
WDR_04_08 View Article
  WDR_04_08_01 (M): The system should allow the admin, the organiser, and the normal user to view the article and its information.
  WDR_04_08_02 (M): The system must be able to list the articles based on the proceedings.
  WDR_04_08_03: The system should be able to track how many times the average user has viewed the article.
WDR_04_09 Edit Article
  WDR_04_09_01: The system must allow the organiser to edit the title, author, abstract, keywords, and page number of the articles.
WDR_04_10 Delete Article
  WDR_04_10_01 (M): The system must display an option for the organiser to delete the articles.
  WDR_04_10_02 (O): The system must delete the articles permanently from the system.
WDR_04_11 Update Article Status
  WDR_04_11_01 (M): The system should allow the admin and the organiser to update the article's status by turning them on or off for regular users.
WDR_04_12 Download Article
  WDR_04_12_01 (M): The system should allow the organiser, admin, and regular user to download the article.
  WDR_04_12_02: The system should be able to track how many times normal users download the articles.
WDR_05 View Report
  WDR_05_01: The admin can view reports for several organisers, proceedings, and articles. Meanwhile, the organiser can view the report for proceedings and articles.
  WDR_05_02 View Organizer Report
    WDR_05_02_01: The system must allow the admin to view the number of registered organisers by country.
  WDR_05_03 View Proceeding Report
    WDR_05_03_01: The admin and organiser should be able to view the report showing the number of proceedings by year and the number of views for each proceeding.
  WDR_05_04 View Article Report
    WDR_05_04_01: The system should allow the admin and organiser to view the report showing each article's views and downloads.
WDR_06 Search Proceeding and Article
  WDR_06_01 (M): The system must allow the admin, organiser, and normal user to browse articles or proceedings by predefined categories.
  WDR_06_02 (M): The admin, organiser, and average user can search the articles or proceedings based on the author's name, keyword, and title.
  WDR_06_03 (M): The system must display the desired articles or proceedings.

The requirements outlined in Table 1 were translated into the functional components of the computer system. Subsequently, the next step involves visualising and modelling the website's requirements, utilising the appropriate modelling techniques and tools. The Unified Modeling Language (UML) was employed in this project to visualise and model these requirements. The models used in this context encompass two behavioural diagrams, namely use case and sequence diagrams, alongside a class diagram that encapsulates the structural elements of the application. These diagrams were meticulously crafted using StarUML.
Figure 1 illustrates the use case diagram, delineating the interactions among the use cases and the actors within the website, facilitating the management of conference proceedings articles. This use case diagram involves three distinct actors: the organiser, the admin, and the regular user. Within this framework, six principal use cases are discernible: registration, login/logout, management of organiser accounts, management of proceedings articles, viewing reports, and searching proceedings and articles. The 'Manage Organizer Account' use case permits users to execute subfunctions including "View Information" and "Edit Information," while the 'Manage Proceeding Articles' use case provides users with subfunctions such as "Update Proceeding Status," "Update Article Status," "Download Article," "View Article," and "View Proceeding." Simultaneously, the 'View Report' use case empowers users to undertake subfunctions such as "View Proceeding Report" and "View Article Report."
Figure 1. The Use Case Diagram of WDR

The development of the web-based digital repository for conference proceedings articles culminated in the creation of a functional prototype, as depicted in Figure 1. This illustrative use case diagram captures the essence of the system's functionality and showcases the seamless interactions between the actors and the various use cases inherent to the prototype. Figure 2 shows an example of the prototype interfaces. Within the context of this prototype, three distinct actors play pivotal roles:
1) Organizer: The organiser assumes a central role in coordinating and managing conference proceedings articles, overseeing critical functions that ensure the smooth flow of proceedings-related activities.
2) Admin: The admin, as a critical actor, is entrusted with administrative privileges and responsibilities, enabling efficient governance and oversight of the digital repository.
3) Normal User: The average user represents the broader user base, including researchers, scholars, and stakeholders who use the repository to access, interact with, and contribute to conference proceedings articles.

The prototype encompasses diverse use cases, each tailored to address specific functionalities and actions within the digital repository. These use cases are integral to the repository's seamless operation:
1) Registration: This use case allows users to register their accounts, enabling them to access and contribute to the repository.
2) Login/Logout: Users can securely log in and out of their accounts, ensuring data privacy and controlled access.
3) Manage Organizer Account: This pivotal use case empowers organisers to efficiently oversee and administer their accounts. Subfunctions within this use case include "View Information" and "Edit Information."
4) Manage Proceeding Articles: A cornerstone of the prototype, this use case facilitates the management of conference proceedings articles. Subfunctions include "Update Proceeding Status," "Update Article Status," "Download Article," "View Article," and "View Proceeding."
5) View Report: Users can gain valuable insights by accessing reports related to proceedings and articles. Subfunctions encompass "View Proceeding Report" and "View Article Report."
6) Search Proceeding and Article: The repository offers an efficient search functionality, allowing users to locate and access specific proceedings and articles based on their preferences and criteria.
EVALUATION AND FINDINGS
Thirty individuals were invited to participate in this study to evaluate the system. The participants, comprising students and employees at UUM, were selected using a simple random sampling method. Once recruited, each participant was tasked with completing five predetermined assignments before responding to a questionnaire provided through Google Forms. The questionnaire utilised in this study was adapted from the Website Analysis and Measurement Inventory (WAMMI) questionnaire, which assesses five critical factors: attractiveness, controllability, helpfulness, efficiency, and learnability. Section A of the questionnaire inquired about six demographic details of the respondents, while Sections B through F each consisted of four questions, employing a five-point Likert scale where "one" indicated strong disagreement and "five" signified strong agreement. This comprehensive assessment encompassed a total of 26 questions.
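For illustration, one plausible way to convert the five-point Likert responses into the satisfaction percentages reported below is to rescale the mean rating to a 0-100 range; the paper does not state its exact formula, so the sketch below is an assumption.

```python
def satisfaction_percentage(ratings):
    """Mean of 1-5 Likert ratings rescaled to 0-100 (illustrative formula only)."""
    return sum(ratings) / (5 * len(ratings)) * 100

print(round(satisfaction_percentage([4, 5, 4, 4, 5, 3]), 1))  # example output: 83.3
```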
An analysis of the demographic data provided insights into the composition of the respondents. Among them, 25 were undergraduates, two were pursuing postgraduate studies, and three represented UUM staff members. In terms of gender distribution, there were 21 female respondents and nine male respondents. Age-wise, 19 respondents fell within the 21 to 25 age range, seven were between 26 and 30, two were below 21 years old, and two were over 30. Furthermore, when assessing their internet skills, 18 respondents considered themselves to possess fairly excellent skills, eight rated their skills as expert, and only four regarded their internet skills as average. Additionally, 16 respondents indicated awareness of the article repository, whereas only five reported prior experience using it. Sections B through F aimed to gauge the respondents' viewpoints concerning the attractiveness, controllability, efficiency, helpfulness, and learnability of WDR. The findings, including the frequency and average of the responses, have been documented in Tables 2 to 7. The evaluation results indicate that WDR has achieved high satisfaction levels across all usability factors, with satisfaction percentages exceeding 80% for each factor. On average, respondents expressed the highest satisfaction levels with the system's efficiency, controllability, and helpfulness, scoring 88.85%, 85%, and 84.5%, respectively. Additionally, respondents displayed overall satisfaction rates of 82.8% for both attractiveness and learnability.
Furthermore, respondents conveyed positive feedback regarding the efficiency, helpfulness, and learnability aspects, with satisfaction levels of 95.8%, 93.3%, and 90%, respectively.Controllability received a positive response from 82.5% of respondents.Notably, only the aspect of attractiveness fell slightly below the 80% threshold, garnering a satisfaction rate of 79.2%.
These findings underscore that most respondents are content with the Web-based Digital Repository System (WDR). However, some elements still elicited disagreement or dissatisfaction among participants. Therefore, enhancing the system's user interface to improve user-friendliness and appeal could be considered to increase user engagement with the repository. Additionally, there is room for improving the system's functionality to achieve higher satisfaction and consensus regarding usability.
CONCLUSION AND FUTURE WORKS
The Web-based Proceedings Repository is an asset within the academic realm. It provides a vital tool for the dissemination and accessibility of conference proceedings in an era when these attributes are of the utmost importance. This repository offers an effective solution for managing and retrieving conference proceedings, catering to the needs of researchers, academics, and conference organisers.
One of the repository's prominent strengths lies in its user-friendly interface. With its well-structured and intuitive design, users can effortlessly navigate the extensive collection of conference proceedings. This feature dramatically facilitates locating specific proceedings, rendering it an indispensable resource within the academic community. Moreover, the repository's commitment to open access is laudable. By granting online access to conference proceedings, it fosters transparency and inclusivity in academia. This open-access approach benefits researchers and encourages broader knowledge sharing, thereby contributing to the advancement of various academic disciplines.
The repository boasts a robust search functionality that enables swift retrieval of relevant proceedings. Additionally, the capability to generate reports enhances convenience, aiding administrators and organisers in effectively managing their respective proceedings. One of its paramount strengths lies in its role as a custodian of academic knowledge. With a focus on long-term sustainability and reliability, the repository ensures the continued accessibility of conference proceedings for posterity. This aspect is pivotal in preserving the historical record of scholarly discourse. Advanced collaboration features can be incorporated in future enhancements, enabling researchers to connect and exchange insights within the platform. Furthermore, integration with citation management tools would further streamline the research process for users.
Figure 2. Example of the Prototype Interfaces

The organiser can add, view, edit, and delete proceedings and update the proceeding status, as well as add, edit, view, and delete articles, update the article status, and download articles. Meanwhile, the admin can view proceedings and update their status, view articles, update the article status, and download articles. The normal user can view proceedings, view articles, and download articles.
Table 2. The Respondents' Responses on the Attractiveness
Table 3. The Respondents' Responses on the Controllability
Table 5. The Respondents' Responses on the Helpfulness
Table 6. The Respondents' Responses on the Learnability
Table 7. The Respondents' Positive Responses on Each Usability Factor | 4,566.8 | 2024-04-30T00:00:00.000 | [
"Computer Science"
] |
On certain generalized q-Appell polynomial expansions
We study q-analogues of three Appell polynomials, the H-polynomials, the Apostol–Bernoulli and Apostol–Euler polynomials, whereby two new q-difference operators and the NOVA q-addition play key roles. The definitions of the new polynomials are by the generating function; like in our book, two forms, NWA and JHC are always given together with tables, symmetry relations and recurrence formulas. It is shown that the complementary argument theorems can be extended to the new polynomials as well as to some related polynomials. In order to find a certain formula, we introduce a q-logarithm. We conclude with a brief discussion of multiple q-Appell polynomials.
Introduction.
The aim of this paper is to describe how the q-umbral calculus extends in a natural way to produce q-analogues of conversion theorems and polynomial expansions for the following Appell polynomials: H-polynomials, Apostol-Bernoulli and Apostol-Euler from the recent articles on this theme, as well as multiple variable extensions. To this aim, we use certain q-difference operators known from the book [3], and some new operators containing a factor λ from the previous work of Luo and Srivastava [8], [9], [10] on Apostol-Bernoulli polynomials. The q-Appell polynomials have been used before in [4], where their basic definition was given together with several matrix applications. The q-umbral method [3], influenced by Jordan [5] and Nørlund [12], forms the basis for the terminology and umbral method, which enables convenient q-analogues of the formulas for Appell polynomials; our formulas resemble the Appell polynomial formulas in a remarkable way. A certain q-Taylor formula plays a key role in many proofs.
This paper is organized as follows: In this section we give the general definitions. In each section, we then give the specific definitions and special values which we use there. In Section 2, we introduce two dual polynomials together with recursion formulas, symmetry relations and the complementary argument theorem.
Let λ ∈ R and let E_q(x) denote the q-exponential function. In Sections 3 and 4, in the spirit of Apostol, Luo and Srivastava, we introduce and discuss two dual forms of the generalized q-Apostol-Bernoulli polynomials, together with the many applications that were mentioned earlier. In Sections 5 and 6, we continue the discussion with two dual forms of the generalized q-Apostol-Euler polynomials. Two of their generating functions, (1) and (2), expand in terms of the form F^(n)_{NWA,λ,ν,q}(x)/{ν}_q!. This is followed by formulas which contain both kinds of these polynomials. Many of the formulas are proved by simple manipulations of the generating functions. In Section 7 we show that the many expansion formulas according to Nørlund can also be formulated for our polynomials. In Section 8 we extend the previous considerations to a more general form, named multiplicative q-Appell polynomials. More on this will come in a future paper. In Section 9, in order to find q-analogues of the corresponding formulas for the generating functions, we formally introduce a logarithm for the q-exponential function; the calculations are valid for so-called q-real numbers. In Section 10, we briefly discuss multiple q-Appell polynomials. We now start with the definitions; compare with the book [3]. Some of the notation is well known and will be skipped.
Definition 1. Let the Gauss q-binomial coefficient be defined in the standard way. Let a and b be any elements with commutative multiplication; the NWA q-addition and the JHC q-addition of a and b are then defined accordingly, and the q-derivative is defined by (7).

Definition 2. Let the NWA q-shift operator be given by (8).

Definition 3. The related JHC q-shift operator is given by (9): E(⊞_q)(x^n) ≡ (x ⊞_q 1)^n.
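For orientation, the standard q-umbral forms of the objects in Definitions 1-3 are sketched below. These are assumed reference forms from the q-calculus literature, not the paper's own displayed equations, and the paper's conventions may differ slightly.

```latex
% Assumed standard q-umbral forms (reference only; conventions may differ from the paper)
\binom{n}{k}_q \equiv \frac{\{n\}_q!}{\{k\}_q!\,\{n-k\}_q!},
\qquad \{n\}_q \equiv \frac{1-q^{n}}{1-q}, \\[4pt]
(a \oplus_q b)^n \equiv \sum_{k=0}^{n} \binom{n}{k}_q a^{k} b^{\,n-k}
\quad \text{(NWA $q$-addition)}, \\[4pt]
(a \boxplus_q b)^n \equiv \sum_{k=0}^{n} \binom{n}{k}_q q^{\binom{k}{2}} a^{\,n-k} b^{k}
\quad \text{(JHC $q$-addition)}, \\[4pt]
(D_q f)(x) \equiv \frac{f(x) - f(qx)}{(1-q)\,x},
\qquad
E(\oplus_q)(x^{n}) \equiv (x \oplus_q 1)^{n}.
```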
Definition 4. For every power series f_n(t), the q-Appell polynomials or Φ_q polynomials of degree ν and order n have the generating function (14). For x = 0 we get the Φ^(n)_{ν,q} number of degree ν and order n.

Definition 5. For f_n(t) of the form h(t)^n, we call the q-Appell polynomial Φ_q in (14) multiplicative.

Theorem 1.1. We have the two q-Taylor formulas.

2. The H polynomials.

First, we repeat some of the definitions of certain related q-Appell polynomials for later use. These polynomials are more general forms of the polynomials we wish to study. Definitions 6-9 give the generating functions for the β, γ, η and θ polynomials, respectively. We now come to the generating function of the polynomials we want to study in this section (for q = 1): the H polynomials are defined in [14, p. 532 (37)].

Definition 10. The generating function for H^(n)_{NWA,ν,q}(x) is a special case of (19).

Definition 11. The generating function for H^(n)_{JHC,ν,q}(x) is a special case of (20).

The polynomials in (23) and (24) are q-analogues of the generalized H polynomials. We now turn to these q-analogues.

Theorem 2.1. We have H_{JHC,0,q} = 0, H_{JHC,1,q} = 1, and (H_{JHC,q} ⊞_q 1)^k + H_{JHC,k,q} = 0 for k > 1.
The following table lists some of the first H NWA,ν,q numbers.
We need not calculate the H_{JHC,ν,q} numbers, since we have the following symmetry relations: For ν even, H_{NWA,ν,q} = H_{JHC,ν,q}.
The following table lists some of the first Ward q-Bernoulli numbers.
The following three formulas express x n in terms of q-Appell polynomials.
The following complementary argument theorems extend the ones given in [3, p. 153].
3.
The NWA q-Apostol-Bernoulli polynomials. Throughout, we assume that λ ≠ 0. The b polynomials are more general forms of the NWA q-Apostol-Bernoulli polynomials, which we will study in this section.
Definition 13. The polynomials b^(n)_{λ,ν,q}(x) are defined by (43). The generating function for B_{NWA,ν,q}(x) is a special case of (44).

Definition 14. The generalized NWA q-Apostol-Bernoulli polynomials B^(n)_{NWA,λ,ν,q}(x) are defined by the generating function (44).

Assume that λ ≠ 1. The poles in the denominator of (44) are the roots of E_q(t) = λ^{-1}, which implies that in some cases the limit λ → 1 is not straightforward and needs some further consideration.
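By analogy with the classical Apostol-Bernoulli generating function and the q-Bernoulli case of [3], the defining generating function (44) presumably has the following shape. This is an assumption made for orientation, not the paper's verbatim formula; the normalization may differ.

```latex
% Assumed shape of the NWA q-Apostol-Bernoulli generating function (44)
\frac{t^{\,n}\,\mathrm{E}_q(xt)}{\bigl(\lambda\,\mathrm{E}_q(t) - 1\bigr)^{n}}
  = \sum_{\nu=0}^{\infty} \frac{t^{\nu}}{\{\nu\}_q!}\,
    \mathrm{B}^{(n)}_{\mathrm{NWA},\lambda,\nu,q}(x).
```

The poles of such a left-hand side are indeed the roots of E_q(t) = λ^{-1}, consistent with the remark above.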
This leads to the following recurrence for the NWA q-Apostol-Bernoulli numbers. The following table lists some of the first B_{NWA,λ,ν,q} numbers.
Proof. Formula (63) follows on equating the coefficients of t^ν.
5.
The NWA q-Apostol-Euler polynomials. We start with some repetition from [3]: the generating function for the first q-Euler polynomials of degree ν and order n, F_{NWA,ν,q}(x), is given by (66). The following table lists some of the first q-Euler numbers F_{NWA,n,q}.
The e polynomials are more general forms of the NWA q-Apostol-Euler polynomials, which we will study in this section.
Definition 18. The e polynomials are defined by (67).

Definition 19. The generalized NWA q-Apostol-Euler polynomials F^(n)_{NWA,λ,ν,q}(x) are defined by (68).

Assume that λ ≠ −1. The poles in the denominator of (68) are the roots of E_q(t) = −λ^{-1}.

Theorem 5.1 gives an expression for F^(n)_{NWA,λ,ν,q}(x); this leads to the following recurrence. The following table lists some of the first F_{NWA,λ,n,q} numbers.
We observe that the limits for λ → 1 are the first q-Euler numbers.
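Analogously, the generating function (68) of the generalized NWA q-Apostol-Euler polynomials presumably takes the following shape (again an assumed reference form, by analogy with the classical Apostol-Euler case; the paper's normalization may differ).

```latex
% Assumed shape of the NWA q-Apostol-Euler generating function (68)
\frac{2^{\,n}\,\mathrm{E}_q(xt)}{\bigl(\lambda\,\mathrm{E}_q(t) + 1\bigr)^{n}}
  = \sum_{\nu=0}^{\infty} \frac{t^{\nu}}{\{\nu\}_q!}\,
    \mathrm{F}^{(n)}_{\mathrm{NWA},\lambda,\nu,q}(x).
```

For λ → 1 such a form reduces to the first q-Euler polynomials, matching the observation about the table entries.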
Proof. The following computation with generating functions shows the way: equating the coefficients of t^ν and using (70) gives (85).
Proof. Formula (89) follows on equating the coefficients of t^ν.
We conclude that the NWA q-Apostol-Bernoulli and NWA q-Apostol-Euler polynomials satisfy linear q-difference equations with constant coefficients.
Theorem 7.5 (A generalization of [3, 4.261]). Under the assumption that f(x) is analytic with a q-Taylor expansion, we can express powers of △_{NWA,A,q} and ∇_{NWA,A,q} operating on f(x) as powers of D_q as follows. These series converge when the absolute value of x is small enough.
8. Multiplicative q-Appell polynomials. In this section we very briefly discuss multiplicative q-Appell polynomials with f_n(t) equal to h(t)^n. It turns out, by simple umbral manipulation, that many of the formulas in [3, Section 4.3] are also valid for multiplicative q-Appell polynomials, and these equations will be presented in another article. Throughout, we denote the multiplicative q-Appell polynomials by Φ^(n)_{M,ν,q}(x).

Definition 23. Under the assumption that the function h(t)^n can be expressed analytically in R[[t]], and for f_n(t) of the form h(t)^n, we call the q-Appell polynomial Φ_q in (14) multiplicative.
Then we have an expansion of Φ_{M,k,q}(x_1 ⊕_q ... ⊕_q x_s), where we assume that n_j operates on x_j.
Proof. It suffices to show that the logarithmic derivative of E_q(x) is > 0.

Definition 24. The q-logarithm log_q(x) is the inverse function of E_q(x), −∞ < x < (1 − q)^{-1}, 0 < q < 1.
Theorem 9.2. The q-logarithm log_q(x) has the following properties (x and y have small real values, n ∈ N): (1) its domain is the range of E_q(x); (2) it is strictly increasing.
Proof. We equate the generating functions in the following way.

10. Appendix: multiple q-Appell polynomials.

Of course there are many ways to define multiple q-Appell polynomials; in this paper we concentrate on one of the simplest approaches in the spirit of Lee.
"Mathematics"
] |
Beyond-accuracy: a review on diversity, serendipity, and fairness in recommender systems based on graph neural networks
By providing personalized suggestions to users, recommender systems have become essential to numerous online platforms. Collaborative filtering, particularly graph-based approaches using Graph Neural Networks (GNNs), have demonstrated great results in terms of recommendation accuracy. However, accuracy may not always be the most important criterion for evaluating recommender systems' performance, since beyond-accuracy aspects such as recommendation diversity, serendipity, and fairness can strongly influence user engagement and satisfaction. This review paper focuses on addressing these dimensions in GNN-based recommender systems, going beyond the conventional accuracy-centric perspective. We begin by reviewing recent developments in approaches that improve not only the accuracy-diversity trade-off but also promote serendipity, and fairness in GNN-based recommender systems. We discuss different stages of model development including data preprocessing, graph construction, embedding initialization, propagation layers, embedding fusion, score computation, and training methodologies. Furthermore, we present a look into the practical difficulties encountered in assuring diversity, serendipity, and fairness, while retaining high accuracy. Finally, we discuss potential future research directions for developing more robust GNN-based recommender systems that go beyond the unidimensional perspective of focusing solely on accuracy. This review aims to provide researchers and practitioners with an in-depth understanding of the multifaceted issues that arise when designing GNN-based recommender systems, setting our work apart by offering a comprehensive exploration of beyond-accuracy dimensions.
INTRODUCTION
With their ability to provide personalized suggestions, recommender systems have become an integral part of numerous online platforms by helping users find relevant products and content Aggarwal et al. (2016). There are various methods employed to implement recommender systems, among which collaborative filtering (CF) has proven to be particularly effective due to its ability to leverage user-item interaction data to generate personalized recommendations Koren et al. (2021). Recent advances in Graph Neural Networks (GNNs) have further strengthened such graph-based collaborative filtering approaches, as outlined in the following section.
BACKGROUND
Graph neural networks (GNNs) have recently emerged as an effective way to learn from graph-structured data by capturing complex patterns and relationships Hamilton (2020). Through the propagation and transformation of feature information among interconnected nodes in a graph, GNNs can effectively capture the local and global structure of the given graphs. Consequently, they are especially suitable for tasks involving interconnected, relational data, such as social network analysis, molecular chemistry, and recommender systems, among others.
In recommender systems, integrating Graph Neural Networks (GNNs) with traditional collaborative filtering techniques has been shown to be beneficial. Representing users and items as nodes in a graph, with interactions acting as edges, allows GNNs to provide more accurate personalized recommendations by discovering and utilizing intricate connections that would otherwise remain undetected Wang et al. (2019a). In particular, higher-order connectivity together with transitive relationships plays an essential role when trying to extract user preferences in certain scenarios.
GNN-based recommender systems represent an evolving field with continuous advancements and innovations. Recent research has focused on multiple aspects of GNNs in recommender systems, ranging from optimizing propagation layers to effectively managing large-scale graphs and integration of auxiliary information Zhou et al. (2022). Aside from these aspects, an expanding interest lies in exploring beyond-accuracy objectives for recommender systems. Such objectives include diversity, explainability/interpretability, fairness, serendipity/novelty, privacy/security, and robustness, which offer a more comprehensive evaluation of the system's performance Wu et al. (2022a); Gao et al. (2023). However, our work focuses primarily on three key aspects: diversity, serendipity, and fairness, since these aspects have a significant impact on user satisfaction, while also considering ethical concerns in the field of recommender systems. Ensuring diversity amongst recommendations minimizes over-specialization effects, benefiting users in product/content discovery and exploration Kunaver and Požrl (2017). Considering serendipity also helps to overcome the over-specialization problem by allowing the system to recommend novel, relevant, and unexpected items, thus improving user satisfaction Kaminskas and Bridge (2016). The aspect of fairness ensures that the system does not discriminate against certain users or item providers, thereby promoting equitable user experiences Deldjoo et al. (2023). Diversity, serendipity, and fairness in recommender systems are interconnected and often influence each other. For instance, increasing diversity can lead to more serendipitous recommendations, since users are exposed to a wider range of unexpected and less-known items Kotkov et al. (2020). Furthermore, focusing on diversity and serendipity can also promote fairness, since it ensures a more equitable distribution of recommendations across items and prevents the system from consistently suggesting only popular items Mansoury et al. (2020). However, it is important to note that these aspects need to be balanced with the system's accuracy and relevance to maintain user satisfaction. Considering beyond-accuracy dimensions contributes to supporting the development of GNN-based recommender systems that are not only robust and accurate but also user-centric and ethically considerate.
While GNNs have seen rapid advancements, their application in recommender systems has also been the subject of several surveys. Wu et al. (2022a) and Gao et al. (2023) provide a broad overview of GNN methods in recommender systems, touching upon aspects of diversity and fairness. Dai et al. (2022) delves into fairness in graph neural networks in general, briefly discussing fairness in GNN-based recommender systems. Meanwhile, Fu et al. (2023) explores serendipity in deep learning recommender systems, with limited focus on GNN-based recommenders. Building on these insights, our review distinctively emphasizes the importance of diversity, serendipity, and fairness in GNN-based recommender systems, offering a deeper dive into these dimensions.
To conduct our review, we searched for literature on Google Scholar using keywords such as "diversity", "serendipity", "novelty", "fairness", "beyond-accuracy", "graph neural networks" or "recommender system". We manually checked the resulting papers for their relevance and retrieved 21 publications overall from relevant journals and conferences in the field (see Table 1). While re-ranking and post-processing methods are often used when optimizing beyond-accuracy metrics in recommender systems Gao et al. (2023), this paper specifically concentrates on advancements within GNN-based models, thus leaving these methods outside the discussion. Finally, it is important to highlight that diversity, serendipity, and fairness are extensively researched in recommender systems beyond GNNs. Broader literature across various architectures has provided insights into these challenges and their overarching solutions. While our paper primarily focuses on GNNs, we direct readers to consult these works for a comprehensive perspective Kaminskas and Bridge (2016); Wang et al. (2023a).
MODEL DEVELOPMENT
The construction of a GNN-based recommender system is a complex, multi-stage process that requires careful planning and execution at each step. These stages include data preprocessing (DP), graph construction (GC), embedding initialization (EI), propagation layers (PL), embedding fusion (EF), score computation (SC), and training methodologies (TM). In this section, we provide an overview of this multi-stage process as it is crucial for understanding the specific stages at which current research has concentrated efforts to address the beyond-accuracy aspects of diversity, serendipity, and fairness in GNN-based recommender systems.
Figure 1. The simplified multi-stage process of developing a GNN-based recommender system; each of these stages strongly impacts resulting recommendations and can be considered when designing a model that takes into account beyond-accuracy objectives.
Data preprocessing, graph construction, embedding initialization
The initial stage of developing a GNN-based collaborative filtering model is data preprocessing, where user-item interaction data and auxiliary information such as user/item features or social connections are collected and processed Lacic et al. (2015a); Duricic et al. (2018); Fan et al. (2019a); Wang et al. (2019b); Duricic et al. (2020). Techniques like data imputation ensure that missing data is filled, providing a more complete dataset, while outlier detection helps in maintaining the data's integrity. Feature normalization ensures consistent data scales, enhancing model performance. Addressing the cold-start problem at this stage ensures that new users or items without sufficient interaction history can still receive meaningful recommendations Lacic et al. (2015b); Liu et al. (2020).
The graph construction stage is crucial, as the graph's structure directly influences the model's efficacy. Choosing the type of graph determines the nature of relationships between nodes. Adjusting edge weights can prioritize certain interactions, while adding virtual nodes/edges can introduce auxiliary information to improve recommendation quality Wang et al. (2020); Kim et al. (2022); Wang et al. (2023b).
In the embedding initialization stage, nodes are assigned low-dimensional vectors or embeddings. The choice of embedding size balances computational efficiency and representation power. Different initialization methods offer trade-offs between convergence speed and stability. Including diverse information in the embeddings can capture richer user-item relationships, enhancing recommendation quality Wang et al. (2021). This initialization can be represented as h^(0)_user and h^(0)_item, the initial embeddings of the user and item nodes, respectively.
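As a concrete illustration of this stage, the minimal NumPy sketch below draws random initial user and item embeddings. The function name, the random-normal initializer, and the 1/sqrt(dim) scale are illustrative assumptions; none of the reviewed papers is implied to use exactly this scheme.

```python
import numpy as np

def init_embeddings(num_users, num_items, dim, seed=0):
    """Random-normal initialization of h^(0)_user and h^(0)_item.
    The 1/sqrt(dim) scale is one common heuristic for keeping norms moderate."""
    rng = np.random.default_rng(seed)
    scale = 1.0 / np.sqrt(dim)
    h0_user = rng.normal(0.0, scale, size=(num_users, dim))
    h0_item = rng.normal(0.0, scale, size=(num_items, dim))
    return h0_user, h0_item
```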
Propagation layers, embedding fusion, score computation, training methodologies
Propagation layers in GNNs aggregate and transform features of neighboring nodes to generate node embeddings, represented as H^(l+1) = σ(D^(-1/2) A D^(-1/2) H^(l) W^(l)), where H^(l) is the matrix of node features at layer l, A is the adjacency matrix, D is the degree matrix, W^(l) is the weight matrix at layer l, and σ is the activation function Hamilton (2020). There are numerous approaches built on this concept. For instance, He et al. (2020) adopt a simplified approach, emphasizing straightforward neighborhood aggregation to enhance the quality of node embeddings, whereas Fan et al. (2019a) integrate user-item interactions with user-user and item-item relations, capturing complex interactions through a comprehensive graph structure.
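A minimal sketch of one such propagation step is given below (NumPy, dense matrices for readability; real systems use sparse operations, and LightGCN-style models drop the weight matrix and nonlinearity entirely). The function name and the choice of tanh are illustrative assumptions, not taken from any specific reviewed paper.

```python
import numpy as np

def propagate(H, A, W, activation=np.tanh):
    """One symmetric-normalized GCN-style layer:
    H_next = activation(D^{-1/2} A D^{-1/2} H W)."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    A_norm = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return activation(A_norm @ H @ W)
```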
Afterward, these embeddings are combined during the embedding fusion stage, forming a latent user-item representation used for score computation by applying a weighted summation, concatenation, or a more complex method of combining user and item embeddings Wang et al. (2019a); He et al. (2020).
The score computation stage involves a scoring function to output a score for each user-item pair based on the fused embeddings. The scoring function can be as simple as a dot product between user and item embeddings, or it can be a more complex function that takes into account additional factors Wang et al. (2019a); He et al. (2020).
Finally, in the training methodologies stage, a suitable loss function is selected, and an optimization algorithm, typically a variant of stochastic gradient descent, is used to update model parameters Rendle et al. (2012); Fan et al. (2019b).
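The sketch below combines the last two stages: a dot-product scoring function and the widely used BPR pairwise loss (Rendle et al., 2012) for a single (user, positive item, negative item) triple. It is illustrative only; the exact scoring functions, losses, and optimizers vary across the reviewed models.

```python
import numpy as np

def score(h_user, h_item):
    """Dot-product scoring of a fused user/item embedding pair."""
    return float(np.dot(h_user, h_item))

def bpr_loss(h_user, h_pos, h_neg):
    """BPR loss: -log sigmoid(score(u, i+) - score(u, i-))."""
    margin = score(h_user, h_pos) - score(h_user, h_neg)
    return float(-np.log(1.0 / (1.0 + np.exp(-margin)) + 1e-12))
```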
Understanding the unique strengths of each stage outlined in this section is essential, and a comparative evaluation can guide the selection of the most suitable approach for specific collaborative filtering scenarios, such as addressing the challenges associated with beyond-accuracy metrics. In Table 1, we provide a comprehensive overview of existing literature, aiding readers in navigating the diverse methodologies and findings discussed throughout this review.
Definition and importance of diversity
Diversity in recommender systems indicates how different the suggested items are for a given user. It is vital for recommendation quality, preventing over-specialization and boosting user discovery. Diverse recommendations offer users a wider range of items, enhancing satisfaction and user engagement Kunaver and Požrl (2017); Duricic et al. (2021). Diversity has two types: intra-list diversity (variety within one recommendation list) and inter-list diversity (variety across lists for different users) Kaminskas and Bridge (2016).
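One common way to operationalize intra-list diversity is the average pairwise dissimilarity of the recommended items' embeddings, as in the sketch below. Cosine dissimilarity is an illustrative choice only; category- or genre-based distances are equally common in the literature.

```python
import numpy as np

def intra_list_diversity(item_embeddings):
    """Average pairwise cosine dissimilarity within one recommendation list
    (assumes at least two items with non-zero embeddings)."""
    X = np.asarray(item_embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = X @ X.T
    n = X.shape[0]
    off_diag = ~np.eye(n, dtype=bool)
    return float((1.0 - sim[off_diag]).mean())
```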
Review of recent developments in improving accuracy-diversity trade-off
A number of innovative approaches have emerged recently to tackle recommendation diversity using graph neural networks (GNNs). These methods can be broadly categorized based on the specific mechanisms or strategies they employ:
• Neighbor-based mechanisms 1: An approach introduced by Isufi et al. combines nearest neighbors (NN) and furthest neighbors (FN) with a joint convolutional framework. The DGRec method diversifies embedding generation through submodular neighbor selection, layer attention, and loss reweighting Yang et al. (2023a). Additionally, the DGCN model leverages graph convolutional networks for capturing collaborative effects in the user-item bipartite graph, ensuring diverse recommendations through rebalanced neighbor discovery Zheng et al. (2021). The DGCF framework diversifies recommendations by disentangling user intents in collaborative filtering using intent-aware graphs and a graph disentangling layer Wang et al. (2020). The DDGraph approach involves dynamically constructing a user-item graph to capture both user-item interactions and non-interactions, and then applying a novel candidate item selection operator.
• Adversarial learning 4: To improve the accuracy-diversity trade-off in tag-aware systems, the DTGCF model utilizes personalized category-boosted negative sampling, adversarial learning for category-free embeddings, and specialized regularization techniques Zuo et al. (2023). Furthermore, the above-mentioned DGCN model also employs adversarial learning to make item representations more category-independent.
• Contrastive learning 5: The Contrastive Co-training (CCT) method by Ma et al. (2022) employs an iterative pipeline that augments recommendation and contrastive graph views with pseudo edges, leveraging diversified contrastive learning to address popularity and category biases.
• Heterogeneous Graph Neural Networks 6: The GraphDR approach by Xie et al. (2021) utilizes a heterogeneous graph neural network, capturing diverse interactions and prioritizing diversity in the matching module.
Each of these methods offers a unique approach to the accuracy-diversity challenge.While all aim to improve the trade-off, their strategies vary, highlighting the multifaceted nature of the challenge at hand.
Definition and importance of serendipity and novelty
Serendipity and the closely related novelty are crucial in recommender systems, both aiming to boost user discovery. Serendipity refers to surprising yet relevant recommendations, promoting exploration and curiosity. Novelty suggests new or unfamiliar items, expanding user exposure. Both prevent over-specialization and encourage user curiosity Kaminskas and Bridge (2016).
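Novelty is often measured by the self-information (surprisal) of the recommended items' popularity, as sketched below. Serendipity definitions additionally require relevance and unexpectedness relative to a baseline recommender and vary considerably across papers, so this function is only one plausible operationalization, not the metric used by any specific work cited here.

```python
import numpy as np

def novelty(recommended_items, popularity, num_users):
    """Mean self-information -log2(p_i), where p_i is the fraction of users who
    interacted with item i; rarer items score higher. Assumes every recommended
    item has at least one interaction (popularity[i] > 0)."""
    surprisal = [-np.log2(popularity[i] / num_users) for i in recommended_items]
    return float(np.mean(surprisal))
```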
Review of recent developments in promoting serendipity and novelty
Recent advancements in GNN-based recommender systems have shown promising results in promoting serendipity and novelty, although notably fewer efforts have been directed towards balancing the accuracy-serendipity and accuracy-novelty trade-offs in comparison to the accuracy-diversity trade-off. In our exploration, we identified several studies addressing these efforts and have categorized them based on the primary theme of their contribution:
• Neighbor-based mechanisms: The approach proposed by Boo et al. (2023) enhances session-based recommendations by incorporating serendipitous session embeddings, leveraging session data and user preferences to amplify global embedding effects and enabling users to control explore-exploit trade-offs.
• Long-tail recommendations 7: The TailNet architecture is designed to enhance long-tail recommendation performance. It classifies items into short-head and long-tail based on click frequency and integrates a unique preference mechanism to balance between recommending niche items for serendipity and maintaining overall accuracy Liu and Zheng (2020).
• Normalization techniques 8: Zhao et al. (2022) proposed r-AdjNorm, a simple and effective GNN improvement that can improve the accuracy-novelty trade-off by controlling the normalization strength in the neighborhood aggregation process.
• General GNN architecture enhancements 9: Similarly to the popular LightGCN approach by He et al. (2020), the ImprovedGCN model by Dhawan et al. (2022) adapts and simplifies the graph convolution process in GCNs for item recommendation, inadvertently boosting serendipity. On the other hand, the BGCF framework by Sun et al. (2020), designed for diverse and accurate recommendations, also boosts serendipity and novelty through its joint training approach. These GNN-based models, while focusing on accuracy, inadvertently elevate recommendation serendipity and/or novelty.
These studies collectively demonstrate the potential of GNNs in enhancing the serendipity and novelty of recommender systems, while also highlighting the need for further research to address existing challenges.
Definition and importance of fairness
Fairness in recommender systems ensures no bias towards certain users or items. It can be divided into user fairness, which avoids algorithmic bias among users or demographics, and item fairness, which ensures equal exposure for items, countering popularity bias Leonhardt et al. (2018); Kowald et al. (2020); Abdollahpouri et al. (2021); Lacic et al. (2022); Kowald et al. (2023); Lex et al. (2020). Fairness helps to mitigate bias, supports diversity, and boosts user satisfaction. In GNN-based systems, which can amplify bias, fairness is crucial for balanced recommendations and optimal performance Ekstrand et al. (2018); Chizari et al. (2022); Chen et al. (2023); Gao et al. (2023).
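Item-side fairness (exposure equality) is frequently summarized with the Gini coefficient of item exposure across all users' recommendation lists; the sketch below shows this common operationalization. It is an illustrative choice rather than the metric used by any particular paper cited here, and user-side fairness metrics look quite different.

```python
import numpy as np

def exposure_gini(recommendation_lists, num_items):
    """Gini coefficient of item exposure: 0 = perfectly even, close to 1 = concentrated."""
    counts = np.zeros(num_items)
    for rec_list in recommendation_lists:
        for item in rec_list:
            counts[item] += 1
    counts = np.sort(counts)          # ascending order for the Gini formula
    total = counts.sum()
    if total == 0:
        return 0.0
    n = num_items
    cum = np.cumsum(counts)
    return float((n + 1 - 2 * cum.sum() / total) / n)
```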
Review of recent developments in promoting fairness
In the evolving landscape of GNN-based recommender systems, the pursuit of user and item fairness has become a prominent topic. Recent advancements can be broadly categorized based on the thematic emphasis of their contributions:
• Neighbor-based mechanisms: The Navip method debiases the neighbor aggregation process in GNNs using "neighbor aggregation via inverse propensity", focusing on user fairness Kim et al. (2022). Additionally, the UGRec framework by Liu et al. (2022b) employs an information aggregation component and a multihop mechanism to aggregate information from users' higher-order neighbors, ensuring user fairness by considering discrimination between male and female users. The SKIPHOP approach focuses on user fairness by capturing both direct user-item interactions and latent knowledge graph interests, i.e., both first-order and second-order proximity. Using fairness for regularization, it ensures balanced recommendations for users with similar profiles Wu et al. (2022b).
• Adversarial learning: The UGRec model additionally incorporates adversarial learning to eliminate gender-specific features while preserving common features.
• Contrastive learning: The DCRec model by Yang et al. (2023b) leverages debiased contrastive learning to counteract popularity bias and to address the challenge of disentangling user conformity from genuine interest, focusing on user fairness. The TAGCL framework also capitalizes on the contrastive learning paradigm, ensuring item fairness by reducing biases in social tagging systems Xu et al. (2023).
DISCUSSION AND FUTURE DIRECTIONS
In this paper, we have conducted a comprehensive review of the literature on diversity, serendipity, and fairness in GNN-based recommender systems, with a focus on optimizing beyond-accuracy metrics. Throughout our analysis, we have explored various aspects of model development and discussed recent advancements in addressing these dimensions.
To further advance the field and guide future research, we have formulated three key questions: Q1: What are the practical challenges in optimizing GNN-based recommender systems for beyond-accuracy metrics? GNNs are able to capture complex relationships within graph structures. However, this sophistication can lead to overfitting, especially when prioritizing accuracy Fu et al. (2023). Data sparsity and the need for auxiliary data, such as demographic information, challenge the optimization of high-quality node representations, introducing biases Dhawan et al. (2022). An overemphasis on past preferences can limit novel discoveries Dhawan et al. (2022), and while addressing popularity bias is essential, it might inadvertently inject noise, reducing accuracy Liu and Zheng (2020). Balancing diverse objectives, such as fairness, accuracy, and diversity, is nuanced, especially when optimizing one can compromise another Liu et al. (2022b). These challenges emphasize the need for focused research on the effective modeling of GNN-based recommender systems for beyond-accuracy optimization.
Q2: Which model development stages of GNN-based recommender systems have seen the most innovation for tackling beyond-accuracy optimization, and which stages have been underutilized? By conducting a thorough analysis of the reviewed papers (see Table 1), we have observed that graph construction, the propagation layer, and training methodologies have seen significant innovation in GNN-based recommender systems. This includes advanced graph construction methods, innovative graph convolution operations, and unique training methodologies. However, stages like embedding initialization, embedding fusion, and score computation are relatively underutilized. These stages could offer potential avenues for future research and could provide novel ways to balance accuracy, fairness, diversity, novelty, and serendipity in recommendations.
Q3: What are potentially unexplored areas of beyond-accuracy optimization in GNN-based recommender systems?
A less explored aspect of GNN-based recommender systems is personalized diversity, which adjusts the diversity of recommendations to match individual user preferences. Users favoring more diversity get more diverse recommendations, whereas those preferring less diversity get less diverse ones Eskandanian et al. (2017). This concept of personalized diversity, currently under-researched in GNN-based systems, hints at an intriguing future research direction. It can also be related to personalized serendipity or novelty, tailoring unexpected or novel recommendations to user preferences. Thus, incorporating personalized diversity, serendipity, and novelty in GNN-based systems could enrich beyond-accuracy optimization.
Overall, this review aims to help researchers and practitioners gain a deeper understanding of the multifaceted issues and potential avenues for future research in optimizing GNN-based recommender systems beyond traditional accuracy-centric approaches.By addressing the practical challenges, identifying underutilized model development stages, and highlighting unexplored areas of optimization, we hope to contribute to the development of more robust, diverse, serendipitous, and fair recommender systems that cater to the evolving needs and expectations of users.
Wu et al. (2022a) combine nearest neighbors (NN) and furthest neighbors (FN) with a joint convolutional framework. The DGRec method diversifies embedding generation through submodular neighbor selection, layer attention, and loss reweighting Yang et al. (2023a). Additionally, the DGCN model leverages graph convolutional networks for capturing collaborative effects in the user-item bipartite graph, ensuring diverse recommendations through rebalanced neighbor discovery Zheng et al. (2021). The DGCF framework diversifies recommendations by disentangling user intents in collaborative filtering using intent-aware graphs and a graph disentangling layer Wang et al. (2020). The DDGraph approach involves dynamically constructing a user-item graph to capture both user-item interactions and non-interactions, and then applying a novel candidate
1 Neighbor-based mechanisms aggregate and propagate information from neighboring nodes (users or items) to enhance the representation of a target node, capturing intricate relational patterns for improved recommendations Wu et al. (2022a).
2 Disentangling mechanisms aim to separate and capture distinct factors or patterns within graph data, ensuring more interpretable and robust recommendations by reducing the entanglement of various latent factors Ma et al. (2019).
3 Dynamic graph construction involves continuously updating and evolving the graph structure to incorporate new interactions and/or entities Skarding et al. (2021).
Table 1. This table summarizes key literature on GNN-based recommender systems emphasizing beyond-accuracy metrics: Diversity, Serendipity, and Fairness. Each entry specifies the paper's publication venue/journal, targeted metric, a broad strategy categorization, and the model development stages the method utilizes or adapts to enhance the respective metric. These stages include data preprocessing (DP), graph construction (GC), embedding initialization (EI), propagation layers (PL), embedding fusion (EF), score computation (SC), and training methodologies (TM).
The approach by Zhang et al. (2022) addresses the long-tail issue by focusing on popularity bias in session-based recommendation systems. It aims to ensure item fairness by normalizing item and session representations, thereby improving recommendations, especially for less popular items. Additionally, the above-mentioned approach by Li et al. (2019) also focuses on long-tail recommendations.
• Self-training mechanisms: The Self-Fair approach by Liu et al. (2022a) employs a self-training mechanism using unlabeled data with the goal of improving user fairness in recommendations for users of different genders. By iteratively refining predictions as pseudo-labels and incorporating fairness constraints, the model balances accuracy and fairness without relying heavily on labeled data.
In the broader context of graph neural networks, researchers have also tackled fairness in non-recommender-systems tasks, such as classification Dai and Wang (2021); Ma et al. (2021); Dong et al. (2022); Zhang et al. (2022). Their insights provide valuable lessons for the future development of fair recommender systems.
| 4,872 | 2023-10-03T00:00:00.000 | ["Computer Science"] |
MicroRNA manipulation in colorectal cancer cells: from laboratory to clinical application
The development of colorectal cancer (CRC) follows a sequential progression from adenoma to carcinoma. Therefore, opportunities exist to interfere with the natural course of disease development and progression. Dysregulation of microRNAs (miRNAs) in cancer cells indirectly results in higher levels of messenger RNA (mRNA) specific to tumour promoter genes or tumour suppressor genes. This narrative review aims to provide a comprehensive review of the literature on the manipulation of oncogenic or tumour suppressor miRNAs in colorectal cancer cells for the purpose of developing anticancer therapies. A literature search identified studies describing manipulation of miRNAs in colorectal cancer cells in vivo and in vitro. Studies were also included to provide an update on the role of miRNAs in CRC development, progression and diagnosis. Strategies based on restoration of silenced miRNAs or inhibition of overexpressed miRNAs have opened a new area of research in cancer therapy. In this review article, different techniques for miRNA manipulation are reviewed and their utility for colorectal cancer therapy is discussed in detail. Restoration of the normal equilibrium of cancer-related miRNAs can result in inhibition of tumour growth, apoptosis, blocking of invasion, angiogenesis and metastasis. Furthermore, drug-resistant cancer cells can be turned into drug-sensitive cells on alteration of specific miRNAs in cancer cells. MiRNA modulation in cancer cells holds great potential to replace current anticancer therapies. However, further work is needed on tissue-specific delivery systems and strategies to avoid side effects.
Background
Colorectal cancer (CRC) is the third most common neoplasm worldwide. According to the International Agency for Research on Cancer, approximately 1.24 million new cases of CRC were detected worldwide in 2008 [1]. The incidence of CRC is on the rise in developing countries and in southern and eastern Europe [2][3][4]. Contrary to the current trend in Europe, the incidence of CRC in the USA has fallen in the last two decades [5]. The lifetime risk of developing CRC [6] is 1 in 16 for men and 1 in 20 for women (National Statistics, UK). The development of CRC follows the sequential progression from adenoma to carcinoma [7]. The initial genetic alteration results in adenoma formation, in which cells exhibit autonomous growth. During the further course of carcinogenesis, intestinal epithelial cells acquire the characteristics of invasion and the potential for metastasis. Therefore, opportunities exist to interfere with the natural course of disease development and improve cancer-specific survival. Such therapeutic interventions can potentially include chemoprevention for high-risk individuals, the early detection of colorectal neoplasia, chemotherapies to downstage surgically resected or resectable cancers, and therapies for palliation of symptoms in advanced-stage cancer. The discovery of microRNAs (miRNAs) and their utility in RNA interference has opened a new era of cancer research and the potential for new cancer therapies. This narrative review aims to provide a comprehensive review of the literature on the manipulation of miRNAs in colorectal cancer cells and tissue for the purpose of developing anticancer therapies. Principles of miRNA manipulation and common methods of modulation in vitro and in vivo are discussed in detail for the general understanding of readers. This review also aims to provide an update on the role of miRNAs in CRC development and the diagnostic utility of circulating miRNAs.
Methods
A search was performed using Medline, PubMed and The Cochrane Library databases from 2000 to 2011 to identify articles reporting the role of miRNAs in colorectal cancer development, diagnosis and therapy. The following MeSH search headings were used: 'microRNA' and 'colorectal cancer'. The search was further extended by using the following text words and their combinations: 'microRNA', 'blood', 'circulation', 'diagnosis', 'screening', 'therapy', 'manipulation', 'modulation', 'stem cells' and 'miRNAs'. The 'related articles' function in PubMed was used to broaden the search. All the abstracts, studies and citations found were reviewed. The most recent date of the search was 16 July 2011. Information about colorectal cancer related miRNAs was extracted on the following areas: cancer development and progression; diagnostic utility; manipulation in vitro and in vivo; development of therapies. Detailed information was extracted from studies that met the inclusion criteria: studies conducted on human and non-human cells or tissues; blood-based miRNAs in colorectal cancers; and studies published in the English literature. Because of the lack of randomized controlled trials and the heterogeneous nature of the available data, no attempt was made to perform quantitative meta-analyses. In the absence of standard criteria for the quality assessment of laboratory-based, observational studies on miRNAs and the heterogeneity of outcome measures included in this narrative review, no quality assessment of included studies was carried out.
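A search like the one described above can also be reproduced programmatically. The sketch below uses Biopython's Entrez wrapper for the PubMed E-utilities; the query string, retmax value and e-mail address are illustrative placeholders rather than the exact strategy used by the authors.

```python
from Bio import Entrez  # Biopython

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

# Combine the MeSH heading search with the additional free-text terms listed above.
query = (
    '("microRNA"[MeSH Terms] AND "colorectal cancer") '
    'OR (miRNA AND (blood OR circulation OR diagnosis OR screening '
    'OR therapy OR manipulation OR modulation OR "stem cells") '
    'AND "colorectal cancer")'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",       # restrict by publication date
    mindate="2000",
    maxdate="2011/07/16",  # most recent search date reported in the review
    retmax=200,
)
record = Entrez.read(handle)
handle.close()
print(record["Count"], "records; first PMIDs:", record["IdList"][:5])
```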
As the study is only a narrative review, no ethical permission or approval was required.
Results
The literature search identified 48 original scientific studies and review articles in which some or all of the outcomes of interest were reported. Eighteen additional articles and web-based information sites were selected to provide a general background to miRNAs and colorectal neoplasia. Thirty-six additional studies were included to supplement the information on colorectal cancer development, detection of circulating miRNAs, and principles of miRNA therapy, design and delivery. This article provides a comprehensive review of the different approaches for restoring tumour suppressor miRNA expression and methods of knocking down tumour promoter miRNAs in cancer cells. Not all the methodologies of miRNA manipulation have been applied to CRC cells, and some have been tested only on other solid organ cancers. As the description and elaboration of these methods might provide insight into future CRC therapies designed around miRNA manipulation, the authors have included principal studies focusing on the mechanisms of such manipulation.
Discussion
MiRNA biogenesis and function
MiRNAs are single-stranded, evolutionarily conserved, small (17-25 ribonucleotides) noncoding RNA molecules [8]. MiRNAs function as negative regulators of target genes by directing specific messenger RNA cleavage or translational inhibition through the RNA-induced silencing complex, termed RISC [9,10]. So far, around fourteen hundred mature human miRNAs have been described in the Sanger miRBase version 17 (an international registry and database for miRNA nomenclature, targets, functions and their implications in different diseases). Figure 1 illustrates the biogenesis and mechanism of action of miRNAs [11][12][13][14][15][16][17][18].
MiRNAs and colorectal cancer carcinogenesis
MiRNAs play an important role in colorectal tumour biology, including oncogenesis, progression, invasion, metastasis and angiogenesis [19][20][21][22]. Initiation and progression of colorectal neoplasia result from the sequential accumulation of genetic alterations in oncogenic and tumour suppressor genes in the colonic epithelium [23]. MiRNAs interfere with these genetic mutations and are involved in different stages of colorectal neoplasia. Slaby and colleagues summarized the role of different miRNAs in the development of colorectal cancer and emphasized the importance of Adenomatous Polyposis Coli (APC) and Tumour Protein 53 (TP53) gene mutations and the WNT signalling pathway [24]. The initiation of colonic neoplasia is strongly linked to inactivation of the APC gene and activation of the WNT signalling pathway. APC inactivation has been found in more than 60% of colonic tumours, and such inactivation is associated with up-regulation of miR-135a/b in colonic epithelial cells [23,25,26]. Accumulation of further somatic mutations leads to further dysregulation of miRNAs and activation of additional downstream pathways. For example, let-7, miR-18a* and miR-143 are strongly linked to KRAS knockdown and activation of the EGFR-MAPK pathway [27][28][29], whereas miR-21 and miR-126 are associated with augmentation or inactivation of the phosphatidylinositol-3-kinase (PI-3-K) pathway, respectively [30,31]. Activation of these downstream pathways results in autonomous tumour cell growth, increased cell survival, and initiation of angiogenesis. Loss of P53 is a critical step in the transformation of adenoma to adenocarcinoma, as nearly 50-70% of colonic adenocarcinomas are found to be P53 mutant [23]. miR-34a has been identified as a direct downstream target of P53, and replacement of miR-34a has achieved p53-induced effects of apoptosis and cell cycle arrest [32]. The commonly up-regulated miR-17-92 cluster (miR-17, miR-18a, miR-19a, miR-20a, miR-19b and miR-92a) also drives the progression of adenoma to adenocarcinoma by up-regulation of c-myc [33]. Figure 2 summarises the interaction of different miRNAs in signalling pathways for colorectal cancer development and progression; in the KEGG pathway, it shows the interaction of different miRNAs in the formation of adenoma and its progression to adenocarcinoma. In brief, miRNAs are mostly transcribed from intragenic or intergenic regions by RNA polymerase II into primary transcripts (pri-miRNAs) of variable length (1 kb-3 kb). In the nucleus, the pri-miRNA transcript is further processed by the nuclear ribonuclease enzyme Drosha, resulting in a hairpin intermediate of about 70-100 nucleotides called pre-miRNA. The pre-miRNA is then transported out of the nucleus by the transport protein exportin-5. In the cytoplasm, the pre-miRNA is once again processed, by another ribonuclease enzyme, Dicer, into a mature double-stranded miRNA. The two strands of the double-stranded miRNA (miRNA/miRNA* complex) are separated by Dicer processing. After strand separation, the mature miRNA strand (miRNA, also called the guide strand) is incorporated into the RNA-induced silencing complex (RISC), whereas the passenger strand, denoted with a star (miRNA*), is commonly degraded. This miRNA/RISC complex is responsible for miRNA function. If, on miRNA cloning or miRNA array, the passenger strand is found at low frequency (less than 15% of the guide strand), it is named miR*.
However, if both passenger and guide strands are equal in distribution, then the two strands are named the 3p and 5p versions of the miRNA, depending on their location at either the 5' or 3' end of the miRNA molecule. In this case both strands can potentially be incorporated into the RISC complex and have a biological role [9][10][11][12][13][14]. The specificity of miRNA targeting is defined by Watson-Crick complementarity between positions 2 to 8 from the 5' end of the miRNA sequence and the 3′ untranslated region (UTR) of the target mRNA. When the miRNA and its target mRNA sequence show perfect complementarity, the RISC induces mRNA degradation. Should an imperfect miRNA-mRNA target pairing occur, translation into a protein is blocked [9,10]. Regardless of which of these two events occurs, the net result is a decrease in the amount of protein encoded by the mRNA targets. Each miRNA has the potential to target a large number of genes (on average about 500 for each miRNA family). Conversely, an estimated 60% of mRNAs have one or more evolutionarily conserved sequences that are predicted to interact with miRNAs [9,10,15]. MiRNAs have been shown to bind to the open reading frame or to the 5′ UTR of target genes and, in some cases, they have been shown to activate rather than inhibit gene expression [16]. Researchers have also reported that miRNAs can bind to ribonucleoproteins in a seed-sequence- and RISC-independent manner and then interfere with their RNA-binding functions (decoy activity) [17]. MiRNAs can also regulate gene expression at the transcriptional level by binding directly to the DNA [18].
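A small sketch of the seed-match rule described above (Watson-Crick complementarity between miRNA positions 2-8 and the 3′ UTR) is shown below. The sequences are made-up placeholders for illustration only, not real miRNA or UTR sequences, and the function name is the editor's own.

```python
# Complement table for RNA bases; the 3' UTR is also written here as RNA (A-U, G-C pairing).
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_match_sites(mirna, utr):
    """Return 0-based positions in the UTR where the reverse complement of
    the miRNA seed (positions 2-8, i.e. a 7-mer) occurs."""
    seed = mirna[1:8]                                        # positions 2-8 of the guide strand
    target = "".join(COMPLEMENT[b] for b in reversed(seed))  # what the UTR must contain
    return [i for i in range(len(utr) - len(target) + 1)
            if utr[i:i + len(target)] == target]

# Placeholder sequences for illustration only.
mirna = "UAGCUUAUCAGACUGAUGU"
utr = "AAAGCUGAUAAGCUAAAAGCUGAUAAGCUAUUU"
print(seed_match_sites(mirna, utr))   # positions of candidate seed-match sites
```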
Another cancer pathway, the second (serrated) neoplasia pathway, has recently gained acceptance and is, for the most part, independent of APC and TP53 mutations. It involves the distinct molecular features of somatic BRAF mutation in concordance with the high CpG island methylation phenotype (CIMP-H) and microsatellite instability (MSI+) associated with mutL homologue 1 (MLH1) methylation [34,35]. Involvement of miRNAs in the latter pathway is slowly emerging and will require further functional studies to identify the miRNAs associated with this pathway.
Functional and mechanistic studies of miRNAs have shown that the replacement or knockdown of distinct miRNAs in vitro results in distinct cytogenetic abnormalities leading to either tumour cell proliferation or apoptosis [20]. It is therefore believed that dysregulation of miRNA genes that target mRNAs of tumour suppressor genes or oncogenes contributes to tumour formation by inducing cell proliferation, invasion, angiogenesis and decreased cell death [19]. This has further led to the belief that miRNAs overexpressed in tumour cells function by inhibiting different tumour suppressor genes, whereas miRNAs that are often found silenced in tumour cells downregulate the expression of oncogenes in normal tissue (Figures 3 & 4). Amplification, translocation, pleomorphism or mutation in miRNA-transcribing genes results in overproduction of miRNAs. In contrast, mutation, deletion, promoter methylation or any abnormality in miRNA biogenesis results in silencing of miRNAs in tumour cells [19].
Figure 2. Carcinogenesis of colorectal cancer cells and the role of different miRNAs in cancer pathways. In the carcinogenesis of CRC, higher levels of miR-135a and miR-135b are associated with low levels of Adenomatous Polyposis Coli (APC), which in turn leads to activation of the Wnt signalling pathway. Activation of the Wnt signalling pathway is a major tumour-initiating event in the colonic epithelium. The low level of the APC-associated β-catenin degradation complex results in the formation of free cytoplasmic β-catenin that enters the nucleus and activates Wnt-regulated genes through its interaction with TCF (T-cell factor) family transcription factors and concomitant recruitment of coactivators (Survivin, c-Myc and Cyclin D1). As a consequence, there is a lack of apoptosis and increased proliferation of abnormal cells, resulting in autonomous growth and formation of adenoma. During the course of carcinogenesis, cells in the adenoma accumulate a few other genetic alterations leading to activation of other signalling pathways, e.g. the mitogen-activated protein kinase (MAPK), phosphatidylinositol 3-kinase (PI3K) and transforming growth factor-beta (TGFβ) pathways. The let-7 miRNA family, miR-18a* and miR-143 are adept at inhibiting KRAS translation, hence switching "off" MAPK phosphorylation and inactivating the downstream transcription factors c-Myc, c-Fos and c-Jun. Furthermore, targeted degradation of PTEN and p85β by miR-21 and miR-126, respectively, blocks the PI3K-Akt pathway. These changes drive the early adenoma to a large advanced adenoma. The loss of p53 function is associated with low expression levels of the miR-34a family, indicating the role of this miRNA family in the transformation of adenoma to carcinoma.
Diagnostic utility of circulating miRNAs
Recent studies have shown that tumour-derived miRNAs are present in human body fluids in a remarkably stable form and are protected from endogenous ribonuclease activity. In three different studies, researchers have evaluated the suitability of circulating miRNAs as a diagnostic biomarker for CRC [50]. Preliminary studies suggest that CRC-derived miRNAs are present in the circulation at detectable levels [51][52][53][54] and can accurately distinguish healthy controls from patients with CRC (Table 2). The significantly high sensitivity and specificity for detection of CRC hold promise for the use of circulating miRNAs as a diagnostic biomarker for CRC. Furthermore, the ability of a miRNA-based blood assay to detect colonic adenoma could lead to its use in early detection and bowel cancer screening.
Although a small number of studies have identified circulating miRNAs in CRC patients, their clinical utility is still questionable. This is due to overlapping miRNA expression with other solid organ cancers and benign colonic diseases, and the variability of individual miRNA expression with tumour site and stage. It is possible that a tumour-tissue-specific expression signature/profile may prove more informative and accurate in future clinical studies. Furthermore, the discovery of exosome-mediated transport of miRNAs into the circulation has shifted the focus of miRNA studies towards the isolation of tissue-specific circulating exosomes and their contained miRNAs [55].
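For readers unfamiliar with how the sensitivity and specificity figures cited for circulating-miRNA assays are derived, the snippet below computes them from a confusion matrix. The counts are invented for illustration and do not correspond to any study in Table 2.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a plasma miRNA assay evaluated on
# CRC patients vs. healthy controls (illustrative numbers only).
tp, fn = 85, 15   # CRC patients: assay positive / assay negative
tn, fp = 90, 10   # healthy controls: assay negative / assay positive
sens, spec = sensitivity_specificity(tp, fn, tn, fp)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```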
Principles of miRNA therapy
The fact that miRNAs regulate multiple genes in a molecular pathway makes them excellent candidates for gene therapy. One of the most appealing properties of miRNAs as therapeutic agents is their ability to target multiple genes, frequently in the context of a network, making them extremely efficient in regulating distinct biological cell processes relevant to normal and malignant cell homeostasis. The rationale for using miRNAs as anticancer drugs is based on two major principles: (1) miRNA expression is deregulated in cancer compared with normal tissues, and (2) the cancer phenotype can be changed by targeting miRNA expression. MiRNA-based gene therapy builds on these two principles, whereby manipulation of miRNA expression levels in cancer tissue results in inhibition of tumour growth, apoptosis, blocking of invasion, angiogenesis and metastasis. Restoration of the normal equilibrium of cancer-related miRNA expression levels can result in growth retardation and reduced cell viability in both in vivo and in vitro experiments. The major obstacle in gene therapy is safe delivery to the specific target tissue without side effects. Rapid degradation by body nucleases and poor cellular uptake owing to the unfavourable chemical structure of synthetic miRNAs have forced researchers to try chemical modifications of synthetic oligonucleotides as well as more effective means of delivery. To overcome these delivery hurdles, viral and non-viral strategies have been developed. Restoration of tumour-suppressor miRNAs in cancer cells is usually achieved in vitro by using adeno-associated virus (AAV) vectors. These vectors do not integrate into the genome and are eliminated efficiently with minimal toxicity. There are multiple AAV serotypes available, allowing for the efficient targeting of specific tissues, including colorectal tissue. The ability of miRNAs to regulate several genes does create a potential problem in terms of side effects. This is a major concern in miRNA therapeutics, as such interactions may lead to the formation of toxic phenotypes in targeted cells [56]. This has been approached by using nanoparticles and tissue-specific non-viral vectors. However, the concentration-dependent knockdown of non-specific targets still remains an unresolved issue. This article provides a comprehensive review of the different approaches for restoring tumour suppressor miRNA expression and methods of knocking down tumour promoter miRNAs in cancer cells. Not all the methodologies mentioned in this article have been applied to CRC cells, but all have been investigated as therapies for other cancers. The description and elaboration of these methods may provide insight into miRNA therapies in CRC.
Blocking oncogenic MiRNAs using antisense oligonucleotides
The demonstration that oncogenic miRNAs are upregulated in cancer provided a rationale to investigate the use of antisense oligonucleotides to block their expression. Antisense oligonucleotides work as competitive inhibitors of miRNAs, presumably by annealing to the mature miRNA guide strand and inducing degradation of the mature miRNA (Figure 5). The stability, specificity for target miRNAs, and binding affinity of antisense oligonucleotides have been optimised by modifications to the chemical structure of the oligonucleotides [57]. In particular, the introduction of 2′-O-methyl or 2′-O-methoxyethyl groups into oligonucleotides enhances resistance to nuclease enzymes and improves binding affinity to RNA [58]. The silencing of endogenous miRNAs by this novel method has been shown to be long lasting, specific and efficient both in vitro [59,60] and in vivo. In CRC cell lines, antimiR-based blockade of oncogenic miRNAs (miR-20a, miR-21, miR-31, miR-95, miR-672) has been shown not only to reduce cell proliferation, transformation and migration, but also to result in enhanced sensitivity to chemotherapy agents [61][62][63][64][65][66]. This strategy of sensitizing chemotherapy-resistant tumour cells by altering miRNA expression may result in improved responses to traditional chemotherapy agents. Table 3 summarizes the studies manipulating oncogenic miRNAs in colorectal cell lines.
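At the sequence level, an anti-miR oligonucleotide is simply the reverse complement of the mature guide strand; the sketch below computes such a DNA antisense sequence. The guide-strand sequence used is a placeholder, not a real miRNA, and chemical modifications such as 2′-O-methyl or LNA groups are not represented.

```python
RNA_TO_ANTISENSE_DNA = {"A": "T", "U": "A", "G": "C", "C": "G"}

def antimir_dna(mature_mirna):
    """Return the DNA antisense oligonucleotide (5'->3') that is fully
    complementary to a mature miRNA guide strand (5'->3')."""
    return "".join(RNA_TO_ANTISENSE_DNA[b] for b in reversed(mature_mirna))

# Placeholder guide-strand sequence, for illustration only.
print(antimir_dna("UAGCUUAUCAGACUGAUGU"))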
Blocking oncogenic MiRNAs using locked nucleic acid (LNA) constructs
The LNA-construct-based anti-miR therapeutic strategy has been extensively explored by researchers studying miRNA-based antiviral therapy for chronic hepatitis C [66]. LNA nucleosides are a class of nucleic acid analogues in which the ribose ring is 'locked' by a methylene bridge connecting the 2′-O atom and the 4′-C atom. By locking the molecule with the methylene bridge, LNA oligonucleotides display unprecedented hybridization affinity towards complementary single-stranded RNA and complementary single-stranded or double-stranded DNA [68]. In addition, they display excellent mismatch discrimination and high aqueous solubility. Such 'LNA anti-miR' constructs have been used successfully by Valeri and colleagues [67] to knock down miR-21 in colon adenocarcinoma cell lines.
Blocking oncogenic MiRNAs using MiRNA sponge constructs
MiRNA-sponge-based techniques to manipulate oncogenic miRNAs are still in their infancy and have not been studied in detail in CRC cells. MiRNA sponges are transcripts that contain multiple tandem binding sites for a miRNA of interest and are transcribed from mammalian expression vectors. Ebert and colleagues recently reported the use of miRNA sponges in mammalian cells [69]. The authors reasoned that miRNA target sequences expressed at high levels could compete with bona fide targets in a cell for miRNA binding (Figure 5). To increase the affinity of these decoy transcripts, the researchers introduced not only multiple miRNA binding sites, but also a bulge at the position normally cleaved by argonaute 2, therefore facilitating the stable association of miRNA sponges with ribonucleoprotein complexes loaded with the corresponding miRNA. Using these constructs, repression of miRNA targets was observed and proved effective for in vitro silencing of miRNAs [69]. These effects were comparable with those obtained with 2′-O-methyl-modified oligonucleotides or LNA antisense oligonucleotides. Furthermore, sponges that contained only the heptameric seed were shown to effectively repress an entire miRNA family, which by definition shares the same seed sequence [69].
Blocking oncogenic MiRNAs using MiRNA masking antisense oligonucleotides
The miRNA-masking strategy is designed to target single cancer signalling pathways. The miR-mask (miRNA-masking antisense oligonucleotide) technology was developed by Xiao and colleagues [70]. In contrast to miRNA sponges, miR-masks consist of single-stranded 2′-O-methyl-modified antisense oligonucleotides that are fully complementary to predicted miRNA binding sites in the 3′ UTR of the target mRNA [70]. In this way, the miR-mask covers up the miRNA-binding site to hide its target mRNA (Figure 5); its effects are therefore gene-specific. This technology has been applied successfully in a zebrafish model to prevent the repressive actions of miR-430 in the transforming growth factor-β signalling pathway [71]. Although unwanted or off-target effects can be dramatically reduced with this approach, this may be a disadvantage for cancer therapy, in which the targeting of multiple pathways may be desirable.
Blocking oncogenic MiRNAs using inhibitors of oncogenic pathways
Several drugs may have the ability to modulate the expression of miRNAs by targeting signalling pathways that ultimately converge on the activation of transcription factors, which in turn regulate miRNA-encoding genes. Furthermore, it is possible to modulate the machinery that contributes to miRNA maturation and degradation processes. The identification of such compounds, however, is not straightforward and requires efficient screening of chemical libraries. Recently, Gumireddy and colleagues identified a method to screen for small-molecule inhibitors of miRNAs [72]. As a proof of concept for this approach, the investigators selected the frequently studied and up-regulated miRNA, miR-21. Complementary sequences to miR-21 were cloned into a luciferase reporter gene, which was then used as a sensor to detect the presence of specific mature miRNA molecules. The construct was transfected into HeLa cells, which express high miR-21 levels, resulting in low luciferase activity. Subsequently, a primary screen of small-molecule compounds was performed, and an initial hit compound, diazobenzene 1, produced a 250% increase in the intensity of the luciferase signal relative to untreated cells [72]. Additional characterization showed that this compound affects the transcription of miR-21. This strategy could be applied to the screening of small molecules as inhibitors of other distinct oncogenic miRNAs. These could be used with conventional cancer therapeutics to develop novel combined approaches for cancer treatment.
Restoration of tumour-suppressor miRNAs
The loss or downregulation of a tumour-suppressor miRNA can be overcome by introducing synthetic oligonucleotides, i.e. mature miRNA mimics, miRNA precursors or pre-miRNA mimics, into CRC cells (Figure 6). The introduction of synthetic miRNAs with tumour-suppressor function into cancer cells has been shown to induce cell death and block cellular proliferation, transformation, invasion and migration in several studies, as summarized in Table 4. The altered expression of tumour suppressor miRNAs has been studied in the context of cancer-associated transcription factors. P53 mutations have been found in 40-50% of CRCs. The p53 protein is a transcription factor that regulates multiple cellular processes in CRC development, either by regulating mRNA directly or by regulating miRNA indirectly. The absence of p53 mutations in adenomas suggests that loss of p53 is a critical step in the progression of adenoma to carcinoma [73,74]. In addition, the miR-34 family has been strongly linked to p53, and loss of p53 has been linked to reduced levels of miR-34 in cancer cells. miR-34a restoration studies [75,76] have clearly demonstrated reduced cell survival, invasion and migration in CRC cell lines. In addition to the link with the p53 pathway, miR-34a-encoding genes have themselves been identified as targets for mutational or epigenetic inactivation in different cancers. Interestingly, miR-34a resides on the chromosomal locus 1p36, which has been proposed to harbour a tumour suppressor gene because it displays homozygous deletions in neuroblastoma and in other tumour types [77]. An unbiased screen for genes with tumour-suppressive function on 1p36 also revealed miR-34a as a candidate tumour suppressor gene [78]. Therefore, miR-34-targeted gene therapies hold prime importance in designing therapies for chemoprevention and for halting tumour progression. miR-143 is another tumour suppressor miRNA found significantly downregulated in CRC tissues (Table 1). The most significant study in this respect is the restoration of miR-143 with a miR-143 precursor, resulting in reduced proliferation in SW480 colorectal cell lines and tumour suppression in xenografted tumours of DLD-1 human CRC cells [79,80]. Restoration of a tumour suppressor miRNA by intravenous injection in a mouse with liver cancer resulted in the suppression of tumorigenicity, with reduced tumour growth and enhanced tumour apoptosis without signs of toxicity. This illustrates that gene therapy based on a tumour suppressor miRNA restoration strategy, if delivered efficiently to specific tissues, may prove vital in future cancer treatments.
Modulation of miRNA processing
Alterations in the miRNA processing machinery have also been implicated in cancer development, and modulation of this machinery, in part or in general, can potentially lead to the discovery of new anticancer therapies.
Researchers have investigated the global repression of the miRNA maturation process in cells and have identified that abrogation of the miRNA processing pathway promotes cellular transformation and tumorigenesis [91]. The inhibition of Dicer1 activity on its own has also been associated with cancer development, invasion and lymph node metastasis [91][92][93][94]. Therefore, speeding up miRNA processing globally or replacing Dicer1 in cancer cells could alter their progression and invasive potential. However, the effects of alterations in the miRNA biogenesis pathway have been found to vary for different tumour types [92,93], and the therapeutic strategy will vary depending on the response to inhibition or acceleration of miRNA processing.
Cancer stem cell directed miRNA therapy
Cancer stem cells or tumour-initiating cells have recently gained enormous attention. According to this hypothesis, a subpopulation of cancer cells possesses the unique characteristics of self-renewal and multipotent differentiation, fundamental characteristics of embryonic and somatic stem cells [95]. As cancer stem cells or tumour-initiating cells are highly resistant to conventional chemotherapy and radiotherapy [96,97], there is a further need for stem-cell-targeted therapy. Accumulating evidence indicates that miRNAs play functional roles in normal and cancer stem cell maintenance and differentiation. MiRNA expression signatures of differentiated cells are distinctly different from those of embryonic and somatic stem cells [98,99]. However, miRNA expression in cancer stem cells shows significant similarities with that of embryonic and somatic stem cells. Monzo and colleagues studied the miRNAs common to CRC and embryonic tissue and suggested that the miR-17-92 cluster and its target, E2F1, exhibit a similar pattern of expression in human colon development and colonic carcinogenesis [43]. Further studies dealing with miRNA biogenesis and stem cells have suggested that mutation of key proteins in the miRNA biogenesis pathway leads to failure to maintain self-renewal and differentiation capacities [99][100][101][102]. Recent studies have demonstrated the role of let-7 in self-renewal, differentiation and regulation of progenitor maintenance. Given the low expression of let-7 in cancer stem cells, restoration of stem-cell-related miRNA expression levels in cancer cells might prove pivotal in treating cancers by modulating cancer stem cells.
Conclusions
In summary, there is strong evidence that miRNAs play a significant role in CRC development and progression. There is emerging evidence that miRNAs interact with different cancer signalling pathways and control cellular homeostasis. Changes in miRNA expression levels result in alteration of this homeostasis and contribute significantly to cancer development and progression. Cancer-specific miRNAs are detectable in body fluids and can potentially be used as novel biomarkers for CRC detection and prediction of cancer-specific survival. MiRNAs influence oncogenic potential, and therefore strategies to manipulate oncogenic or tumour suppressor miRNAs can successfully halt tumour progression. Furthermore, miRNA manipulation strategies can potentially be used as an adjuvant to other forms of anticancer therapy, as miRNA manipulation can be used to sensitise drug-resistant tumours. Finally, stem-cell-directed miRNA-based therapies could be used to control the stemness and latency of cancer stem cells in order to prevent tumour recurrence. MiRNA-based gene therapy is still in its infancy, but it does hold great potential to replace current anticancer therapies. Further work is needed on efficient, tissue-specific delivery systems and strategies to avoid side effects.
| 6,664 | 2012-06-20T00:00:00.000 | ["Biology", "Medicine"] |
PERMEABILITY OF INTERLANGUAGE SYSTEM: A CASE STUDY OF STUDENTS LEARNING ENGLISH AS A FOREIGN LANGUAGE AT SMP MUHAMMADIYAH 5 SURAKARTA
The research deals with the permeability of students' interlanguage system as reflected in the compositions of students at SMP Muhammadiyah 5 Surakarta. The aims of the research are (1) to describe the types of permeability, (2) to describe the sources of the influence on students' IL system, and (3) to describe the influence frequency in students' IL system. The type of this research is qualitative research. The data of this research are erroneous sentences found in the students' compositions. The methods of collecting data are elicitation and document analysis. The writer uses descriptive analysis by Celce Marcia and a modified framework of Error Analysis by Shridar as techniques for analyzing the data. The results indicate that (1) the permeability is found at the levels of morphology and syntax, (2) the sources of the influence are the students' mother tongue (Indonesian) and the target language (English), and (3) the influence frequency of the mother tongue on students' interlanguage system is 48% and that of the target language on the students' interlanguage system is 52%. The conclusion is that the learners' interlanguage is open to influence from outside and from inside the language system. It progressively approaches the target language as a result of learners' attempts at constructing a new linguistic system.
INTRODUCTION
English has become a global language that demands foreigners learn it. In Junior High School, English is introduced as one of the subjects of study. Students learn it as a foreign language. English is not easy to learn because it consists of four skills: listening, speaking, reading and writing. It is difficult to master them because learners should take into account three language aspects: structure, pronunciation, and vocabulary. Students are limited by a period of time in learning English, and English is also included in the final examination as a requirement to graduate. As a result, this adds to the difficulty of learning English.
There are three previous studies related to this study. The first previous research was done by Hobson (Rhode University, 1999), entitled "Morphology Development in the Interlanguage of English Student of Xhosa". The purpose of this study is to investigate whether the features of interlanguage identified in other studies appear in the learner language in the study. He uses a quasi-longitudinal research design as a tool to trace development in the oral interlanguage of six learners of Xhosa over a period of eight months. He also uses a case study approach as the method of collecting data, and the data analysis is primarily qualitative. The result of this study is that learners use morphology from the beginning of the learning process, considering that agreement and inflectional morphology play a central role in conveying meaning in Xhosa.
The second study is Caneday's research (University of North Dakota, 2001), entitled "Interlanguage Coda Production of Hmong Second Language Students of English". The aim of her study is to define and understand the production of syllable-final consonants and clusters by Hmong children (ages 9 and 12) learning English, using a constraint-based theory. She uses Optimality Theory as the method for conducting her research. First, she lists the targeted coda consonants and consonant clusters. Next, she explains the tasks that were used for her study. Then, she gives a profile of the subjects chosen for her study. Finally, she gives details on the transcriptions that were made. The results of this study are that the Hmong language and the English constraints interacted in an ordered fashion, allowing predictable patterns in production. The final consonants and consonant clusters were often deleted or changed by the intermediate Hmong speakers of English, because they had not completely resolved the conflict between what they know in their native language and what they are learning in English.
The last is a study conducted by Sarmedi Agus Siregar (University of North Sumatra, 2004), entitled Analisis Antarbahasa (Interlanguage) Pembelajaran Bahasa Inggris di Politeknik Negeri Medan dan Yanada English Centre Medan, Suatu Studi Kasus (An Interlanguage Analysis of English Learning at Politeknik Negeri Medan and Yanada English Centre Medan: A Case Study). He investigates students' interlanguage system in Medan. The results of this study are that (1) both interlingual and intralingual transfer are found in the students' interlanguage system, (2) there are some overgeneralization forms found in the students' interlanguage system, including auxiliary insertion, auxiliary substitution, inappropriate use of verbs, and inappropriate use of prepositions, (3) there are three stages in the students' interlanguage system, namely the presystematic stage, the systematic stage, and the postsystematic stage, (4) the students' interlanguage system includes deviations, namely orthographic deviation, lexical deviation, and grammatical deviation, and (5) code-switching and code-mixing are found in the students' interlanguage system (Sarmedi, 2004).
The writer uses some related theories in this study. An interlanguage is an emerging linguistic system that has been developed by a learner of a second language (L2) who has not yet become fully proficient but is approximating the target language: preserving some features of their first language (L1), or overgeneralizing target language rules in speaking or writing the target language and creating innovations. Yip (1995:12) describes permeability as "the susceptibility of interlanguage to infiltration by first language and target language rules or forms." Adjemian (1976:301) contributes to the concept of interlanguage by adding three features of IL: systematicity, fossilization, and permeability. Permeability is one of the keys to language development; it means that a learner's knowledge at any stage is not fixed but is open to amendment. Ellis (2000:33) asserts that the learner's grammar is permeable, meaning the grammar is open to influence from the inside and the outside. It can be inferred that students' IL system is influenced by two things, namely the first language and the target language.
From the explanation above, the writer formulates the problem statements of the study as follows: (1) how does the mother tongue system influence the students' IL system?, (2) how does the target language system influence the students' IL system?, (3) to what extent does the students' mother tongue influence the students' IL system?, (4) to what extent does the students' target language influence the students' IL system?, (5) what is the difference in the degree of influence of the mother tongue and the target language on the students' IL system?, and (6) what is the implication of the research results for the English teaching and learning process?
So, the objectives of the study are (1) to describe how the mother tongue system influences the students' interlanguage system, (2) to describe how the target language system influences the students' interlanguage system, (3) to describe to what extent the students' mother tongue influences the students' interlanguage system, (4) to describe to what extent the students' target language influences the students' interlanguage system, (5) to describe the degree of influence of both the mother tongue and the target language on the students' interlanguage system, and (6) to describe the implication of the research results for the English teaching and learning process.
RESEARCH METHOD
The type of research used by the writer is qualitative research. Qualitative research is a type of research that does not use any calculation or statistical procedure. Larsen-Freeman and Long (1991:11) state that the prototypical qualitative methodology is an ethnographic study in which researchers do not set out to test hypotheses, but rather to observe what is present, with their focus, and consequently the data, free to vary during the course of the observation. The researcher describes types of error and analyzes them using surface strategy taxonomy. She describes how the students' interlanguage system can be influenced by the students' mother tongue linguistic system and the target language linguistic system.
The subjects of the study are the eighth grade students of SMP Muhammadiyah 5 Surakarta in the 2012 school year. The object of this study is the erroneous sentences found in the students' compositions. The researcher takes 40 students' compositions at random and lists the erroneous sentences found in them; a total of 108 erroneous sentences were listed.
This research is a study of learners' interlanguage system. Thus, the source of the data is the eighth grade students of SMP Muhammadiyah 5 Surakarta, and the data of this study are the erroneous sentences found in the students' compositions. The researcher takes 40 compositions at random from three different classes of SMP Muhammadiyah 5 Surakarta as suitable data for the research.
In this research, elicitation and document analysis are used as the methods of collecting data. The elicitation technique is a technique to prompt students to produce their own compositions by giving instructions for producing them. In gathering the written materials, the researcher uses a documentation technique that involves the following steps: (1) the researcher gives the instruction to write a free English composition; (2) the researcher reads each student's composition objectively and cautiously; (3) the researcher writes down all the erroneous sentences found in the students' compositions; (4) the researcher lists them and uses them as the data of her research.
The writer uses descriptive analysis by Celce Marcia and a modified framework of Error Analysis by Shridar (2009:136). The steps of the technique for analyzing the data are as follows: (1) in identifying errors, the researcher makes a list of erroneous sentences that do not conform to English rules found in the students' English compositions; (2) in classifying errors, the researcher classifies the erroneous sentences found in the students' English compositions based on linguistic category, surface strategy taxonomy, and the degree of mother tongue and target language influence; (3) in identifying the sources of error, the researcher highlights erroneous sentences influenced by the English and Indonesian systems in order to analyze the source of each error; (4) in describing the degree of influence, the researcher describes the degree of influence of the mother tongue system and the target language system mirrored in the students' interlanguage system; and (5) in drawing conclusions, as the final step, the researcher draws conclusions from the research results obtained in order to answer the problem statements of this research.
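The classification and frequency steps above amount to tallying tagged sentences and converting the tallies into percentages. The sketch below shows that step; the tags and counts are invented for illustration and are not the actual 108-sentence data set.

```python
from collections import Counter

# Each analysed erroneous sentence is tagged with the source of influence
# and the linguistic level at which the permeability appears.
tags = (
    [("Indonesian", "syntax")] * 10 + [("Indonesian", "morphology")] * 2 +
    [("English", "syntax")] * 8 + [("English", "morphology")] * 5
)

total = len(tags)
by_source = Counter(source for source, _ in tags)
by_level = Counter(level for _, level in tags)

for source, count in by_source.items():
    print(f"{source}: {100 * count / total:.1f}%")
for level, count in by_level.items():
    print(f"{level}: {100 * count / total:.1f}%")
```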
RESEARCH FINDING AND DISCUSSION
This part presents the research findings and the discussion of those findings. It answers the problem statements of this study.
Research Finding
The researcher uses the framework of Error Analysis proposed by James, Corder, and Shridar to recognize the errors. These errors are a normal occurrence in students' learning of a foreign language. Learners have a unique system which is different from both the mother tongue and the target language; it is called the interlanguage system. Then, she uses the Dulay, Burt, and Krashen (1982) taxonomy in order to present the classification of errors descriptively. Three sorts of errors based on the Dulay, Burt, and Krashen taxonomy are found in the students' compositions: linguistic category, surface strategy taxonomy, and comparative taxonomy.
First, in terms of linguistic category, there are two language components affected by errors containing permeability cases: morphology and syntax. At the morphological level, the permeability accounts for 26.67%, while the syntactic level accounts for 73.33%.
Second, in terms of surface strategy taxonomy, permeability is found in three types of error forms: omission, addition, and misformation. Omission accounts for 12% and includes four different categories with various percentages. Addition accounts for 3.3%, including addition of articles (2%) and addition of prepositions (1.3%). Misformation accounts for 4.67%, including regularization (0.67%) and alternating forms (4%).
Third, the writer found permeable sentences concerning the comparative taxonomy. Error types based on the comparative taxonomy are errors classified based on "a comparison between the structure of second language errors and certain other type of construction" (Dulay, Burt, and Krashen, 1982:164). In this case, errors in learners' interlanguage do not result only from first language influence but rather reflect the learners' gradual discovery of the second language system, and the learners' mother tongue system may interfere with the acquisition of the target language or transfer into the learners' developing second language system. The stage of learners' developing second language system is a stage where learners create a new language system. This system is called the interlanguage system, and it is permeable to the influence of both the mother tongue and the target language.
Next, she uses the interlanguage framework introduced by Selinker, Adjemian, and Corder to identify the influence of both the mother tongue and the target language on the students' interlanguage system. She analyzes the data accurately and then classifies them based on the types and sources of influence in the students' interlanguage system. As a result, the writer found two sources of influence on the students' interlanguage system. The first is the students' mother tongue (Indonesian) and the second is the students' target language (English). Both of them have the same chance to interfere with the students' interlanguage system, but the research results show that the students' target language (English) has the greater influence. The influence frequency of the students' mother tongue (Indonesian) is 48%, while the influence frequency of the students' target language (English) is 52%.
The analysis shows that the percentage of the influence of the mother tongue on the students' IL system is 48%. These influences occur at (1) the morphological level and (2) the syntactic level. The first is the morphological level (5.93%), including Indonesian abbreviations (1.3%), Indonesian words (3.33%), and the use of Indonesian words with slight modification (1.3%). The second is the syntactic level (42%), including literal translation using Indonesian patterns (6%) and the use of Indonesian patterns in sentences (36%).
In addition, the percentage of the influence of the target language (English) on the students' IL system is 52%. This is greater than the mother tongue's influence on the students' IL system. The influence of English on the students' IL system includes (1) the morphological level and (2) the syntactic level. First, the morphological level (20.64%) includes free morphemes (18.64%) and bound morphemes (2%). Free morphemes include (a) false friends, (b) omission of articles, (c) addition of articles, and (d) misformation. False friends involve words with similarity in form (1.3%) and words with similarity in meaning (10%). Omission of articles accounts for 0.67% and addition of articles for 2%. Misformation includes regularization (0.67%) and alternating forms (4%). Bound morphemes account for 2%, coming from the omission of -s in plural forms. Second, the syntactic level (31.3%) includes (a) the use of V1 instead of V2, (b) omission of be as copula, and (c) prepositions. The use of V1 instead of V2 accounts for 14.6%, whereas omission of be as copula accounts for 6%. Prepositions account for 10.63%, coming from omission of prepositions (1.3%), addition of prepositions (3.33%), and misuse of prepositions (6%).
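The sub-percentages above can be added up to reproduce the level and source totals reported in this paper; the snippet below does exactly that with the figures as given (small rounding differences against the reported totals are expected).

```python
# Sub-category percentages reported above, grouped by source and level.
influence = {
    ("Indonesian", "morphology"): [1.3, 3.33, 1.3],          # abbreviation, word, modified word
    ("Indonesian", "syntax"):     [6.0, 36.0],               # literal translation, Indonesian pattern
    ("English", "morphology"):    [1.3, 10.0, 0.67, 2.0, 0.67, 4.0, 2.0],  # free + bound morphemes
    ("English", "syntax"):        [14.6, 6.0, 1.3, 3.33, 6.0],             # V1 for V2, copula, prepositions
}

for (source, level), parts in influence.items():
    print(f"{source:10s} {level:10s} {sum(parts):5.2f}%")

for source in ("Indonesian", "English"):
    total = sum(sum(parts) for (s, _), parts in influence.items() if s == source)
    print(f"total {source}: {total:.2f}%")
```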
The research results show that learners' interlanguage progressively approaches the target language as a result of learners' attempts at constructing a new linguistic system. This new linguistic system is called interlanguage. This study emphasizes the permeability of the students' IL system by investigating erroneous sentences containing permeability cases. It may have implications for English teaching and learning. The implications are addressed to the teachers and the students as the subjects of education. The first is for the teachers as important figures in education. The research results enable teachers to know the problems their students are facing, which could lead language teachers to create or pick the method most suitable for teaching English, especially writing. More writing tasks should also be given in order to improve students' writing ability. In addition, one way is for teachers to give an explanation of a new word in English, and in Indonesian when necessary. Another way is to put the word in a given context when introducing a new word to the students. The second is for the students. The students can evaluate their compositions and improve their ability in producing compositions by mastering their problems. The researcher hopes that her study is meaningful for both the students and the teachers in the English teaching and learning process.
Discussion
The finding of the recent research shows that the most dominant influence on the students' interlanguage system comes from the target language (English). The influence frequency of the students' mother tongue (Indonesian) is 48%, while the influence frequency of the students' target language (English) is 52%. Mostly, the students' interlanguage system is influenced by the target language (English), especially at the syntactic level (31.3%). To make the explanation simpler to understand, the researcher presents the influence frequency of both the mother tongue (Indonesian) and the target language (English) in a chart. Learners' interlanguage systems are permeable to infiltration by linguistic elements of the target language. This means that the target language system, which students learn second, also gives rise to errors. Brown (2000:224) states that intralingual transfer is the negative transfer of items within the target language or, put another way, the incorrect generalization of rules within the target language. The learners' mother tongue system may also interfere with the acquisition of the target language or transfer into the learners' developing second language system.
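The chart referred to above is not reproduced in this text; the following is a minimal matplotlib sketch, written by the editor, that would generate a comparable bar chart from the two reported frequencies. The file name and colours are arbitrary choices.

```python
import matplotlib.pyplot as plt

sources = ["Mother tongue (Indonesian)", "Target language (English)"]
frequency = [48, 52]   # influence frequencies (%) reported in the findings

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(sources, frequency, color=["#777777", "#333333"])
ax.set_ylabel("Influence frequency (%)")
ax.set_ylim(0, 100)
for x, value in enumerate(frequency):
    ax.text(x, value + 2, f"{value}%", ha="center")   # label each bar
plt.tight_layout()
plt.savefig("influence_frequency.png")
```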
Conclusion
Based on the preceding discussion, the theory applied in this research is appropriate. The writer concludes that the students' interlanguage system is permeable, that is, open to infiltration from both the mother tongue (Indonesian) and the target language (English). The influence frequency of the students' target language (English) is 52%, while the influence frequency of their mother tongue (Indonesian) is 48%. This study has thus revealed that the most dominant influence on the students' interlanguage system comes from the target language (English). It also shows that the learners' interlanguage progressively approaches the target language as a result of the learners' attempts to construct a new linguistic system.
Limitation and Suggestion
a. Limitation of the Study and Suggestion
The writer realizes that this paper is far from perfect. It is limited to permeability as one of the interlanguage characteristics observed at SMP Muhammadiyah 5 Surakarta. The data studied are compositions written by eighth-grade students of SMP Muhammadiyah 5 Surakarta; the researcher took 40 compositions at random from the subjects of the study. There are many weaknesses in this research concerning both the ideas and the theory, and the writer hopes that future research will be better and more complete than this one.
For example, other researchers could investigate the systematicity or fossilization of students' interlanguage systems at SMP Muhammadiyah 5 Surakarta or at other schools. The writer hopes that this study will be useful for other researchers who wish to conduct another study on the same topic from a different perspective, or to carry out a further analysis of interlanguage.
b. Limitation of the Findings and Suggestion
Based on the present findings, the students' interlanguage system is influenced by both the first language (Indonesian) and the second language (English), with the influence of the second language being greater. From these facts, the researcher suggests that English teachers improve their use of learning strategies. She also wants to show that errors still exist in the students' writing and that errors tell the teacher what needs to be taught. Those errors reflect the students' IL system, which is permeable. English teachers should therefore give the students more motivation so that they become more productive in writing compositions. Motivation should be given not only for writing but also for the other English skills: listening, reading, and speaking.
"Linguistics",
"Education"
] |
Genomic Expedition: Deciphering Human Adenovirus Strains from the 2023 Outbreak in West Bengal, India: Insights into Viral Evolution and Molecular Epidemiology
Understanding the genetic dynamics of circulating Human Adenovirus (HAdV) types is pivotal for effectively managing outbreaks and devising targeted interventions. During the West Bengal outbreak of 2022–2023, an investigation into the genetic characteristics and outbreak potential of circulating HAdV types was conducted. Twenty-four randomly selected samples underwent whole-genome sequencing. Analysis revealed a prevalent recombinant strain, merging type 3 and type 7 of human mastadenovirus B1 (HAd-B1) species, indicating the emergence of recent strains of species B in India. Furthermore, distinctions in VA-RNAs and the E3 region suggested that current circulating strains of human mastadenovirus B1 (HAd-B1) possess the capacity to evade host immunity, endure longer within hosts, and cause severe respiratory infections. This study underscores the significance of evaluating the complete genome sequence of HAdV isolates to glean insights into their outbreak potential and the severity of associated illnesses.
Introduction
Human adenoviruses (HAdVs) belong to the family Adenoviridae and the genus Mastadenovirus. Adenoviruses are non-enveloped, icosahedral viruses with diameters of 80-110 nm and linear double-stranded DNA genomes of approximately 25-48 kbp. The icosahedral capsid consists of 240 capsomeres without a vertex (hexons) and 12 capsomeres with a vertex (pentons). The fiber knob is formed by protein IV homotrimers and has three structural domains: the tail, which is attached to the base of the penton; the shaft of characteristic length; and the distal bulge [1]. The core is composed of the DNA genome flanked by inverted terminal repeats (ITRs) of 145 bases. Four early genes (E1-E4) are transcribed into viral proteins before replication of the viral DNA, and the five late genes (L1-L5) are transcribed into structural proteins after viral DNA replication [2]. More than 110 types of HAdV have been reported so far [3]. Multiple types of HAdV are grouped into species; seven species (A-G) are distinguished on the basis of their immune reactions [4]. The pathogenicity of many types remains unexplored [1]. HAdV infection causes mild respiratory conditions (such as pharyngitis) and conjunctivitis; conjunctivitis due to HAdV infection is more common in Southeast Asia. HAdV replicates very well in the epithelial cells forming the primary barrier of the gastrointestinal tract, respiratory tract, conjunctiva, and urinary bladder [1]. Severe acute respiratory infections, gastroenteritis with dehydration, hemorrhagic cystitis, and meningoencephalitis are the outcomes of fatal HAdV infections, often observed in outbreak cases [5]. A fatal form of HAdV infection is mostly observed in pediatric populations, especially children under 5 years of age, and in immunocompromised patients with comorbidities. On average, 5% to 10% of all febrile illnesses in infants and preschool-aged patients are due to HAdV infections. Although the infections are usually self-limiting, persistent infections are also found in the preschool population and in immunosuppressed patients [6]. HAdV type 5 infection is the most commonly observed in humans: a seroprevalence of about 71-73% due to HAdV4 or HAdV5 was reported in Washington, D.C., USA [7] and in China [8], and a 73% seroprevalence was observed in pediatric patients due to HAdV5 infection [8]. HAdV species B is further subdivided into two sub-species, B1 and B2. HAdV3, HAdV7, HAdV16, HAdV21, and HAdV50 belong to sub-species B1, and HAdV11, HAdV14, HAdV34, and HAdV35 belong to sub-species B2. HAdV3, HAdV7, and HAdV21 are well-known causative agents of severe respiratory infections which have caused epidemic outbreaks in the past and are therefore well studied. On the other hand, HAdV16 and the recently identified HAdV50 are rarely detected and are not well studied. A large amount of genetic variability is also observed in the HAdV3, 7, and 21 types [9].
In both developed and developing countries, adenovirus infections and occasional outbreaks are quite common throughout the year [10,11]. Since December 2022, there has been a noticeable increase in the number of childhood pneumonia cases in West Bengal, India, and adenovirus was identified in the samples as the causative agent of the illness [12]. Since it was associated with the severity of the disease, it is important to assess the entire genome sequence of HAdV isolates in order to conduct a molecular analysis of the adenovirus strains involved in the West Bengal outbreak. In this study, we analyzed whole-genome sequences of 24 randomly selected samples from the West Bengal outbreak of December 2022-March 2023, with the aim of determining the genetic composition of the circulating HAdV types and providing valuable insights into their potential for causing outbreaks and the severity of associated illnesses.
Sample Source
Nasopharyngeal and oropharyngeal swabs from patients with severe acute respiratory illness (SARI) admitted to different tertiary care hospitals in West Bengal and patients coming to the outdoor facility with influenza-like illness (ILI) were referred to the Regional Virus Research and Diagnostic Laboratory, ICMR-NICED, Kolkata for testing.
Real-Time PCR of Respiratory Viral Panel
RNA extractions from 200 µL of nasopharyngeal/oropharyngeal swabs in viral transport media (VTM) were performed using the QIAamp® Viral RNA Mini Kit (Qiagen, Germany, Cat. No. 52906) according to the manufacturer's instructions. The extracted RNA was screened for the presence of respiratory pathogens using a respiratory viral panel (InfA-H1N1, InfA-H3N2, Inf-B, RSV, hMPV, PIV, Adenovirus, Rhinovirus) with AgPath-ID™ One-Step RT-PCR Reagents (Cat. No. 4387391). Thermal cycling was performed at 50 °C for 30 min for reverse transcription, followed by 95 °C for 5 min, then 45 cycles of 95 °C for 15 s and 55 °C for 30 s for annealing and amplification. Data acquisition was performed at 55 °C.
Amplicon Preparation for Ion Torrent NGS Platform (Ion GeneStudio S5 System)
The whole genome of respiratory adenovirus was amplified in five amplicons of ~7 kbp using self-designed primer sequences (Supporting Information Table S1). Invitrogen™ Platinum™ SuperFi™ DNA Polymerase (Catalog number: 12351050) was used to amplify the segments. Thermal cycling was performed at 95 °C for 5 min for one cycle, followed by 30 cycles of annealing for 30 s (95 °C for fragments 2 and 3 and 57 °C for fragments 1, 4, and 5) and extension at 72 °C for 3 min 30 s. The amplified products were gel purified from 0.8% agarose gel using the QIAquick Gel Extraction Kit (Catalog number: 28704) as per the manufacturer's instructions.
Library Preparation for the Ion Torrent NGS Platform (Ion GeneStudio S5 System)
The purified PCR fragments were further purified using AMPure XP magnetic beads (Catalog No.: A63881). The purified products were then quantified on a Qubit 4 Fluorometer using the Qubit™ 1X dsDNA High Sensitivity (HS) and Broad Range (BR) Assay Kits (Catalog number: Q33231). The concentration of all five amplicons for each sample was adjusted to the lowest amplicon concentration among the five, and the amplicons were then pooled for each sample. For each sample, 100 ng of pooled amplicons was sheared to ~450 bp using the Ion Shear™ Plus Enzyme Mix II provided in the Ion Xpress™ Plus Fragment Library Kit (Catalog number: 4471269) as per the manufacturer's instructions. The adaptor and barcode sequences provided in the Ion Xpress™ Barcode Adapters Kit (Catalog number: 4474517) were then ligated and nick-repaired to the sheared products using DNA ligase and the nick repair enzyme provided in the Ion Xpress™ Plus Fragment Library Kit. The ~490-500 bp ligated products were then selected using E-Gel™ Size Select™ II Agarose Gel, 2% (Cat. No. G661012). The size-selected product was then amplified using Platinum™ PCR SuperMix High Fidelity provided in the Ion Xpress™ Plus Fragment Library Kit. The amplified library was purified using AMPure XP magnetic beads. The prepared library was then quantified using the Qubit™ 1X dsDNA High Sensitivity (HS) and Broad Range (BR) Assay Kits. The prepared libraries for each sample were then diluted to a concentration of 100 pM and pooled. Afterwards, 40 pM of the pooled library was loaded on the Ion Chef instrument for clonal amplification, and chip loading was performed according to the manufacturer's instructions. The loaded chip was then inserted into the Ion GeneStudio S5 for sequencing and initial analysis, using the SPAdes assembler [13] and the coverage analysis and variant calling plugins in the Ion Torrent Suite 5.18.1 software.
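The amplicon normalisation and library dilution steps described above are simple proportional calculations; the following Python helpers illustrate them. They are not part of the published protocol: the function names and example concentrations are ours, and only the 100 pM pooling and 40 pM loading targets come from the text.

```python
def equalise_amplicons(concentrations_ng_per_ul):
    """Return per-amplicon dilution factors needed to bring all five amplicons
    of one sample down to the lowest concentration among them."""
    target = min(concentrations_ng_per_ul)
    return [c / target for c in concentrations_ng_per_ul]

def dilution_volumes(stock_pm, target_pm, final_ul):
    """Volume of stock library and of diluent needed to reach target_pm in final_ul."""
    stock_ul = final_ul * target_pm / stock_pm
    return stock_ul, final_ul - stock_ul

# Example: five amplicon concentrations (ng/uL) for one sample (illustrative values)
print(equalise_amplicons([12.0, 8.5, 15.2, 9.8, 8.5]))

# Example: dilute a 450 pM library to the 100 pM pooling concentration in 20 uL
print(dilution_volumes(stock_pm=450, target_pm=100, final_ul=20))
```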
Sanger Sequencing of the 5′ ITR End and 3′ ITR End
The 5′ ITR and 3′ ITR ends were amplified with self-designed primer sequences (Supporting Information Table S1) using Invitrogen™ Platinum™ SuperFi™ DNA Polymerase (Catalog number: 12351050) as per the manufacturer's protocol. Thermal cycling was performed at 95 °C for 5 min for initial denaturation, followed by 30 cycles of 95 °C for 15 s, 53 °C for 30 s for annealing, and 72 °C for 30 s for extension. The amplified products were gel purified from 0.8% agarose gel using the QIAquick Gel Extraction Kit (Catalog number: 28704) as per the manufacturer's instructions. Sanger sequencing PCR was performed using the Applied Biosystems™ BigDye™ Terminator v3.1 Cycle Sequencing Kit (Catalog number: 4337455) as per the manufacturer's protocol. PCR clean-up was achieved using the EDTA-sodium acetate ethanol precipitation protocol, followed by a 70% ethanol wash. The final product was re-suspended in HiDi formamide and sequenced on the ABI 3700 genetic analyzer.
Bioinformatic Analysis
The reference-based consensus for each sample was generated from its VCF file using the "bcftools consensus" algorithm in https://usegalaxy.org/ (accessed on 14 April 2023). The Sanger sequences were visualized, edited, and aligned, and the consensus was prepared using MEGA X version 10.2.2 [14] and BioEdit software version 7.2.5 to obtain the complete whole-genome sequence of the 24 samples. The resulting FASTA sequences were aligned and inspected using MEGA X software. Phylogenetic trees were generated using MEGA X software, employing maximum likelihood estimation with 1000 bootstrap replications. The tertiary and secondary RNA structures of the VA-I region were determined using the trRosettaRNA [15] and Forna (force-directed RNA) [16] web servers, respectively. The homology model of the PKR protein was generated using the SWISS-MODEL server [17] and validated with PROCHECK [18]. The VA-I RNA and PKR complex structure was generated using the HDOCK server [19-23]. We investigated the potential recombination events in the genomes of the Indian HAdVs in this study. The potential recombinants, parental sequences, and possible recombination breakpoints were detected using the RDP, Geneconv, Maximum Chi-Square, Chimaera, BootScan, and SisterScan methods as implemented in RDP4 [24]. In this study, an isolate was designated as recombinant when at least four methods of RDP4 detected it.
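As a rough illustration of the reference-based consensus step described above, the following Python sketch wraps standard bgzip, tabix, and bcftools command-line calls via subprocess. It assumes those tools are installed locally (the study itself used the Galaxy web platform), and the file names are placeholders rather than the actual study files.

```python
import subprocess

def build_consensus(reference_fasta: str, vcf_path: str, out_fasta: str) -> None:
    """Compress and index a per-sample VCF, then apply its variants to the
    reference with 'bcftools consensus' to obtain a sample-level consensus."""
    vcf_gz = vcf_path + ".gz"
    subprocess.run(["bgzip", "-f", vcf_path], check=True)       # produces <sample>.vcf.gz
    subprocess.run(["tabix", "-p", "vcf", vcf_gz], check=True)  # index required by bcftools
    with open(out_fasta, "w") as out:
        subprocess.run(["bcftools", "consensus", "-f", reference_fasta, vcf_gz],
                       stdout=out, check=True)

if __name__ == "__main__":
    # Placeholder file names; one call per sequenced sample.
    build_consensus("KF268125.fasta", "NICED_23-01_1914.vcf", "NICED_23-01_1914_consensus.fa")
```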
Genetic Characterisation of Adenovirus-Positive Clinical Samples Using Whole Genome Sequence Analysis
Since December 2022, there has been a noticeable increase in the number of childhood pneumonia cases in West Bengal, India. In response to this situation, the Virus Research and Diagnostic Laboratory (VRDL) and the Division of Virology of the Indian Council of Medical Research, National Institute of Cholera and Enteric Diseases (ICMR-NICED), located in Kolkata, India, have taken on the task of identifying the primary agents responsible for these pneumonia cases. Screening of the childhood pneumonia samples for respiratory viral pathogens identified adenovirus infections. Whole genome sequencing was performed on HAdV-positive patient samples obtained from 24 randomly selected hospitalized patients during the West Bengal outbreak between December 2022 and March 2023. The samples taken randomly for this study were from patients in the <5 years age group, with sixteen (66.67%) male and eight (33.33%) female patients. Among the 24 samples, 12 (50%) had a history of ICU admission, including seven (29.17%) deceased outcomes (comprising five (71.43%) male and two (28.57%) female patients; see Supporting Information Table S2 for further clinical information). Among the deceased cases, the most common symptom was breathlessness (100%), followed by coughing (85.71%) and fever (85.71%). Other symptoms were nasal discharge/stuffiness (71.43%) and vomiting/nausea (14.29%). On examination, wheezing (85.71%) and crepitations (85.71%) were the most common signs, followed by lower chest in-drawing (42.86%), nasal flaring (28.57%), use of accessory muscles in breathing (14.29%), and apnea (8.33%).
Whole-genome homology searches using BLAST [25] for all 24 samples showed homology to the human mastadenovirus B1 species. Genome length varied between 35,246 and 35,361 bp, and the GC content of all samples was ~51%. The BLAST search of the 24 whole-genome sequences revealed that 21 samples showed similarity with type 7, while three samples were similar to type 3. On the other hand, hexon, penton, and fiber genetic resemblance indicated that, out of the 24 samples, one sample (NICED/23-13/2909) belonged to type 3 and the remaining 23 samples were recombinant strains between type 3 and type 7 HAdVs (Table 1). Among the recombinant strains, 21 samples belonged to type 7, of which 20 samples exhibited the H7F3P7 genetic constitution and one sample (NICED/23-19/3312) showed the H3F3P7 genetic constitution. The remaining two samples (NICED/23-01/1914 and NICED/23-12/2908) belonged to type 3, displaying the genetic constitutions H7F3P7 and H3F3P7, respectively. All the deceased cases were analyzed as adenovirus type 7 with the H7F3P7 genetic constitution except one (NICED/23-19/3312), which belonged to H3F3P7 (Table 1). In total, 24 adenovirus whole-genome sequences generated by NGS were submitted to NCBI GenBank, as described in Supporting Information Table S3.
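The genome length and GC content values quoted above are easy to reproduce from the assembled FASTA files; the dependency-free snippet below is one way to do so (the file name is illustrative).

```python
def read_fasta(path):
    """Return {header: sequence} for a (possibly multi-record) FASTA file."""
    records, header, chunks = {}, None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    records[header] = "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line.upper())
    if header is not None:
        records[header] = "".join(chunks)
    return records

def gc_percent(seq):
    """GC content as a percentage of sequence length."""
    return 100.0 * sum(seq.count(base) for base in "GC") / len(seq)

for name, seq in read_fasta("NICED_23-01_1914_consensus.fa").items():
    print(f"{name}\tlength={len(seq)} bp\tGC={gc_percent(seq):.1f}%")
```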
Recombinant Analysis
The recombination study provided valuable insights into the genetic makeup of various viral strains.Among the analyzed strains, NICED/23-01/1914, NICED/23-05/2220, NICED/23-06/2283, and NICED/23-17/3288 were identified as the major recombinant sequences.Of particular interest, NICED/23-13/2909, NICED/23-19/3312, and NICED/23-05/2220 were identified as potential parent strains involved in the recombination event (Figure 7).This finding suggests that the recombinant strain likely emerged from the genetic mixing of recent circulating strains belonging to species B of India.Notably, the presence of NICED/23-05/2220 as the major parental strain in the recombination process of NICED/23-01/1914 is a significant observation.This indicates that this particular strain may have played a crucial role in driving the adaptation and spread of the virus.In context, the phylogenetic study based on Fiber, Hexon, and Penton gene segments of adenovirus shows that the recombinant strains NICED/23-01/1914 and NICED/23-17/3288 clustered with an identical fiber-hexon-penton (H3F3P7) combination to their possible parental strains NICED/23-05/2220, NICED/23-13/2909, and NICED/23-19/3312.Interestingly, the NICED/23-06/2283 recombinant strain of this study clustered with the H7F3P3 genotype in the phylogenetic tree, and the Hexon-and Penton-based phylogeny represents a divergent cluster from its possible parental strains, NICED/23-19/3312 (H3F3P7) and NICED/23-13/2909 (H3F3P7).In the recombination analysis, maximum breakpoint densities occurred near the Hexon and Penton regions.The amino acid substitutions in the recombinant strain represent prior amino acid substitution patterns that might improve the survival of the recombinant strain, preventing the formation of novel chimeric proteins and maintaining the existing protein fold [26].
Discussion
The scarcity of reported HAdV strains in India is attributed to the limited number of studies conducted on HAdV epidemiology and molecular evolution in the country. Our study has contributed towards enriching the HAdV whole-genome sequence database from India, furthering our understanding of circulating strains. The classification of adenoviruses depends on the major capsid genes, namely the Penton, Hexon, and Fiber genes, which have proven effective for understanding the epidemiology of circulating HAdV strains and have provided valuable insights into the genetic diversity and interrelationships among the strains. Furthermore, homologous recombination of HAdV capsid genes significantly influences the emergence of recombinant HAdVs and can influence HAdV pathogenicity. Therefore, molecular characterization and phylogenetic studies of HAdV relying exclusively on single gene sequences may not provide adequate molecular resolution. Hence, a whole-genome sequencing approach helps to explore the phylogenomic relationships of the currently circulating adenovirus strains [27,28]. This study presents a detailed analysis of the whole-genome sequences of 24 randomly selected samples from the West Bengal outbreak of December 2022-March 2023.
Through comparative analysis of the whole genome sequences obtained from the 24 samples collected during the West Bengal outbreak, it was observed that genetic recombination, spontaneous mutation, and gene shuffling occurred between the circulating strains of human mastadenovirus B1 serotypes.The adenovirus genome is known to undergo significant variation due to nucleotide insertions, deletions, substitutions, and recombination.As evident from previous observations, intra-serotype variability is quite prominent in adenoviruses, particularly in types HAdV 7, HAdV 3, and HAdV 21 [9,29].Among the 24 sequences analyzed in our study, 21 samples showed homology with type 7, and three samples displayed homology with type 3.With regard to clinical outcomes, genotype 7[H7F3P7] showed more virulence.However, their genetic composition varied considerably based on the Penton, Hexon, and Fiber region sequences.
By conducting a comparative analysis of the adenovirus Hexon gene, it was possible to identify four conserved regions and three variable regions [30].These variable regions are distributed among the conserved regions and denoted as C1-V1-C2-V2-C3-V3-C4 [29,30].The variability observed in the Hexon gene in our study is present in variable region 3 (V3).The variability evident within the Hexon gene was further supported by our in silico recombination study, which recognized a maximum breakpoint near the Hexon region.The AA substitutions within the hypervariable regions 1-7 (loops 1 and 2) of the Hexon gene played an important role in developing vaccines and drugs against HAdV [31,32].The RGD loop interacts with αvβ3 or αvβ5 integrins to facilitate the endocytosis process of the virus and HVR1 may be a target of neutralizing antibodies [33].Therefore, analyzing the amino acid substitutions within the interactive domains of Hexon and Penton bases emerges as a valuable avenue for potential interventions.The identified substitutions and insertions may have implications for the virus's virulence, host interactions, and potential immune responses, warranting further investigation.Genetic variability was also evident in the adenovirus-associated RNAs, as well as in the VA-I and VA-II regions.VA-I and VA-II are non-coding structured RNAs that are abundantly expressed during the late phase of the infection.These virus-associated RNAs are known to sabotage host innate immune systems and inhibit protein kinase R (PKR), which is involved in interferon response activation [34,35].The homology of the VA-I region of all sequenced samples showed similarity with miR-197.miR-197 is known to downregulate IL-18 [36], and the downregulation of IL-18 in turn downregulates INF-γ which is known to regulate both innate and adaptive antiviral immunity [37].
Adenovirus E3 genes vary greatly between species, as six to nine E3 proteins are expressed in a species-dependent manner. The E3 genes are known to modulate the cell-mediated immune response and the apoptosis of infected cells. The E3 CR1-Delta proteins correspond to one of the known conserved regions (CRs) and are known to play a complex role in modulating host immune responses; however, the actual mechanism still needs to be deciphered [38-40]. Therefore, the insertion of 25 amino acids (showing similarity with type 16, a newly discovered adenovirus type) in the N-terminal region of E3 CR1-Delta indicates an active and functional E3 CR1-Delta protein and possible modulation of the host immune response. Furthermore, variation in both the VA-RNAs and the E3 region suggests that the currently circulating strains of human mastadenovirus B1 have a great ability to escape the host immune system, persist longer in the host, and cause severe respiratory infections, as observed in the hospitalized pediatric population in West Bengal.
The previous finding highlights a significant aspect of the emergence of novel HAdV pathogens and suggests recombination events occurring between different genotypes within the same species [41,42].This recombination process creates a strong genetic link between the newly formed viral variants due to shared sequences of high homology.The recombination results of this study present intriguing insights into the genetic makeup of various strains of the virus through a recombination analysis and suggest that recent circulating strains in West Bengal, India belonging to genotype B played a crucial role in the emergence of the recombinant strains.Phylogenetic study and genetic recombination analysis have shed light on the dynamic nature of the virus's genetic composition, the role of recombination in its evolution, and the significance of specific strains in driving its adaptation and transmission.Taken together, genetic recombination, base insertion, deletion, and spontaneous mutations all suggest the emergence of a new variant of human mastadenovirus B which is highly likely to cause epidemics and is capable of increasing the severity of the adenovirus infection.Furthermore, the presence of multiple circulating recombinant strains of HAdV and disease severity warrants a claim for regular surveillance and genetic characterization of the circulating adenovirus species.
Conclusions
The exploration into the strains of human mastadenovirus B1 during the West Bengal outbreak from December 2022 to March 2023 unraveled a rich tapestry of genetic complexities and recombination episodes.Our study illuminated the complexities within adenovirus genomes, emphasizing the role of recombination, mutation, and gene shuffling in driving the emergence of novel variants.Our study revealed distinct genetic variations in the VA, Hexon, and E3 region sequences of adenoviral genomes.The variability observed in the VA and E3 CR1-Delta regions of the adenoviral genome was particularly noteworthy, suggesting an active role in modulating the host's immune response which may have potentially induced severe respiratory infections, as evidenced in the hospitalized pediatric population in West Bengal.This understanding is pivotal in devising effective strategies for the containment, treatment, and prevention of the evolving landscape of adenovirus variants.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/v16010159/s1. Supporting Information Table S1: Primers for amplicon preparation for NGS and Sanger sequencing; primers 1-10 were used for amplicon preparation for NGS and primers 11-16 were used for Sanger sequencing for end filling. Supporting Information Table S2: Clinical symptoms and signs of the 24 WGS samples. Supporting Information Table S3: GenBank accession numbers of the sequences submitted to NCBI.
Figure 2 .
Figure 2. Phylogenetic analysis based on the Penton, Hexon, and Fiber genes. The samples marked with ▲ are reference sequences derived from NCBI: KF268315 (HAdV16), KF268125 (HAdV7), and KF268210 (HAdV3). • represents sequences of samples recovered from infection; • represents sequences of samples which had fatal outcomes due to adenovirus infection. (a) Phylogenetic tree based on the Penton gene, (b) phylogenetic tree based on the Hexon gene, (c) phylogenetic tree based on the Fiber gene, and (d) pictorial representation of genotypes based on the Fiber, Hexon, and Penton genes.
Figure 3 .
Figure 3. (a) zPicture (https://zpicture.dcode.org/, accessed on 17 May 2023) analysis based on the BlastZ local alignment algorithm for pairwise comparison between reference sequence KF268125 and NICED/23-17/3288, and (b) overall variation observed in the VA-I and VA-II regions of the sequenced samples.
Figure 4 .
Figure 4. Comparison of changes in the secondary and tertiary VA-I RNA structures and their PKR binding. (a) Representative structure of type 3 and type 7 VA-I RNA (KF268210); (b) representative structure of the new VA-I RNA variant from the WGS-sequenced samples (NICED/23-01/1914).
Figure 5 .
Figure 5. Lollipop plots illustrating amino acid (AA) substitutions. (a) The immunogenic domain of the hexon protein in human adenovirus type 3 and type 7, illustrating the AA substitutions in the hypervariable regions (HVR) within loop 1 (HVR 1-6) and loop 2 (HVR 7) of the hexon protein in the NICED/23-13/2909 strain of type 3 human adenovirus. (b) Amino acid (AA) substitutions in the conserved region, hypervariable region 1 (HVR 1), and Arg-Gly-Asp (RGD) domain of the Penton base protein in human adenovirus type 3 and type 7, illustrating the AA substitutions in the Penton base protein of the NICED/23-13/2909 strain, which belongs to HAdV type 3.
Figure 6 .
Figure 6. Schematic representation of the whole E3 gene (marked in grey), the different E3 products (marked in magenta), and the insertion observed in the E3 CR1-Delta region (marked in orange).
Author Contributions: A.C.: performed experiments, processed raw sequencing data for genome assembly and annotation, submitted sequences to NCBI BankIt, carried out bioinformatics analysis, and wrote the draft manuscript. U.B.: designed primers, performed the recombination analysis, and wrote the draft manuscript. R.G.: processed and screened the clinical samples for the presence of respiratory viral pathogens, collected the final outcomes of the patients, and performed Sanger sequencing. A.D.: processed and screened the clinical samples for the presence of respiratory viral pathogens, critically reviewed and edited the manuscript. A.M.: coordinated the collection of samples from hospitals, and critically reviewed and edited the manuscript. R.S.: processed and screened the clinical samples. M.C.-S.: critically reviewed and edited the manuscript. A.K.C.: conceptualized, designed, and monitored the study, critically reviewed, and edited the manuscript. S.D.: conceptualized, designed, critically reviewed, and edited the manuscript. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: This study was approved by the Institutional Ethics Committee of ICMR-National Institute of Cholera and Enteric Diseases, Kolkata (sanction No. A-10(2)/2022-IEC, dated 28 April 2022).
Table 1 .
Genetic characterization of the whole genome-sequenced samples with clinical details and outcomes, including the homology match based on WGS and the genotype of each sample as determined by Penton, Hexon, and Fiber homology searches. Phylogenetic analysis of the whole-genome sequences of the 24 samples reveals that NICED/23-06/2213 and NICED/23-09/1639 form two separate clades which are closely
"Medicine",
"Environmental Science",
"Biology"
] |
Multi-Objective Optimisation-Based Design of an Electric Vehicle Cabin Heating Control System for Improved Thermal Comfort and Driving Range
Modern electric vehicle heating, ventilation, and air-conditioning (HVAC) systems operate in a more efficient heat pump mode, thus improving the driving range under cold ambient conditions. Coupling those HVAC systems with novel heating technologies such as infrared heating panels (IRP) results in a complex system with multiple actuators, which needs to be optimally coordinated to maximise efficiency and comfort. The paper presents a multi-objective genetic algorithm-based control input allocation method, which relies on a multi-physical HVAC model and a CFD-evaluated cabin airflow distribution model implemented in Dymola. The considered control inputs include the cabin inlet air temperature, blower and radiator fan air mass flows, secondary coolant loop pump speeds, and IRP control settings. The optimisation objectives are to minimise the total electric power consumption and the deviation of thermal comfort, described by the predicted mean vote (PMV) index, from the neutral value. Optimisation results indicate that HVAC and IRP controls are effectively decoupled, and that a significant reduction of power consumption (typically from 20% to 30%) can be achieved using IRPs while maintaining the same level of thermal comfort. The previously proposed hierarchical HVAC control strategy is parameterised and extended with a PMV-based controller acting via the IRP control inputs. The performance is verified through simulations in a heat-up scenario, and the power consumption reduction potential is analysed for different cabin air temperature setpoints.
Introduction
Consumer acceptance of electric vehicles is increasing strongly [1], with the trend bound to continue due to imposed emissions and carbon tax legislation, e.g., in the European Union [2] and China [3]. Although the declared range of current battery electric vehicles (BEV) is typically between 300 km and 500 km, the mass market share is still hindered by long charging times, limited availability of charging infrastructure, and end-users' perception of BEVs as having a limited driving range compared to conventional vehicles.
The EV range drops significantly below the declared one under extremely hot or cold weather conditions, since the heating, ventilation, and air-conditioning (HVAC) system has the highest energy consumption of all auxiliary systems [4,5]. In cold weather, the vehicle range can be decreased by up to 60% [6,7] compared to the declared range obtained at room temperature. This is especially emphasised in BEVs which utilise high-voltage positive thermal coefficient (HV-PTC) resistive heaters for cabin air heating, with a power rating of up to 5 kW for passenger vehicles. Since the coefficient of performance of HV-PTC heaters is at most 1 [8], this can lead to high energy consumption of the HVAC system. New heating concepts have been developed recently for improved BEV heating efficiency in cold weather; these particularly relate to vapour-compression cycle (VCC)-based thermal dynamics and thermal load disturbances from cabin air conditioning sources. The second contribution concerns the analysis of optimisation results and the establishment of a related optimal control strategy to quantitatively recognise and exploit IRP benefits for both steady-state and emphasised transient conditions. The steady-state analysis is more comprehensive than analyses presented in the available literature (see, e.g., [16,17]) in terms of analysing a wide range of HVAC-only and HVAC+IRP cabin air temperature targets using both optimisation and simulation. Moreover, unlike in the available references (e.g., [17,23]), the optimisation and simulation analysis results are exploited to design an optimal HVAC+IRP control system, which is tested under heavily transient heat-up conditions as well.
The presented study includes the following main steps: (i) multi-physical modelling of the HVAC system and passenger cabin (Section 2), (ii) setting the model-based control input allocation map optimisation framework (Section 3), (iii) analysing the optimisation results and related power consumption reduction potential based on using infrared panels (Section 4), and (iv) revealing the structure of optimal control strategy and presenting corresponding simulation verification results for steady-state and transient conditions, again with the emphasis on power consumption reduction potential (Section 5). Section 6 discusses the main results and open problems, while concluding remarks are given in Section 7.
HVAC System Configuration
The considered R290-based HVAC system configuration, operating modes, and thermal energy exchange principles are described in detail in [14] and [23]. The emphasis in this work is on the heat-pump operating mode of the HVAC system, where no use of powertrain/battery waste-heat is considered since it is insufficient for cabin heating [24].
The heat pump mode comprises three distinct thermal energy exchange loops (see Figure 1): the core vapour-compression-cycle (VCC) refrigerant loop (green line), which exchanges thermal energy with a low-temperature secondary coolant loop (blue line) and a high-temperature secondary coolant loop (red line). The secondary coolant loops serve as intermediaries between the VCC and the ambient and cabin air (coloured arrows). The VCC actuators include the electric compressor and the electronic expansion valve (EXV). The secondary coolant loops are equipped with multiple pumps and proportional three-way valves, which allow for implementing more complex operating modes such as dehumidification. In the considered heat-pump operating mode, the low-temperature coolant is at a lower temperature than the ambient air, receives thermal energy through the main radiator, and heats the refrigerant in a chiller (EVAP), while the high-temperature coolant loop is heated by a condenser (COND). The high-temperature coolant flows through the heater core (HC), where it heats up the cabin inlet air. The ambient air is forced through the front main radiator by means of a fan and ram air flow due to vehicle velocity, while the air flow into the cabin is provided by the blower fan. The air properties at the blower fan inlet are influenced by the cabin air recirculation setting, which allows mixing of the fresh air and cabin air in a specified ratio. The HVAC system is equipped with multiple refrigerant pressure and refrigerant and air temperature sensors to allow for implementation of the complex automatic climate control system considered. The cabin inlet air temperature is controlled by a PI controller which commands the compressor speed, and the superheat temperature is controlled by a PI controller which commands the EXV position (dashed black lines). The superheat temperature control ensures that the refrigerant at the compressor inlet is in the gas state.
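The inlet air temperature and superheat loops mentioned above are ordinary PI controllers with actuator saturation. The following sketch is a generic discrete-time PI loop of that kind rather than the authors' implementation; the gains, compressor speed limits, and setpoint values are illustrative placeholders.

```python
class PIController:
    """Discrete PI controller with output saturation and conditional-integration anti-windup."""
    def __init__(self, kp, ki, u_min, u_max, dt):
        self.kp, self.ki, self.u_min, self.u_max, self.dt = kp, ki, u_min, u_max, dt
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        u = self.kp * error + self.ki * (self.integral + error * self.dt)
        if self.u_min <= u <= self.u_max:      # integrate only when the output is not saturated
            self.integral += error * self.dt
        return min(max(u, self.u_min), self.u_max)

# Example: compressor-speed command from the cabin inlet air temperature error
# (gains, speed limits, and temperatures are illustrative, not taken from the paper).
inlet_temp_ctrl = PIController(kp=400.0, ki=20.0, u_min=800.0, u_max=8000.0, dt=1.0)
n_compressor = inlet_temp_ctrl.step(setpoint=45.0, measurement=38.2)  # rpm command
print(n_compressor)
```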
Dymola-Based HVAC and Cabin Air Flow Distribution Models
The multi-physical HVAC system model has been implemented in Dymola, parametrised using available system design data, technical datasheets, and component test bench data, and partly validated using test rig data [25]. Figure 2 depicts the passenger cabin model with associated HVAC inputs (red outline) and the airflow distribution subsystem (green outline). The cabin (red outline) is modelled as a single-zone humid air volume enveloped by a lumped-parameter thermal mass of interior elements such as seats, ducts, panels, and car body [26]. The lumped body thermal mass exchanges heat via forced convection with the ambient due to vehicle traveling at velocity vveh and with the interior cabin air volume. The cabin air volume is additionally subject to various thermal loads Q load, including passenger metabolic load Q met and the solar load Q sol. The cabin inlet air properties including mass flow rate ṁ bf , temperature Tcab,in, and relative humidity RHcab,in are determined by the HVAC system. The inlet air is assumed to ideally mix with the cabin air at a constant pressure. Therefore, the cabin outlet air mass flow rate comprised of the mass flow going to the ambient and recirculated mass flow (depending on the set recirculation ratio) is equal to the inlet air mass flow. The recirculation ratio used for heating is set, herein, to 90% of fresh air intake and 10% of recirculated cabin air.
The passenger thermal comfort is evaluated using the Predicted Mean Vote (PMV) index [27] whose inputs are provided by a look-up table-based cabin airflow distribution model described in [26]. The flow distribution model shown in Figure 2 (green outline) is comprised of four main zones (driver, co-driver, and the two backseat passengers) and each zone is further divided in three body-specific parts: head, torso, and legs. Such discretisation enables the PMV evaluation for different body parts and, therefore, the implementation of localised heating by means of spatially distributed infrared panels. Negative PMV values correspond to the occupant feeling cold, while positive PMV values indicate that the occupant is feeling hot. The best (neutral) thermal comfort is achieved at the PMV value of zero, while the favourable thermal comfort range is defined by PMV values lying between −0.5 and +0.5.
The airflow distribution model is extended with six infrared heating panel (IRP) clusters (Figure 2), i.e., those for driver (uIRP1,2) and co-driver (uIRP3,4) upper body (torso, head) and lower body (four clusters in total), and also backseat passengers (uIRP5,6; two in total). Each cluster is modelled by a thermal mass heated by an internal panel temperature controller that commands heating power, and each panel exchanges convective heat with cabin air. The maximum IRP heating power depends on the panel size, and it equals 390 W and 160 W for driver clusters 1 and 2, respectively. The IRP control level setting uIRP sets the panel target temperature command, which is equal to air temperature for uIRP = 0% and the maximum panel temperature for uIRP = 100% (80 °C for head panels, and 60 °C for other panels).
The inlet air distribution in the cabin is determined by the air vent outlet locations. In the heating mode, the air distribution mode HEAT redirects 75% of the inlet air to the legs and 15% of the inlet air to the torso, while the rest of the air mass flow rate is redirected towards the windshield area (see [25,26] for details and also the definition of other air distribution modes). The inputs to the airflow distribution model are the cabin inlet air temperature Tcab,in and the volumetric inlet air flow q̇bf (outputs of the HVAC model), the cabin air temperature Tcab and humidity RHcab, and the infrared panel control settings uIRP,1-6. Metabolic activity and clothing factors are considered to be constant parameters. The cabin air temperature, the inlet air temperature, and the inlet air volume flow determine the actual air temperature and velocity for each body part used as inputs for the PMV calculation, and they are determined using CFD model-based look-up tables for the given ventilation and recirculation mode [26]. The mean radiant temperature, required by the PMV model, is determined as a linear combination of the cabin air temperature and the IRP temperature [26].

For the purpose of computationally efficient control input allocation optimisation, the cabin thermal model (Figure 2, red outline) is replaced with boundary conditions that determine the cabin air properties (Tcab and RHcab) and the required thermal heating power demand Q̇hR, where the latter accommodates various cabin thermal loads and the heat transferred into thermal masses [23]. The airflow distribution model inputs related to cabin air properties (Tcab and RHcab) are defined by the same boundary conditions, while the blower fan air mass flow ṁbf and the inlet air temperature Tcab,in are provided by the HVAC model. The optimisation model structure is shown in Figure 3a. The considered HVAC control inputs are (Figure 1): the low-temperature coolant pump speed np2, the high-temperature coolant pump speed np3, the main radiator fan power level setting P̄rf, and the blower fan air mass flow rate ṁbf, which is transformed to the blower fan input voltage using polynomial functions Vbf = fbf⁻¹(ṁbf, Tbf,in) as described in [23].
The radiator fan air mass flow and power consumption are determined by three discrete input settings P̄rf ∈ {0, 0.5, 1} corresponding to different fan power levels (turned off, at half power, and at full power). The blower fan power consumption is described by a look-up table. Another HVAC control input is the cabin inlet air temperature reference Tcab,in,R, which is feedback-controlled by varying the compressor speed. Other inputs include the ambient air temperature (Tamb) and relative humidity (RHamb), as well as the cabin air properties inputs (Tcab, RHcab) that replace the cabin model, as described above and in [23].
The cabin airflow distribution model control inputs include the driver-related IRP cluster control settings uIRP,1 and uIRP,2, while the other PMV calculation-related inputs are obtained from the HVAC model (q̇bf and Tcab,in) and the cabin boundary conditions (Tcab and RHcab). The recirculation mode is set to fresh air mode and the ventilation mode corresponds to the above-described HEAT mode.
The relevant outputs are the total power consumption of the HVAC+IRP system Ptot and the driver's comfort index PMVdr (Figure 3a), where the former is expressed as (see Figure 1):

Ptot = PHVAC + PIRP (1)

PHVAC = Pcom + Pbf + Prf + Pp2 + Pp3 (2)

The power consumption of other, low-power, positioning HVAC actuators, such as the EXV stepper motor and cabin air distribution flaps, is not modelled and, therefore, not accounted for.

The driver's comfort is evaluated by an arithmetic mean of the three individual body part PMV values (head, torso, legs):

PMVdr = (PMVhead + PMVtorso + PMVlegs) / 3 (3)

Figure 3b shows the driver's mean PMV map obtained for the HEAT ventilation mode, driving activity (metabolic rate of 1.2), a typical winter clothing factor of 1.2, and with no use of IRPs (i.e., the mean radiant temperature is equal to the cabin air temperature). The driver's PMV is primarily determined by the cabin air temperature Tcab and the blower fan volumetric flow q̇bf, while the effect of cabin air relative humidity RHcab is of minor influence. For the particular CFD model-based look-up tables, the ideal thermal comfort (PMV = 0) is achieved at around 22.5 °C, while the comfortable range (|PMV| < 0.5) is achieved for cabin air temperatures lying between 19 °C and 24 °C for moderate blower air mass flows.
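Equations (1)-(3) reduce to a sum and an arithmetic mean; the short helper below mirrors them directly. The component values in the usage example are arbitrary placeholders, not results from the paper.

```python
def total_power(p_com, p_bf, p_rf, p_p2, p_p3, p_irp):
    """Eq. (1)-(2): total HVAC+IRP electric power consumption in watts."""
    p_hvac = p_com + p_bf + p_rf + p_p2 + p_p3
    return p_hvac + p_irp

def driver_mean_pmv(pmv_head, pmv_torso, pmv_legs):
    """Eq. (3): arithmetic mean of the three body-part PMV values."""
    return (pmv_head + pmv_torso + pmv_legs) / 3.0

# Illustrative component values (W) and body-part PMVs
p_tot = total_power(p_com=2100.0, p_bf=150.0, p_rf=80.0, p_p2=35.0, p_p3=40.0, p_irp=300.0)
pmv_dr = driver_mean_pmv(-0.3, -0.1, 0.2)
print(p_tot, round(pmv_dr, 2))
```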
Control Input Optimisation Framework
The control input optimisation framework has been built around the one developed in [23], where the main extension relates to incorporating the IRPs model and controls.
General Aim
The optimisation is aimed at obtaining optimal control input allocation maps, which minimise the total power consumption while maintaining high passenger thermal comfort. As a part of the hierarchical control strategy (see Section 5 and [23] for details), the optimised allocation maps transform the heating power demand Q̇hR, commanded by a superimposed cabin air temperature controller, into low-level controller inputs/references for the given cabin air conditions (Tcab and RHcab). As described in Section 2 and [23], the above formulation of the allocation optimisation problem allows for omitting the passenger cabin dynamics model, thus leading to a straightforward and computationally efficient optimisation [23].
It should be noted that strictly speaking the optimal allocation ensures optimal control system performance only for quasi-steady-state operation, under which the actual HVAC variables (e.g., T cab,in ) accurately follow the optimal allocation commands (e.g., T cab,in,R ). This is not a major constraint since the (quasi)-stationary HVAC operation usually has a dominant influence on power consumption, because its share of a driving cycle is typically dominant over distinct transient operation (e.g., heat-up transient for which T cab,in can considerably differ from T cab,in,R due to HVAC system thermal inertia) [23].
Objectives and Constraints
The optimisation problem is to find an optimal set of control inputs which simultaneously minimises the following conflicting optimisation objectives: the total actuator power consumption, J1 = Ptot, and the absolute value of the driver's mean PMV, J2 = |PMVdr|. The control inputs are the cabin inlet air temperature reference Tcab,in,R, the radiator fan setpoint P̄rf, the two pump speeds np2 and np3, and the IRP control level settings uIRP1,2 (see Figure 3a and also the blue box in Figure 4). For the optimised control inputs, the blower fan air mass flow rate ṁbf can be determined directly from the following relation:
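A minimal sketch of one such relation is given below, assuming a simple sensible-heat balance between the requested heating power Q̇hR and the inlet-to-cabin temperature difference, ṁbf ≈ Q̇hR / (cp (Tcab,in,R − Tcab)). This assumed form, and the cp value used, are ours and may differ from the exact expression used by the authors.

```python
CP_AIR = 1006.0  # specific heat of air at constant pressure, J/(kg K) (assumed value)

def blower_mass_flow(q_heat_demand_w, t_cab_in_ref_c, t_cab_c, cp=CP_AIR):
    """Blower air mass flow (kg/s) from a sensible-heat balance between the requested
    heating power and the inlet-to-cabin temperature difference (assumed relation)."""
    dt = t_cab_in_ref_c - t_cab_c
    if dt <= 0:
        raise ValueError("Inlet air reference must exceed cabin temperature in heating mode.")
    return q_heat_demand_w / (cp * dt)

# Example: 3 kW heating demand, 45 degC inlet reference, 10 degC cabin air
print(round(blower_mass_flow(3000.0, 45.0, 10.0), 4), "kg/s")
```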
The cost functions J1 and J2 are subject to various hardware and controller setpoint tracking constraints. The control input constraints (left column of Table 1) relate to actuator hardware limits (those of pump speeds and IRP control level settings, as well as the discrete states of the radiator fan power setting) and software limits on the cabin inlet air temperature reference for comfortable operation. Additional hardware-related constraints, such as compressor speed, EXV position, and blower fan control voltage limits, are not included in Table 1, as they are realised within the HVAC model as saturations of low-level feedback controllers.
The controller setpoint tracking constraints (right column of Table 1) are applied to ensure that the reference values of cabin inlet air temperature and superheat temperature are achieved for the optimised set of control inputs under steady-state conditions. In both cases a temperature tracking error of up to 1 °C is tolerated, as defined in Table 1. The third setpoint tracking constraint relates to the requested heating power, where a heating power error of 100 W is tolerated.
Optimisation Method
Optimisation of the HVAC control inputs has been carried out using a multi-objective genetic algorithm (GA) called MOGA-II, which is built into the modeFrontier software. MOGA-II employs an effective multi-search elitism method and preserves good (Pareto, or non-dominated) solutions without converging prematurely to a local optimum [28]. The implemented workflow is shown in Figure 4, with the MATLAB node placed at the centre. The MATLAB node is used as an intermediary interface between the Dymola simulation model and modeFrontier. It feeds the control inputs determined by the GA to the Dymola simulation model in an executable form, calculates the steady-state performance and constraint-related indices after the simulation is executed, and feeds them back to the GA. The model simulation is controlled as described in [29], where the model is initialised in accordance with the GA-set control inputs and the simulation time is set to be long enough to ensure that the steady-state condition is reached (1200 s).
In this study, the initial design of experiments for the GA is populated with 20 generations of control input values obtained using a quasi-random Sobol method, and the optimisation runs for 512 GA iterations. Execution of such an optimisation setup for a single operating point defined by Tcab, RHcab, and Q̇hR takes about 40 min when running on a PC with a 3.5 GHz Intel Xeon central processing unit and using three parallel design evaluations.
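As an illustration only (the actual study uses modeFrontier's internal Sobol generator), an initial design of experiments of this kind could be generated with SciPy's quasi-random Sobol sampler. The control-input bounds below are placeholders, not the limits of Table 1.

```python
import numpy as np
from scipy.stats import qmc

# Control-input bounds (placeholder values, not the study's Table 1 limits):
# [T_cab,in,R (degC), n_p2 (rpm), n_p3 (rpm), u_IRP1 (-), u_IRP2 (-)]
lower = np.array([30.0, 500.0, 500.0, 0.0, 0.0])
upper = np.array([60.0, 5000.0, 5000.0, 1.0, 1.0])

sampler = qmc.Sobol(d=len(lower), scramble=True, seed=0)
unit_samples = sampler.random(n=128)          # 128 quasi-random points in [0, 1]^d
doe = qmc.scale(unit_samples, lower, upper)   # scale to the control-input bounds

# The discrete radiator-fan setting (off / half / full power) is handled separately,
# e.g. by sampling uniformly from its three admissible levels.
fan_levels = np.random.default_rng(0).choice([0.0, 0.5, 1.0], size=len(doe))
doe = np.column_stack([doe, fan_levels])
print(doe.shape)  # (128, 6)
```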
The control input constraints from the left column of Table 1 are implemented in the form of a limited GA search space. All optimisation inputs have continuous values between the set bounds, with the exception of the main radiator fan power setting, where only three distinct values are given to the GA to choose from. The controller setpoint tracking constraints are implemented using modeFrontier's constraint node. The MOGA-II algorithm handles these hard constraints as soft constraints in the GA cost function by proportionally penalising constraint violations [30].
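A minimal sketch of how the setpoint-tracking tolerances could be folded into a GA fitness as soft penalties is given below. The penalty weight and function names are illustrative assumptions; MOGA-II's internal penalisation scheme is not reproduced here.

```python
def penalised_objectives(P_tot, abs_pmv, e_T_in, e_T_SH, e_Q_h, weight=1e3):
    """Return the two objectives with a proportional penalty for violated tolerances.

    e_T_in, e_T_SH : absolute tracking errors of cabin-inlet and superheat temperature [degC]
    e_Q_h          : absolute heating-power tracking error [W]
    """
    violation = (max(0.0, e_T_in - 1.0)      # 1 degC tolerance
                 + max(0.0, e_T_SH - 1.0)    # 1 degC tolerance
                 + max(0.0, e_Q_h - 100.0))  # 100 W tolerance
    penalty = weight * violation
    return P_tot + penalty, abs_pmv + penalty

# Example: a feasible design (no penalty) vs. one violating the heating-power tolerance.
print(penalised_objectives(900.0, 0.4, 0.5, 0.3, 50.0))
print(penalised_objectives(850.0, 0.3, 0.5, 0.3, 250.0))
```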
The above described optimisation procedure has been repeated in a loop for the full set of cabin air conditions and heating power demands (Tcab, RHcab, Q̇hR). The considered operating set range, optimisation setting parameters and main simulation parameters are given in Table 2. The following scenario is considered: the driver (with a metabolic rate of 1.2 and a clothing factor of 1.2) is the only passenger in the vehicle, which travels at a velocity of 60 km/h at an ambient temperature of Ta = −10 °C. Without loss of generality, the cabin relative humidity was kept at RHcab = 10%, which was obtained by open-loop simulations of the cabin thermal dynamics model. Recall that the cabin air relative humidity has a negligible impact on the PMV thermal comfort index.
Optimisation Results
Figure 5 shows the optimisation results in the form of Pareto frontiers obtained for two characteristic scenarios: without (squares) and with use of IRPs (circles). These results show that by using IRPs it is possible to significantly improve the thermal comfort (the driver's absolute mean PMV is reduced by around 1 to 1.5 points) in a cold cabin environment (below Tcab = 22 °C) at the expense of somewhat increased power consumption. This is especially advantageous at cabin temperatures that fall marginally outside the comfortable PMV range (e.g., 15 °C, Figure 5d), since using IRPs can then bring the PMV into the comfort range.
On the other hand, the impact of HVAC-only system optimisation on thermal comfort enhancement is marginal (squares in Figure 5), i.e., the PMV can be reduced by only up to 0.2 points. This is achieved by changing the cabin inlet air temperature and flow rate while satisfying Equation (7) (a more detailed discussion of this effect is given with Figure 7). Not only is the thermal comfort improvement marginal, but the power consumption needed to achieve it is rather excessive, particularly for high thermal energy demands Q̇hR and low cabin air temperatures Tcab.
Figure 5. Comparison of multi-objective Pareto optimal results for cases with (circles) and without (squares) use of infrared heating panels (IRPs) at different cabin temperatures (a-f).
Figure 6 shows the trade-off between PMV and HVAC-only power consumption. In the majority of cases the HVAC power consumption does not depend on the IRP control input (circles in Figure 6), i.e., it coincides well with the minimum HVAC power consumption obtained for zero IRP control input. Only at the low-PMV endpoints does the optimiser choose to increase the HVAC power consumption to gain a marginal improvement of thermal comfort (see, e.g., Tcab = −10 °C and Q̇hR = 5.5 kW). This indicates that the HVAC and IRP controls can be effectively decoupled with a very marginal effect on system performance. The HVAC system should be optimised for minimum power consumption due to its marginal impact on PMV, while the IRPs should be used to improve the thermal comfort as much as needed due to their relatively minor effect on the total power consumption (Figure 5).
Figure 7a shows the blower fan air mass flow vs. the cabin inlet air temperature for different heating power demands Q̇hR and the corresponding total power consumption Ptot. At the considered low cabin air temperature, the spread of cabin inlet air temperatures and blower fan air mass flows for the case with no infrared panels used is rather wide. The lowest power consumption is achieved at the left end points (low inlet temperature/high mass flow; see squares in Figure 7a), while the maximum comfort (and the highest power consumption) is achieved with low air mass flow and high inlet temperature (star symbols). The optimisation results for the case with infrared panels included (diamonds) are mostly grouped around the lowest total power consumption end (low inlet temperature/high mass flow).
The optimised evaporator and condenser pump speed control inputs np2 and np3 are given in Figure 7b,c, respectively. Higher pump speeds are preferred for better thermal comfort for most of the heating power demands Q̇hR. However, this increase in pump speeds has a major impact on the HVAC power consumption, since it correlates with higher cabin inlet air temperatures and, thus, higher speeds of the compressor as the largest HVAC consumer.
The optimised radiator fan control input plot is shown in Figure 7d. The radiator fan is turned off for the majority of operating points, while half or full power is demanded only at high heating power demands Q̇hR, with the exception of the heating demands of 2000 W and 2500 W and the maximum comfort case. For the considered constant vehicle velocity of 60 km/h, this means that the ramming air mass flow through the main radiator is high enough for sufficient thermal energy exchange.
Figure 7e shows the blower fan air mass flow vs. the cabin inlet air temperature for the near-neutral comfort cabin temperature of 20 °C. The trade-off between cabin inlet air mass flow and temperature is narrower compared to the low cabin air temperature, and the minimum total power consumption is achieved at a lower air mass flow rate, unlike at the low cabin air temperature. Similarly, the spread of pump speeds is narrower (Figure 7f,g), which is connected with the narrower spread of inlet air temperatures. The ramming air mass flow through the main radiator is sufficient under the high cabin air temperature conditions, and the radiator fan control input is set to zero (Figure 7h).
The optimised IRP control input plots are shown in Figure 8, again for the cabin air temperatures of −10 °C and 20 °C. The total consumption of the two IRP clusters (colour axis) depends on both IRP control inputs (uIRP1 and uIRP2) and the local (cabin) air temperature. To achieve the IRP control input-demanded target radiant surface temperature at the lower cabin air temperature, the IRPs require greater input power than at the warm cabin air temperature (cf. left and right columns of Figure 8). This is due to the convective heat exchange of the IRP surface with the surrounding air.
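The temperature dependence of the IRP input power can be rationalised with a simple steady-state surface energy balance (an illustrative approximation, not the model used in the study): PIRP ≈ h A (Tsurf − Tcab) + ε σ A (Tsurf^4 − Tsurr^4), where Tsurf is the control-input-demanded panel surface temperature, h the convective heat transfer coefficient, A the panel area, ε its emissivity, σ the Stefan-Boltzmann constant, and Tsurr the mean temperature of the surrounding surfaces. For a fixed Tsurf, a colder cabin increases the convective term and hence the required electrical input power.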
As discussed with Figure 5, increasing the IRP control input (and consequently the IRP power consumption) results in reduced PMV, i.e., improved thermal comfort. The spread of PMV values at a given IRP control input level (vertical spread in each subplot of Figure 8) is explained by the fact that Cluster 1 (chest and head; uIRP1) and Cluster 2 (legs; uIRP2) can operate at different control input settings.
At the low cabin air temperature (Tcab = −10 °C), the maximum comfort (star symbols) is achieved with both panels operating at the maximum IRP control input. At the near-neutral cabin temperature (Tcab = 20 °C), the maximum (lumped PMV-based) comfort is achieved by setting the IRP control inputs to approximately 25% and 50% for the head/chest and leg areas, respectively.
Power Consumption Reduction Analysis
The Pareto optimal results presented in Section 4.1 have indicated that HVAC and IRP control actions for a given cabin air temperature and heating demand are strongly decoupled, where the HVAC system should be tuned for minimum power consumption while the IRPs should be adjusted for favourable thermal comfort. This opens the possibility of reducing the total power consumption by reducing the cabin air temperature target below the nominal one and compensating for the loss of thermal comfort by engaging the IRPs [17]. The optimisation results are further analysed for steady-state conditions to verify if this possibility is viable.
The considered scenarios relate to the cabin air temperatures Tcab = 15 °C and Tcab = 20 °C and to the cases with and without the use of IRPs, respectively. In Section 4.1, the optimisation results are presented for a wide range of heating demands Q̇hR, thus covering the transient and steady-state conditions needed for proper control strategy formulation. In order to obtain the steady-state values of Q̇hR, closed-loop simulations of the cabin thermal model (with no solar load) have been conducted, which give Q̇hR = 1033 W for Tcab = 15 °C and Q̇hR = 1201 W for Tcab = 20 °C. The thermal comfort of |PMVdr| = 0.36 obtained at Tcab = 20 °C is taken as the setpoint for determining the IRP control input at Tcab = 15 °C, both based on the optimisation results. Figure 9a,b show the Pareto optimal fronts, redrawn from Figure 5d,e for the heating power values Q̇hR = 1000 W and Q̇hR = 1500 W in a form with a substituted x-axis and colour bar. The total power consumptions for the aforementioned steady-state heating power values, Q̇hR = 1033 W for Tcab = 15 °C and Q̇hR = 1201 W for Tcab = 20 °C, are calculated by linear interpolation of the charts corresponding to Q̇hR of 1000 and 1500 W for the comfort level |PMVdr| = 0.36 (see horizontal lines and hollow star symbols in Figure 9a,b). Figure 9c shows the obtained power consumptions for the different scenarios and the same thermal comfort level. The total power consumption (green bars) at the reduced cabin temperature of 15 °C with IRPs engaged is almost 20% lower than at the cabin air temperature of 20 °C with convective heating only. This is because the saving in HVAC power consumption when reducing the target cabin air temperature to 15 °C (blue bars) is greater than the extra power consumption taken by the IRPs in that case (red bars).
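The interpolation step described above amounts to a simple linear look-up between the two charted heating power levels. The sketch below reproduces the mechanics with placeholder chart readings (the power values are illustrative, not figures from the paper).

```python
import numpy as np

# Total power consumption [W] at Q_hR = 1000 W and 1500 W for |PMV_dr| = 0.36
# (placeholder chart readings, not the paper's data).
Q_grid = np.array([1000.0, 1500.0])
P_tot_15degC = np.array([950.0, 1250.0])   # with IRPs,  T_cab = 15 degC
P_tot_20degC = np.array([1150.0, 1500.0])  # HVAC only, T_cab = 20 degC

P_at_1033 = np.interp(1033.0, Q_grid, P_tot_15degC)
P_at_1201 = np.interp(1201.0, Q_grid, P_tot_20degC)
print(P_at_1033, P_at_1201, 1.0 - P_at_1033 / P_at_1201)  # relative saving
```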
Control Strategy
Optimisation results presented in the previous section demonstrated that the HVAC and IRP controls are effectively decoupled. Therefore, the HVAC control strategy can be based on the hierarchical structure proposed in [20] and refined in [23], with the control input allocation maps set for minimal HVAC power consumption. Such an HVAC control system is shown in Figure 10, extended with IRP control based on PMV feedback for improved comfort.
The high-level HVAC control subsystem consists of a superimposed cabin air temperature controller and optimal HVAC control input allocation maps. A superimposed PI controller regulates the cabin air temperature Tcab by commanding the heating power demand Q̇hR. The heating power demand is limited based on the time-varying maximum feasible heating power for the given cabin air temperature and relative humidity, which is obtained from the optimisation results and implemented in the form of a look-up table. The optimal HVAC control input allocation maps are implemented in the form of look-up tables with the cabin air temperature Tcab and heating demand Q̇hR used as the map inputs. These look-up tables are obtained directly from the optimisation results presented in Section 4 by choosing the design that corresponds to minimal HVAC power consumption (see [23] for details of allocation map generation).
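A compact sketch of the superimposed cabin temperature PI controller with the feasible-heating-power limit implemented as a look-up table is given below. The gains, grids and the simple conditional anti-windup scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

class CabinTemperatureController:
    """Superimposed PI controller commanding the heating power demand Q_hR."""

    def __init__(self, kp, ki, q_max_map):
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.q_max_map = q_max_map  # callable: (T_cab, RH_cab) -> max feasible Q_hR [W]

    def step(self, T_ref, T_cab, RH_cab, dt):
        error = T_ref - T_cab
        q_unsat = self.kp * error + self.ki * self.integral
        q_max = float(self.q_max_map([[T_cab, RH_cab]])[0])
        q_cmd = min(max(q_unsat, 0.0), q_max)
        if q_cmd == q_unsat:              # conditional anti-windup: integrate only if unsaturated
            self.integral += error * dt
        return q_cmd

# Placeholder feasible-heating-power map (illustrative, not the optimisation results).
T_grid, RH_grid = np.linspace(-10, 30, 5), np.linspace(0, 100, 3)
q_max_map = RegularGridInterpolator((T_grid, RH_grid), np.full((5, 3), 5500.0))

ctrl = CabinTemperatureController(kp=400.0, ki=5.0, q_max_map=q_max_map)
print(ctrl.step(T_ref=22.5, T_cab=-10.0, RH_cab=60.0, dt=1.0))
```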
The cabin air inlet temperature and the superheat temperature are controlled by PI controllers, which are given in a modified form with the P action moved into the feedback path. All of the PI controllers are extended with gain-scheduling maps and an anti-windup algorithm. The gain-scheduling maps of the low-level controllers have been designed based on the controller parameter optimisation method described in [20] and [23], and they involve the heating power demand Q̇hR and cabin temperature Tcab as inputs. The superheat temperature reference ΔTSH,R is typically set to a fixed value (5 °C, herein). Additionally, pressure limit controllers of P type are added to reduce the compressor speed if the compressor outlet pressure or the compressor outlet/inlet pressure ratio exceeds a certain safety threshold.
The PMV feedback controller commands the driver-side IRP control setting. The controller is of a proportional type with a dead-zone and output saturation. In its formulation, ePMV = PMVR − |PMVdr| denotes the driver's mean PMV error (which is positive for cold conditions, see Figure 3), PMVR is the PMV reference (set to the ideal value of zero), kPMV is the PMV controller gain, uIRP,max is the IRP control setting limit (set to 1, i.e., 100%), and ΔPMV is the PMV control error threshold for turning off the IRPs (set to 0.5, herein).
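The controller equation itself is not reproduced in the extracted text. The sketch below is one plausible realisation of a proportional controller with dead-zone and output saturation consistent with the description; note that the error is taken here as PMVR − PMVdr (without the absolute value), so that it is positive in cold conditions as the text states, and the gain value is a placeholder.

```python
def irp_control(pmv_dr, pmv_ref=0.0, k_pmv=1.0, u_max=1.0, dead_zone=0.5):
    """Proportional PMV controller with dead-zone and output saturation.

    pmv_dr : driver's mean PMV (negative in cold conditions)
    Returns the IRP control setting u_IRP in [0, u_max].
    """
    e_pmv = pmv_ref - pmv_dr          # positive when the cabin feels too cold
    if e_pmv <= dead_zone:            # within/near comfort: IRPs switched off
        return 0.0
    return min(k_pmv * e_pmv, u_max)  # proportional action, saturated at u_max

print(irp_control(-1.8))  # cold cabin -> high IRP setting
print(irp_control(-0.3))  # near comfort -> IRPs off
```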
Figure 10. Overall hierarchical structure of the HVAC control system including the decoupled IRP-based PMV controller.
Simulation Analysis of IRP-Based Steady-State Power Consumption Reduction Potential
The potential of employing IRP heating for power consumption reduction while maintaining thermal comfort has been investigated in more detail using the overall simulation model consisting of the plant model (Section 2) and the control strategy (Section 5.1). Similarly as in the optimisation-based analysis, steady-state operation is considered. In the first scenario, the difference between the cabin air temperature targets of the HVAC-only and HVAC+IRP systems is kept at a constant value of 5 °C. In the second scenario, the cabin air temperature target is set to the fixed, high-comfort value of 22 °C for the HVAC-only system, while in the HVAC+IRP system the target temperature is reduced from 22 °C in increments of 1 °C down to 14 °C. The PMV reference (PMVR) of the HVAC+IRP system is set to the value obtained in the HVAC-only case, in order to provide a fair power consumption comparison between the two cases. The ambient conditions and other parameters remain the same as in the optimisation study from Section 4. Figure 11 shows the simulation results obtained in the first scenario, where the temperature target difference equals 5 °C. Figure 11a indicates that both the total and the HVAC power consumption, with or without IRP heating, increase with the cabin air temperature Tcab, which is due to the increased cabin thermal load. On the other hand, the IRP power consumption is largely independent of the cabin temperature. The thermal comfort is best (PMV close to 0) at the HVAC-only system cabin temperature of 22 °C (see Figure 11b and cf. Figure 3). For all considered cabin air temperatures, the use of IRP heating results in a power consumption reduction of roughly 300-400 W or 26-32%. At the best thermal comfort point, the power consumption saving is 360 W or 28.6%.
Figure 12 shows the results obtained in the second scenario, characterised by the best thermal comfort setting of the HVAC-only system (Tcab = 22 °C). Figure 12a indicates that the total power consumption of the HVAC+IRP system decreases with the reduction of the corresponding cabin air temperature target, as it is associated with a lower cabin thermal load and, thus, lower HVAC power consumption. However, the decreasing cabin target temperature requires higher IRP power consumption (Figure 12a) to maintain the thermal comfort (Figure 12b), which is feasible up to a temperature target difference of 6 °C; beyond that, the IRP power saturates and the thermal comfort degrades. The total power consumption saving due to the use of IRP heating is 150 W or 12% for a temperature target difference of 2 °C, and 420 W or 34% for a temperature difference of 6 °C. Although the thermal comfort shifts towards cold when the temperature difference is increased beyond 6 °C, it still remains within the comfortable range (PMV > −0.5) for the cabin air temperature of 14 °C, for which the power consumption reduction is 540 W or over 40%.
Heat-up Scenario Simulation Results
The IRP-based power consumption reduction analysis presented in Section 5.2 is related to steady-state conditions. Another opportunity of improving the system efficiency based on IRP use corresponds to heavy transient conditions, such as those occurring during the heat-up process from a low ambient temperature. In the particular heat-up simulation test, the cabin and HVAC system are initialised with respect to ambient conditions given by Tamb = −10 °C and RHamb = 60%, and the goal is to reach the cabin air temperature setpoint Tcab,R = 22.5 °C in 10 min. The vehicle velocity is set to 60 km/h and no solar load is set. In the case of using IRPs, the case of reduced target temperature (Tcab,R = 17.5 °C) is additionally considered.
In all cases, the considered control performance metrics (Table 3) include:
• Total energy consumed (Eel);
• Two thermal comfort indices: (i) the time to reach the comfortable range defined by |PMVdr| < 0.5 (tPMV,05), and (ii) the integral of the absolute mean PMV over the uncomfortable range, C2 [min] = ∫ (|PMVdr|/60) dt for |PMVdr| > 0.5 [20].
Figure 13 shows the comparative heat-up simulation results for the HVAC-only and HVAC+IRP systems for Tcab,R = 22.5 °C. Since the cabin air temperature is not affected by the IRPs, its response and the response of the allocated HVAC control inputs are the same for both systems. The cabin inlet air temperature reaches the value of 40 °C in approximately 4 min and approaches its reference value in 6 min (Figure 13a, blue and red lines). During this transient, the compressor speed is saturated (Figure 13e) and the power consumption (Figure 13h) is significantly increased compared to that occurring in the steady-state condition (i.e., in the final stage of the response). For the same reason, the actual heating power differs from the demanded heating power (Figure 13c), and, therefore, the optimal allocation may be suboptimal during the transient phase.
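A minimal sketch of how these performance indices could be evaluated from simulated traces is given below; the PMV and power traces are synthetic placeholders, while the index definitions follow those listed above.

```python
import numpy as np

dt = 1.0                                        # sampling time [s]
t = np.arange(0.0, 600.0, dt)                   # 10-minute heat-up window
pmv = -2.0 + 1.9 * (1.0 - np.exp(-t / 180.0))   # synthetic driver mean PMV trace
p_tot = 4000.0 * np.exp(-t / 200.0) + 1000.0    # synthetic total power trace [W]

E_el = np.trapz(p_tot, t) / 3.6e6               # total energy consumed [kWh]

abs_pmv = np.abs(pmv)
comfortable = abs_pmv < 0.5
t_pmv_05 = t[np.argmax(comfortable)] if comfortable.any() else np.inf  # time to comfort [s]

# Cumulative discomfort index C2 [min]: integral of |PMV_dr|/60 over the uncomfortable span.
C2 = np.sum(abs_pmv[~comfortable] * dt) / 60.0

print(E_el, t_pmv_05, C2)
```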
In both cases, the cabin air temperature reaches 22 °C in 10 min (green plot in Figure 13a). However, it takes around 8 min for the thermal comfort (Figure 13g) to reach the comfortable range (|PMV| < 0.5) in the HVAC-only case (dashed line), while when employing IRP heating the comfort range is reached in 5 min. The IRP control inputs are initially saturated at their maximum value (Figure 13d) and then decrease as the PMV approaches its target (zero) value (Figure 13g). The saturated IRP control inputs correspond to the maximum IRP target temperatures of 60 °C for Cluster 2 (legs) and 80 °C for Cluster 1 (head and chest; Figure 13b). The panels have a certain thermal inertia, and it takes them around 3 min to reach 45 °C and 6 min to reach 60 °C. Moreover, once the IR panels are turned off, their temperature decreases slowly due to slow convective cooling by the cabin air. During the transient stage, the total power consumption is increased by 500 W when using the IRPs (Figure 13h). Nevertheless, the indices listed in Table 3 under Case 2 indicate that the total energy consumption is only 9% higher, while the cumulative thermal comfort index C2 is reduced by 23% and the time to reach the comfortable PMV range is decreased by 36%.
The remaining allocated control inputs depend on the heating power demand and the cabin air temperature. The pump speeds (Figure 13e) are increased at the start and decrease once the steady-state condition is reached. Similarly, the blower fan flow (Figure 13f) is high at the start to achieve the greater heating power demand. The radiator fan (Figure 13f) is mostly turned off for the considered velocity of 60 km/h (cf. Figure 7g). Table 3 also shows the comparison of thermal comfort and energy consumption indices for the case of the reduced target cabin air temperature of 17.5 °C. Setting the lower cabin air temperature reference reduces the energy consumption by 21% in the HVAC-only case (Case 3), but results in the worst thermal comfort performance: the comfortable range is not reached. When adding IRP heating (Case 4), the thermal comfort remains similar to or even better than in Case 2, but the energy consumption is now reduced by (6 + 9)/109 = 14%.
Discussion
The presented optimisation and simulation results indicate that the HVAC and IRP controls can effectively be decoupled, where the HVAC control system should be optimised for best efficiency, while the IRP heating improves the thermal comfort. The benefits of thermal comfort improvement are present under both steady-state and transient (heat-up) conditions. In the former case, IRP heating allows for reducing the cabin air temperature reference for the HVAC system without compromising thermal comfort, thus resulting in considerable energy savings (around 350 W or 30% for the particular system). In the latter case, the IRPs can improve thermal comfort by locally heating the passenger compartment until the slower HVAC system reaches regular operation.
The proposed IRP controller relies on PMV feedback, which cannot be directly measured. For in-vehicle implementation, the PMV should be estimated from the available sensor measurements (see Section 2.1 for details), such as the cabin air temperature and relative humidity and the cabin inlet air temperature and flow. This requires special attention and calibration due to (i) unmeasurable factors such as the passenger's activity and clothing, which should be carefully set based on the season or ambient temperature; (ii) the lack of an air velocity measurement, which should be replaced by an estimate determined from the commanded blower air mass flow, the air distribution flap positions and calibrated air flow distribution models; and (iii) the lack of a mean radiant temperature measurement, which should then be estimated from the available air temperature sensors and calibrated 3D cabin models. When multi-zone heating is considered, the PMV should be estimated locally, and localised PMV control should be implemented, taking into account different passengers' preferences.
Vehicle velocity and ambient air conditions could potentially impact the performance of the HVAC system. The presented optimisation results were obtained for fixed values of vehicle velocity, vveh = 60 km/h, and ambient air conditions (Tamb = −10 °C, RHamb = 60%). However, the optimisation method is readily applicable to any external conditions, and the input parameter space can be extended with additional 'disturbance' inputs such as vehicle velocity or ambient air temperature. A preliminary study of the 'disturbance' input influence has been conducted, and the results point out that varying the ambient relative humidity has no impact on the HVAC system, while varying the ambient air temperature and vehicle velocity has a marginal influence on the allocation maps. Namely, varying the vehicle velocity changes the ramming air flow through the main radiator and in turn changes the total air flow. However, it is possible to empirically correct the radiator fan power level with respect to vehicle velocity to counteract this disturbance effect. Similarly, a higher ambient air temperature slightly affects the cabin inlet air temperature reference, and this can also be accounted for through an empirical correction.
Due to the thermal inertia of both the HVAC and IRP system components, the control system could be extended with predictive information to anticipate the transient effects, especially the slow cooling of the IRPs when their command is set to zero. A method worth considering is model predictive control, as it is capable of handling the process dynamics, limits, and predictive information.
It should be noted that the HVAC system and airflow distribution models used in this study have been systematically built up, parameterised, and partly validated based on the available test bench data [25]. The models can be further refined/re-parameterised after performing in-vehicle tests if experimental results show considerable discrepancies with respect to simulation results. Similarly, the control strategy can be re-calibrated to improve the performance for final implementation. Experimental validation is an on-going activity conducted both in climate chambers with dynamometers and also through on-road tests, whose results and recommendations are subject of future publications.
Finally, the optimisation and subsequent analyses were performed for extreme ambient conditions (−10 °C), where high power consumption occurs. The system should be tested and/or optimised in less severe ambient conditions to fully exploit the infrared panel heating control strategy.
Conclusions
A multi-objective genetic algorithm-based method for obtaining optimal control inputs has been proposed and applied to an advanced battery electric vehicle HVAC system, equipped with infrared panels (IRPs), for the heat pump operating mode. The obtained Pareto frontiers have indicated that the HVAC and IRP control actions are effectively decoupled. The HVAC system should be controlled to achieve minimum power consumption, as its tuning has a relatively minor impact on thermal comfort in both the heat pump and A/C modes (Appendix A); on the other hand, the IRP heating effectively improves thermal comfort with a modest effect on power consumption.
Based on these findings, the hierarchical control strategy from [23] has been parameterised and extended with a superimposed, proportional-like PMV controller acting through the IRP control channel. The control strategy has been verified by simulation, both for various steady-state conditions and for a heat-up transient scenario. The steady-state, simulation-based analysis has shown that decreasing the cabin air temperature target by 6 °C from the nominal target of 22 °C reduces the power consumption by 420 W or 35% while maintaining the ideal thermal comfort, in the extreme ambient conditions of −10 °C.
Moreover, the results point out that IRPs can significantly improve the driver's thermal comfort during the heat-up transient, where the combined HVAC+IRP system reaches the comfortable range in 314 s (36% faster) compared to the HVAC-only case, where the comfort settling time is 493 s. This comfort improvement comes at the expense of a 9% increase in energy consumption. However, if the target cabin air temperature is reduced by 5 °C, the energy consumption can, in fact, be reduced by 6% without affecting the thermal comfort gain.
The ongoing work includes experimental verification of the proposed control strategy, while future work will concern the development of a model predictive controller to account for HVAC and IRP actuator transient behaviours, actuator limits, and possibly predictive information about various disturbances.
Acknowledgments: It is gratefully acknowledged that the research work of the first author has been partly supported by the Croatian Science Foundation through the "Young researchers' career development project-training of new doctoral students".
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
Multi-objective optimisation of the HVAC control input allocation maps has also been carried out for the air-conditioning (A/C) mode originally considered in [23]. The optimisation framework described in Section 3 is subject to the following modifications:
• The air recirculation is set to 100%. This means that the blower fan inlet air temperature and relative humidity correspond to the cabin air conditions, i.e., Tin = Tcab and RHin = RHcab.
• The ambient air temperature is set to 40 °C and the relative humidity to 60%.
• The air distribution vents are set to the "VENT" mode, where the air is only distributed through the chest-height air vents (see [25] for details).
• The lower limit of the cabin inlet air temperature is set to 5 °C, while the upper limit is equated with the cabin air temperature to prevent heating, i.e., 5 °C < Tcab,in,R < Tcab.
• The control inputs are allocated with respect to three inputs: the cooling power demand Q̇cR, the cabin air temperature Tcab, and the cabin relative humidity RHcab, where the latter is relevant in the A/C mode due to the dehumidification effect.
The Pareto optimal solutions are shown in Figure A1 for two cabin temperatures (25 °C and 40 °C) and RHcab = 20%. These results indicate that for the lower cabin air temperature, Tcab = 25 °C (Figure A1a), the HVAC system can reach the comfortable PMV range. Figure A1c shows that a high blower air mass flow rate is preferred when aiming for maximum comfort (star symbol), while a lower blower fan mass flow rate is suitable for low total power consumption (square symbol). However, similarly as with the heat-pump mode (cf. Figure 5), the Pareto frontiers are relatively narrow, particularly for higher cooling demands, i.e., the trade-off between efficiency and comfort is rather modest.
The Pareto frontiers are even narrower for the high cabin air temperature (Tcab = 40 °C, Figure A1b), and the PMV gain when using higher air mass flows is only 0.1 points on a scale of 4. On the other hand, the power consumption is strongly influenced by the control inputs, which should, therefore, be selected for minimum power consumption (square symbols in Figure A1d). Unlike the case of the lower cabin air temperature (Figure A1c), the lowest total power consumption is achieved at higher blower fan mass flow rates.
The optimisation results are influenced by the cabin relative humidity, as the cabin air is recirculated, and water condensation occurs on low-temperature radiator (LTR). This affects the performance of HVAC system in A/C mode, especially at high relative humidity due to higher latent heat loss on LTR. For more details, the interested reader is referred to [23]. | 16,720.2 | 2021-02-23T00:00:00.000 | [
"Engineering"
] |
Voluntary Exercise Can Ameliorate Insulin Resistance by Reducing iNOS-Mediated S-Nitrosylation of Akt in the Liver in Obese Rats
Voluntary exercise can ameliorate insulin resistance. The underlying mechanism, however, remains to be elucidated. We previously demonstrated that inducible nitric oxide synthase (iNOS) in the liver plays an important role in hepatic insulin resistance in the setting of obesity. In this study, we tried to verify our hypothesis that voluntary exercise improves insulin resistance by reducing the expression of iNOS and subsequent S-nitrosylation of key molecules of glucose metabolism in the liver. Twenty-one Otsuka Long-Evans Tokushima Fatty (OLETF) rats, a model of type 2 diabetes mellitus, and 18 non-diabetic control Long-Evans Tokushima Otsuka (LETO) rats were randomly assigned to a sedentary group or exercise group subjected to voluntary wheel running for 20 weeks. The voluntary exercise significantly reduced the fasting blood glucose and HOMA-IR in the OLETF rats. In addition, the exercise decreased the amount of iNOS mRNA in the liver in the OLETF rats. Moreover, exercise reduced the levels of S-nitrosylated Akt in the liver, which were increased in the OLETF rats, to those observed in the LETO rats. These findings support our hypothesis that voluntary exercise improves insulin resistance, at least partly, by suppressing the iNOS expression and subsequent S-nitrosylation of Akt, a key molecule of the signal transduction pathways in glucose metabolism in the liver.
Introduction
With the excess consumption of food and physical inactivity, as well as advancing population aging, the incidence of lifestyle-related diseases, including metabolic syndrome, is rapidly and broadly increasing. However, studies to prevent these diseases have only recently been initiated in humans. Caloric restriction and exercise are commonly recommended for the prevention and amelioration of obesity and lifestyle-related diseases [1]. It is important to note, however, that caloric restriction and the resultant prevention of obesity do not always successfully reverse insulin resistance [2].
Chronic inflammation is a common etiology in patients with lifestyle-related diseases, such as diabetes or cardiovascular disease, or the aging process itself [3]. A new post-translational protein modification mechanism, termed S-nitrosylation, has recently been identified, in which the nitric oxide (NO) produced by conditions of chronic inflammation is covalently attached to the cysteine residue [4]. In addition, protein S-nitrosylation has recently been reported to be a regulatory component of signal transduction comparable to phosphorylation [5].
We and others have shown that inducible nitric oxide synthase (iNOS) inhibitor treatment improves insulin signaling at the level of insulin receptor substrate-1 (IRS-1) and -2 and Akt in the liver in genetically obese diabetic (ob/ob) mice [6,7]. iNOS and NO donors reversibly inactivate Akt via S-nitrosylation in vitro and in intact cells without altering the phosphorylation status at threonine 308 or serine 473 [8]. In addition, we previously reported that the overexpression of iNOS contributes to hepatic insulin resistance via the S-nitrosylation of insulin signaling molecules [9]. The reverse process of S-nitrosylation is called denitrosylation, which can occur spontaneously in the presence of the reduced form of glutathione (GSH) [10]. Moreover, we previously showed that regular exercise increases the intracellular GSH content and decreases chronic low-level inflammation in the liver in aged rats [11]. It has also been reported that treatment with a small-molecule activator of the transcription factor nuclear factor erythroid 2-related factor 2 (NRF2) denitrosylates HDAC2 by upregulating the GSH level [12]. Therefore, denitrosylation is facilitated when the intracellular GSH content is increased.
Otsuka Long-Evans Tokushima Fatty (OLETF) rats constitute a model of obesity and type 2 diabetes and are selectively bred for the null expression of the cholecystokinin-1 receptor [13,14]. Sedentary OLETF rats show insulin resistance at 10-20 weeks of age and type 2 diabetes at approximately 30 weeks of age [13,15]. However, OLETF rats subjected to voluntary exercise display suppressed body weight gain [13] and improved insulin sensitivity [16]. OLETF and Long-Evans Tokushima Otsuka (LETO) rats are well studied with respect to obesity-induced insulin resistance and the beneficial effects of regular physical activity [13,14,17].
Numerous studies have shown that regular exercise prevents obesity and insulin resistance, whereas sedentary behavior increases the risk of metabolic syndrome [18][19][20][21][22]. The potential mechanisms by which exercise prevents obesity-induced insulin resistance include increased transport of glucose into skeletal muscles [23,24]. While increased glucose transport is maintained for at least five days after exercise [24], other factors (e.g., hepatic and adipose tissue insulin resistance) beyond muscle insulin resistance may also participate in the improvements in "whole-body" insulin sensitivity induced by regular exercise. Of note, hepatic insulin resistance plays a crucial role in hyperglycemia. Whereas muscle-specific insulin receptor knockout does not induce hyperglycemia or hyperinsulinemia [25], hepatocyte-specific insulin receptor knockout mice exhibit overt hyperglycemia and hyperinsulinemia [26]. The reversal of hyperglycemia and hyperinsulinemia cannot be accounted for by the improvement in muscle insulin resistance alone. Hence, in this study, we assessed the mechanisms by which regular exercise improves insulin resistance in the liver.
We hypothesized that voluntary exercise improves insulin resistance by reducing the expression of iNOS and subsequent S-nitrosylation of key molecules of glucose metabolism in the liver. In order to evaluate this hypothesis, we tested whether voluntary exercise prevents hepatic insulin resistance in OLETF and LETO rats.
Animals
The study protocol was approved by the Juntendo University Animal Care Committee and was conducted according to the guiding principles for the Care and Use of Laboratory Animals set forth by the Physiological Society of Japan. Four-week-old Otsuka Long-Evans Tokushima Fatty (OLETF) and Long-Evans Tokushima Otsuka (LETO) rats were purchased from Japan SLC (Shizuoka, Japan). At five weeks of age, both the OLETF and LETO rats were randomly assigned to a sedentary group (OLETF-SED, LETO-SED) or a voluntary exercise group (OLETF-VE, LETO-VE) for 20 weeks. The rats were housed in an environment-controlled animal facility (24 ± 1°C, 55 ± 5% relative humidity) illuminated with a 12:12-hour light-dark cycle. The animals were provided standard rodent chow and water ad libitum. The rats in the voluntary exercise group were granted free access to a running wheel during the experimental period.
Glucose tolerance test
Glucose (1.0 g/kg BW) was intraperitoneally administered to the LETO and OLETF rats following overnight fasting at 25 weeks of age. Blood samples were collected just before and at 30, 60 and 120 minutes after glucose injection. The blood glucose levels were measured using the Glutest Neo Super device (Sanwa Kagaku Kenkyusho, Aichi, Japan). The plasma insulin concentrations were determined using an AKRIN-010S rat insulin ELISA kit (Shibayagi, Gunma, Japan).
Assessment of insulin sensitivity according to the homeostasis model assessment (HOMA) and insulin resistance (IR) index
In order to assess the whole-body insulin sensitivity in the LETO and OLETF rats, the HOMA-IR index was determined using the HOMA2 Calculator software program (downloaded from www.OCDEM.ox.ac.uk) based on the blood glucose and plasma insulin concentrations at 25 weeks of age.
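For reference, the classical HOMA1-IR approximation is HOMA-IR = fasting glucose (mmol/L) × fasting insulin (μU/mL)/22.5; the HOMA2 Calculator used here is based on the updated, computer-solved HOMA2 model rather than this closed-form expression, so the resulting values are not numerically identical.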
Evaluation of the effects of voluntary exercise on hepatic insulin signaling
To assess hepatic insulin sensitivity in OLETF rats, we injected insulin via the portal vein in both the sedentary and voluntary exercise groups. At five weeks of age, the OLETF rats were randomly assigned to a sedentary or voluntary exercise group. After 10 weeks of voluntary exercise or sedentary housing, the rats were fasted overnight at 15 weeks of age. Under anesthesia, insulin (0.5 units/kg BW, Humulin R; Eli Lilly, Indianapolis, IN) or saline was injected via the portal vein. Five minutes after the injection, the liver was removed and snap-frozen in liquid nitrogen.
Immunoblotting
The liver samples were homogenized as previously described [9], with minor modifications. Briefly, the tissues were homogenized in ice-cold homogenization buffer A (50 mM HEPES, pH 8.0, 150 mM NaCl, 2 mM EDTA, 2.5% lithium dodecyl sulfate, 2% CHAPS, 10% glycerol, 10 mM sodium fluoride, 2 mM sodium vanadate, 1 mM PMSF, 10 mM sodium pyrophosphate, 1 mM DTT, protease inhibitor cocktail). Following incubation at 4°C for 30 minutes, the homogenized samples were centrifuged at 13,000 g for 10 minutes at 4°C. Immunoblotting was subsequently performed as previously described [27]. ECL Select reagent (GE Healthcare) was then used to visualize the blots, and bands of interest were scanned using the LAS 1000 imager (Fujifilm, Japan) and quantified using the NIH Image 1.62 software program (NTIS, Springfield, VA).
Total RNA isolation and quantitative RT-PCR
Total RNA was isolated using an RNeasy Mini kit (Qiagen, Valencia, CA), and first-strand cDNA was synthesized from 1 μg of total RNA using a High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Carlsbad, CA). The real-time RT-PCR analyses were performed as previously described [28] with 10 ng of cDNA and TaqMan probes (Applied Biosystems) for inducible nitric oxide synthase (iNos, also known as Nos2), sterol regulatory element binding transcription factor 1 (Srebp1), stearoyl-Coenzyme A desaturase 1 (Scd1), acetyl-CoA carboxylase alpha (Acc), fatty acid synthase (Fas), glycerol-3-phosphate acyltransferase 2 (Gpat2) and 18S ribosomal RNA using a Thermal Cycler Dice (Takara, Osaka, Japan). The gene expression of iNOS was normalized to that of 18S ribosomal RNA.
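Gene expression was normalized to 18S ribosomal RNA. One common way to perform such normalization for TaqMan data is the comparative Ct (2^-ΔΔCt) method, sketched below; the study does not state which quantification algorithm was applied, and the Ct values in the example are hypothetical.

```python
def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_cal: float, ct_ref_cal: float) -> float:
    """Comparative Ct (2^-ddCt) relative quantification against a calibrator group."""
    dd_ct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: iNOS vs 18S rRNA in one sample, calibrated to a control group
print(relative_expression(ct_target=28.0, ct_ref=10.0, ct_target_cal=30.0, ct_ref_cal=10.0))  # 4.0
```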
Measurement of the GSH/GSSG ratio
The reduced glutathione (GSH) to oxidized glutathione (GSSG) ratio was measured using the Bioxytech GSH/GSSG Assay kit (Percipio Biosciences, Foster, CA) according to the manufacturer's instructions. Briefly, the liver tissues were homogenized in 15-fold volumes of ice-cold 5% metaphosphoric acid (MPA), and the homogenates were centrifuged at 10,000 g for 20 minutes at 4°C. For the GSH analysis, the MPA extracts were diluted 60-fold in assay buffer (sodium phosphate with EDTA); the final dilution of the samples was 1/488. For the GSSG analysis, M2VP was mixed with the MPA extracts to inhibit the oxidation of GSH to GSSG, and the samples were neutralized by the addition of 5 μl of TEA and subsequently diluted 4-fold in 5% MPA and 15-fold in assay buffer; the final dilution of the samples was 1/60. The changes in absorbance at 412 nm were recorded for three minutes, and the GSH/GSSG ratio was calculated based on the formula described in the manufacturer's instructions.
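The kit quantifies GSH and GSSG from the rate of change in absorbance at 412 nm against standard curves, and the ratio is then typically reported after correcting the total GSH reading for the two GSH equivalents contained in each GSSG molecule. A minimal sketch under that assumption is shown below; the exact formula in the manufacturer's instructions may differ, and the concentrations are illustrative.

```python
def gsh_gssg_ratio(gsh_total_uM: float, gssg_uM: float) -> float:
    """GSH/GSSG ratio with total GSH corrected for GSSG-bound glutathione."""
    free_gsh = gsh_total_uM - 2.0 * gssg_uM  # each GSSG contains two GSH equivalents
    return free_gsh / gssg_uM

# Illustrative, dilution-corrected concentrations (not data from this study)
print(gsh_gssg_ratio(gsh_total_uM=400.0, gssg_uM=10.0))  # 38.0
```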
Measurement of the liver TG content
Lipid extraction from the liver tissues was performed as previously described [29]. The triglyceride content in the lipid extracts was measured using the Triglyceride E-test Wako kit (WAKO) according to the manufacturer's instructions.
Measurement of the plasma leptin concentrations
The plasma leptin concentrations were measured using the AKRIN-010S rat insulin ELISA kit (Shibayagi, Gunma, Japan) according to the manufacturer's instructions.
Statistical analysis
The data were compared using an unpaired t-test or one-way or two-way analysis of variance (ANOVA) followed by Scheffé's multiple comparison test. A P value of <0.05 was considered statistically significant. All values are expressed as the mean ± SEM.
Exercise improved insulin sensitivity in the OLETF rats
We first confirmed that exercise improves insulin sensitivity in the OLETF rats under our experimental conditions. The OLETF and LETO rats were allowed to exercise voluntarily on a wheel placed in their cages for 20 weeks. Significant differences were found in whole-body weight and epididymal fat weight between the OLETF and LETO rats at 25 weeks of age after overnight fasting under sedentary conditions (S1A to S1C Fig). Voluntary exercise significantly increased food intake normalized to body weight in both the OLETF and LETO rats compared with the sedentary condition (S1D Fig). The total wheel-running distance was 596.5 ± 39.1 km (mean ± SEM) in the OLETF rats and 709.1 ± 91.6 km in the LETO rats. Although the running distance appeared to be greater in the LETO rats than in the OLETF rats, the difference was not significant (S1E Fig).
The blood glucose levels were significantly greater in the OLETF rats than in the LETO rats. Exercise decreased the fasting blood glucose levels in the OLETF rats to the values noted in the normal control rats (Fig 1A). Exercise also tended to decrease the fasting plasma insulin levels in the OLETF rats, but the difference was not statistically significant (Fig 1B). The homeostasis model assessment insulin resistance (HOMA2-IR) index values revealed that exercise improved insulin sensitivity in the OLETF rats by 25% (Fig 1C). Exercise did not change the fasting blood glucose or insulin levels in the LETO rats. In addition, the intraperitoneal glucose tolerance tests (ipGTTs) confirmed that glucose tolerance was impaired in the OLETF rats and subsequently normalized by exercise (S2 Fig).
Among the factors that potentially affect insulin sensitivity, body weight was significantly greater in the OLETF rats than in the LETO rats, and exercise decreased body weight in both the OLETF and LETO rats. The amount of epididymal fat was also significantly greater in the OLETF rats than in the LETO rats, and exercise decreased epididymal fat weight to similar levels in the OLETF and LETO rats. Finally, exercise reduced food intake in the OLETF rats and increased food intake in the LETO rats. The running distance was not significantly different between the OLETF and LETO rats.
Exercise prevented the expression of iNOS and subsequent S-nitrosylation of Akt in the OLETF rats
The expression of iNOS mRNA was significantly increased in the liver of the sedentary OLETF rats compared with that observed in the sedentary LETO rats (Fig 2A). The increased iNOS expression in the OLETF rats was significantly suppressed by voluntary exercise (Fig 2A). The S-nitrosylation of Akt was markedly increased in the liver of the OLETF rats compared with that seen in the LETO rats (Fig 2B). In addition, exercise significantly decreased the degree of S-nitrosylated Akt in the liver of the OLETF rats, while it did not alter Akt protein abundance (Fig 2B). Similarly, S-nitrosylation of IRS-1 was also increased in the liver of OLETF rats under sedentary conditions (Fig 2C), consistent with previous studies in the skeletal muscle of obese, diabetic mice [7,30,31]. Total IRS-1 expression was not altered by obesity or voluntary exercise (Fig 2C). GSH facilitates denitrosylation, and an increase in the GSH level would therefore be expected to reduce the S-nitrosylation of Akt. However, contrary to our expectation, voluntary exercise did not increase the GSH content or the GSH/GSSG ratio in the liver of either the LETO or OLETF rats (Fig 3A and 3C). Meanwhile, the GSSG content, which reflects oxidative stress, was significantly greater in the OLETF rats under sedentary conditions and was significantly decreased by voluntary exercise (Fig 3B). Concomitant oxidative stress enhances protein S-nitrosylation [32]. It is conceivable, therefore, that iNOS induction and oxidative stress may contribute in concert to the increased Akt S-nitrosylation in the liver of OLETF rats.
Exercise reduced the liver triglyceride content and decreased the phosphorylation of c-jun N-terminal kinase (JNK) and insulin receptor substrate-1 (IRS-1) in the OLETF rats
In order to assess other mechanisms involved in the pathogenesis of hepatic insulin resistance, we evaluated the triglyceride content and the expression of molecules that participate in lipogenesis in the liver, such as sterol-regulatory element binding protein-1 (Srebp-1), stearoyl coenzyme A desaturase-1 (Scd-1), acetyl-CoA carboxylase (Acc), fatty acid synthase (Fas) and glycerol-3-phosphate acyltransferase 2 (Gpat2).
Notably, the triglyceride content was significantly greater in the liver of the OLETF rats than in that of the LETO rats (Fig 4A). Exercise decreased the TG content in the liver of the OLETF, but not LETO, rats (Fig 4A). Concordantly, the amounts of mRNA for Srebp-1 and Scd-1 were greater in the liver of the OLETF rats than in that of the LETO rats, and exercise decreased these mRNA levels in the OLETF, but not LETO, rats (Fig 4B and 4C). In contrast, the amounts of mRNA for Acc, Fas and Gpat2, which are partly regulated by Srebp-1 and play important roles in hepatic steatosis, did not differ between the OLETF and LETO rats (S3 Fig). Exercise did not affect the amounts of mRNA for these molecules in either the OLETF or LETO rats. Hepatic steatosis is associated with the activation of JNK, which mediates inflammatory signals [33,34]. Western blot analysis revealed that the amounts of both total and phosphorylated JNK were greater in the OLETF rats than in the LETO rats under sedentary conditions (Fig 4D and 4E). Exercise consequently suppressed the phosphorylation of JNK, while slightly decreasing the amount of total JNK protein (Fig 4F).
Activation of the JNK pathway induces insulin resistance, and JNK phosphorylates IRS-1 at serine 307 [35][36][37]. We therefore assessed the phosphorylation status of IRS-1. Phosphorylation of IRS-1 at serine 307 was significantly increased in the liver of OLETF rats under sedentary conditions, which paralleled the increased phosphorylation (activation) of JNK. Voluntary exercise significantly decreased the phosphorylation of serine 307 in IRS-1 as well as JNK phosphorylation (Fig 4F and 4H), while the IRS-1 protein abundance was not altered (Fig 4G).
Improved hepatic insulin signaling in OLETF rats after voluntary exercise
Next, we examined the effects of voluntary exercise on insulin signaling. Insulin-stimulated phosphorylation of Akt at threonine 308 and serine 473 was significantly increased in the liver of the voluntary exercise OLETF rats compared with the sedentary group (Fig 5). The total protein abundance of Akt did not differ between the voluntary exercise and sedentary groups (Fig 5B). Basal (exogenous insulin-naïve) Akt phosphorylation appeared to be decreased by voluntary exercise in the OLETF rats, but there was no statistically significant difference between the voluntary exercise and sedentary OLETF groups (Fig 5C and 5D). Body weight, blood glucose and plasma insulin levels after overnight fasting were significantly decreased in the voluntary exercise group compared with the sedentary group in the OLETF rats (S4A to S4C Fig).
Discussion
In this report, we showed that voluntary exercise ameliorates insulin resistance, at least in part, by reducing the iNOS expression and the level of S-nitrosylated Akt in the liver. Voluntary exercise also decreases oxidative stress, hepatic steatosis and JNK activation, all of which contribute to suppressing chronic low-level inflammation in the liver and improving systemic insulin resistance.
To the best of our knowledge, this is the first report to show the mechanisms underlying improvements in hepatic insulin resistance induced by voluntary exercise. The favorable effects of exercise on glucose metabolism have been reported in rodents and humans, and, in most cases, these effects are attributed to reductions in insulin resistance in the skeletal muscle [38][39][40]. We previously showed that the forced expression of iNOS in the liver is sufficient to develop systemic insulin resistance and hyperglycemia in mice [9]. The excessive NO production by iNOS, along with concomitant oxidative stress, induces S-nitrosylation of Akt [8], a key player in the metabolic actions of insulin, including insulin-stimulated glucose uptake [41]. We and others have previously reported that S-nitrosylation inactivates Akt [7,32,42], which in turn leads to insulin resistance in muscle and the liver. More specifically, in the liver-specific iNOS transgenic mice, increased Akt S-nitrosylation was associated with impaired insulin signaling and hyperglycemia [9]. In OLETF rats, the iNOS expression is enhanced and a key molecule, Akt, is S-nitrosylated, indicating that hepatic insulin resistance resulting from iNOS-induced S-nitrosylation plays a role in the onset of systemic insulin resistance in OLETF rats. Twenty weeks of voluntary exercise normalizes insulin resistance, the iNOS expression and the S-nitrosylation of Akt simultaneously, thus supporting the idea that voluntary exercise ameliorates insulin resistance, at least partly, by reducing the iNOS expression and reversing Akt S-nitrosylation in the liver.
In addition to Akt S-nitrosylation, S-nitrosylation of IRS-1 was increased in the sedentary OLETF rats, similar to that in the liver-specific iNOS transgenic mice [9]. It is possible, therefore, that S-nitrosylation of IRS-1 works in concert with S-nitrosylation of Akt to contribute to insulin resistance in sedentary OLETF rats.
Moreover, our previous study showed that increased iNOS expression is sufficient to cause increases in JNK phosphorylation (activity) and the triglyceride content in the liver [9].
Together, there appears to be a vicious cycle involving S-nitrosylation and other mechanisms of insulin resistance, such as hepatic steatosis and activation of JNK. Fat accumulation in the liver causes insulin resistance [43,44] and induces inflammation in the liver, which in turn stimulates the expression of iNOS and increases S-nitrosylation [6,45,46]. On the other hand, iNOS induction and subsequent insulin resistance result in fat accumulation in the liver by activating the JNK pathway [9,34,[47][48][49]. The activation of JNK in the liver enhances inflammation, iNOS production and hepatic insulin resistance [50,51], while the expression of iNOS induces JNK activation in the liver [9]. In fact, in the present study, exercise decreased S-nitrosylation as well as the levels of both TG and activated JNK. JNK activation plays an important role in the development of obesity-induced insulin resistance [50]. In addition, previous studies have reported that phosphorylation of IRS-1 at serine 307, a JNK phosphorylation site, is increased in obesity-induced insulin resistance [35][36][37][52]. Similarly, we found that phosphorylation of serine 307 in IRS-1 was increased in sedentary OLETF rats relative to LETO rats. Importantly, voluntary exercise reduced the phosphorylation of IRS-1 at serine 307 in OLETF rats to the levels observed in LETO rats (Fig 4H). From a mechanistic point of view, however, controversial results have been reported as to whether phosphorylation of serine 307 in IRS-1 mediates insulin resistance [52,53]. Regardless, our data suggest that iNOS-associated JNK activation in sedentary OLETF rats, and its amelioration by voluntary exercise, may play a role in insulin resistance and its improvement.
Our previous study showed that the expression of iNOS in the liver is sufficient to induce systemic insulin resistance [9], while the inhibition of iNOS blocks this vicious cycle and improves insulin resistance [8,27]. In OLETF rats, voluntary exercise significantly improved insulin-stimulated Akt phosphorylation compared with the sedentary condition. These effects of voluntary exercise were associated with a suppressed inflammatory response in the liver, such as decreased iNOS mRNA levels. These results are consistent with our previous reports [8,9]. Our findings, together with the previous studies conducted by our group and others, strongly suggest that iNOS plays an important role in exercise-induced improvements in insulin resistance.
The relative importance of S-nitrosylation of Akt in the liver relative to other proposed mechanisms underlying the exercise-induced improvement of systemic insulin resistance remains to be elucidated. Exercise improves insulin resistance in the skeletal muscle via various mechanisms, including the mechanical stretch-induced activation of AMP-activated protein kinase [54], changes in energy metabolism [55], decreases in the iNOS expression and S-nitrosylation [56,57], and reductions in the fat content of the muscle [58]. Exercise also decreases food intake and suppresses obesity in OLETF rats [59,60]. Moreover, exercise suppresses inflammation in the liver as well as in other parts of the body in OLETF rats [61][62][63][64]. It is therefore likely that the exercise-induced changes in S-nitrosylation and the iNOS expression observed in the liver contribute to improving insulin resistance in addition to these other mechanisms.
In conclusion, voluntary exercise induces a cascade of events, including decreases in the triglyceride content, the iNOS expression, the S-nitrosylation of Akt and IRS-1, and the phosphorylation (activation) of JNK, leading to improved insulin sensitivity in the liver of OLETF rats.
Supporting Information S1 Fig. Regular exercise prevented obesity in the OLETF rats. Five-week-old male LETO and OLETF rats were randomly assigned to a sedentary (SED) or voluntary exercise (VE) group, and their body weights and food intake were recorded for 20 weeks (A). Starting at 8 weeks of age, the body weights of the OLETF-SED rats were significantly greater than those of the LETO-SED rats. *, p<0.05 OLETF-SED versus LETO-SED. (B, C) The body weight (B) and epididymal fat weight on both sides (C) were significantly greater in the OLETF-SED rats than in the LETO rats at 25 weeks of age, which was reversed by voluntary exercise. (D) Average food intake normalized to body weight did not significantly differ between the LETO and OLETF rats. Voluntary exercise significantly increased food intake normalized to body weight in both LETO and OLETF rats relative to the respective sedentary counterparts. (E) Average daily wheel running distance was not significantly different between the LETO-VE and OLETF-VE rats. All values are presented as the mean ± SEM. n = 9-11 per group, *, p<0.05; **, p<0. | 5,714.8 | 2015-07-14T00:00:00.000 | [
"Biology"
] |
Differences in Deformation Behaviors Caused by Microband-Induced Plasticity of [0 0 1]- and [1 1 1]-Oriented Austenite Micro-Pillars
A uniaxial compression test and scanning/transmission electron microscopy observations were performed to investigate the differences in mechanical behavior and deformed microstructure between focused ion beam-manufactured [1 1 1]- and [0 0 1]-oriented austenite micro-pillars with 5 µm diameter from duplex stainless steel. After yielding, the strain hardening of the micro-pillars increased as a result of the formation of microbands, namely microband-induced plasticity (MBIP): pronounced strain hardening was observed in the [0 0 1]-oriented pillar due to the activation of a secondary slip system, while only slight strain hardening was observed in the [1 1 1] orientation because of the refinement of the microband. Furthermore, the trend of the calculated strain hardening rates of both the [1 1 1]- and [0 0 1]-oriented micro-pillars was in good agreement with the experimental data. This study shows that MBIP can be helpful for enhancing the mechanical properties of steels.
Introduction
Austenite-ferrite duplex stainless steels (DSS) are widely used for pressure vessels and piping in thermal power, nuclear power and other industries [1,2]. As a dual-phase steel, DSS takes advantage of the beneficial properties of its constituent phases, exhibiting higher strength than pure austenitic or ferritic stainless steels and not less than 15% elongation, i.e., excellent work hardening performance. Compared with the ferrite phase, the austenite phase undertakes most of the plastic deformation during straining, which mainly determines the overall ductility of DSS [3,4]. Hence, investigating the plasticity of the local austenite phase is helpful for understanding and improving the mechanical properties of the whole DSS steel. Depending on the value of the stacking fault energy (SFE), the plastic deformation mechanisms of the austenite phase include austenite-to-martensite transformation-induced plasticity (TRIP, SFE < 20 mJ/m²) [5][6][7], twinning-induced plasticity (TWIP, 20 mJ/m² < SFE < 50 mJ/m²) [8][9][10][11] and microband-induced plasticity (MBIP, SFE > 50 mJ/m²) [12][13][14][15]. Many studies have reported the enhancing effect of the formation and evolution of strain-induced martensite and mechanical twinning on plasticity and ductility under uniaxial stress using macro-tensile tests [16][17][18][19][20][21] and micro-pillar compression tests [22][23][24][25]. I. V. Kireeva et al. [20] pointed out that strain hardening reached its maximum when twinning developed in two systems and decreased in the transition period when twinning developed predominantly in one system simultaneously with slip. Soares et al. [21] examined the strain hardening behavior and microstructural evolution of AISI 304 steel by uniaxial tensile testing and found complex strain hardening behavior due to strain-induced martensitic transformation. Choi et al. [23] conducted compression tests on micro-pillars fabricated from an austenitic Fe-Mn-C twinning-induced plasticity steel and found that deformation twinning induced higher flow stresses while dislocation glide produced more stable work hardening behavior. Compared with TRIP and TWIP steels, MBIP steel possesses the same excellent combination of strength and ductility, but more stable and continuous strain hardening behavior during straining, a more homogeneous microstructure and better strain coordination after deformation. There have been some studies on the strain hardening behavior of nickel compression pillars caused by dislocation interactions [26][27][28], but little research has been conducted to investigate the contribution of microband formation to the work hardening behavior of the austenite phase at the micro-scale. Hence, in this paper, the plasticity caused by microband evolution was studied by compression tests on two austenite micro-pillars with different orientations.
Materials and Methods
The studied austenite crystals were taken from a commercial austenite-ferrite duplex stainless steel obtained from Baosteel Stainless Steel Co., Ltd (Shanghai, China). The chemical composition and stacking fault energy of the tested austenite phase are shown in Table 1. According to the calculation based on the results of Dai et al. [29], the SFE of the austenite phase at room temperature was estimated to be 58-73 mJ/m². More information was described in detail previously [30]. The as-received material was solution-annealed at 1050 °C for 12 h followed by water quenching to obtain recrystallized coarse grains. Crystallographic orientations of the austenite grains were characterized by Electron Backscatter Diffraction analysis (EBSD, Oxford Instruments). Figure 1 shows the EBSD inverse pole figure along the Z axis (IPF-Z) of the austenite and ferrite grains in the studied DSS; the black and white circles in Figure 1 mark the austenite grains, close to the [1 1 1] and [0 0 1] orientations, that were selected for micro-pillar fabrication. The orientation of each grain, described by the EBSD Euler angles (ψ, θ, ϕ), corresponds to the rotation matrix

$$
g(\psi,\theta,\varphi)=
\begin{pmatrix}
\cos\psi\cos\varphi-\sin\psi\sin\varphi\cos\theta & \sin\psi\cos\varphi+\cos\psi\sin\varphi\cos\theta & \sin\varphi\sin\theta\\
-\cos\psi\sin\varphi-\sin\psi\cos\varphi\cos\theta & -\sin\psi\sin\varphi+\cos\psi\cos\varphi\cos\theta & \cos\varphi\sin\theta\\
\sin\psi\sin\theta & -\cos\psi\sin\theta & \cos\theta
\end{pmatrix}.
$$

Figure 3 shows the SEM microtopography of the compressed austenite micro-pillars. After dramatic deformation with strain larger than 30%, distinct glide traces of inclined slip planes were visible and concentrated on the top and middle of both the [0 0 1]- and [1 1 1]-oriented deformed micro-pillar surfaces. However, the appearance of slip planes on the surface was also crystallographic orientation-dependent. As presented in Figure 3a, in the [1 1 1]-oriented pillar, surface steps were concentrated on one slip plane exclusively and uniformly, i.e., on the primary slip system. The repeated gliding on the same plane led to obvious lattice torsion. The angle between the activated slip planes and the normal direction of the experimental pillar was measured to be 63.9°. By contrast, the surface of the [0 0 1]-oriented deformed micro-pillar presented two sets of glide traces on different slip systems, marked by white and black dashed lines, as shown in Figure 3b, which indicated that, besides the primary slip system, a second set of slip planes was activated. Because of their relatively homogeneous distribution and smaller spacing, the slip traces marked by red dashed lines were considered as the primary slip system, which had a deviation angle of 58.5°. Several slip lines with different angles were found near the top of the [1 1 1]-oriented micro-pillar, indicating that another slip system was activated there, which could be explained by the inhomogeneous distribution of compression stress near the top surface of the micro-pillar. The same phenomenon can be observed in another study [31]. The effect of this slipping on the strain hardening behavior was negligible in this study.
Table 2 lists the Schmid factors and deviation angles of all potential slip systems under the theoretical and experimental loading conditions. The ideal [1 1 1]-oriented austenite micro-pillar has the same tendency for slip on six slip systems. The practical direction of the studied [1 1 1]-oriented austenite pillar was closer to the [12 14 17] orientation, and the deviation angle (8°), smaller than 10°, was acceptable. Slip trace analysis indicated that the (−1 1 1)[1 0 1] slip system had the highest Schmid factor of 0.36, i.e., it proved to be the primary activated slip system. The angle between the normal of the slip plane and the compression axis should be 63.9° according to the geometric relationship. The measured angle was 61.8°, a very small difference that could be caused by measurement issues, and the deviation was acceptable.
The tested [0 0 1]-oriented micro-pillar exhibited two obvious slip systems during the deformation. The actual compressive axis, along the [1 0 18] direction, favored the (−1 1 1)[1 0 1] and (1 1 −1)[0 1 1] slip systems with Schmid factors of 0.41 and 0.38, respectively. The inclination angle of the above two slip systems was observed to be 58.5°, in good agreement with the theoretical angle of 57°. The activation of the second slip system was attributed to the increase of the resolved shear stress on the second slip plane with increasing strain. Figure 4 illustrates cross-sectional TEM bright-field images and the selected-area diffraction patterns of the [1 1 1]- and [0 0 1]-oriented compressed micro-pillars. From both microstructure characterization results one can note that clear slip traces could be observed, and no transformation-induced twinning or martensite phase could be found, owing to the high stacking fault energy of the tested austenite crystals. For the [1 1 1] orientation, the slip traces were parallel within the exclusive glide plane, and their distribution was uniform and dense on the top of the micro-pillar, as shown in Figure 4a. Due to the crystal lattice twist, the slip bands exhibited arc-shaped traces, i.e., had a radian. Compared with the arc-shaped traces of the [1 1 1]-oriented pillar, the slip traces of the compressed [0 0 1] orientation were straight lines. Two slip planes with an intersection angle of 69.57° could be observed, indicating that the secondary slip system was activated. Unlike the uniformly curved slip traces in the [1 1 1]-oriented micro-pillar, some of those in the [0 0 1] orientation were inflected, which was considered to be caused by jogged dislocations, marked by white arrows in Figure 4b. The resulting dislocation steps could also be observed on the surface of the deformed micro-pillar characterized by SEM. Some short dislocation fragments could be observed near the bottom of the micro-pillar, marked by yellow arrows in Figure 4b, believed to be independent of the deformation, i.e., present in the initial dislocation structure.
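The Schmid factors quoted above for the two pillars follow directly from the measured compression axes and the FCC {1 1 1}<1 1 0> slip geometry. The short sketch below reproduces the quoted values as a consistency check; it is an illustration, not the analysis code used for Table 2.

```python
import numpy as np

def schmid_factor(axis, plane_normal, slip_direction):
    """m = |cos(phi)| * |cos(lambda)| for a loading axis, slip-plane normal and slip direction."""
    a, n, d = (np.asarray(v, dtype=float) for v in (axis, plane_normal, slip_direction))
    cos_phi = (a @ n) / (np.linalg.norm(a) * np.linalg.norm(n))
    cos_lam = (a @ d) / (np.linalg.norm(a) * np.linalg.norm(d))
    return abs(cos_phi) * abs(cos_lam)

# Slip systems and measured compression axes quoted in the text
print(round(schmid_factor([12, 14, 17], [-1, 1, 1], [1, 0, 1]), 2))  # ~0.36, near-[1 1 1] pillar
print(round(schmid_factor([1, 0, 18], [-1, 1, 1], [1, 0, 1]), 2))    # ~0.41, near-[0 0 1] pillar
print(round(schmid_factor([1, 0, 18], [1, 1, -1], [0, 1, 1]), 2))    # ~0.38, secondary system
```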
Elastic Deformation
From the above we can see that, for single-crystal micro-pillars, the mechanical responses depend on the crystallographic orientation. During the elastic deformation stage, the [1 1 1]-oriented micro-pillar possessed a higher Young's modulus, 273.3 GPa, than the [0 0 1] orientation, at 197.2 GPa. The increase in E111 can be explained by the orientation (anisotropy) factor Ahkl [32]; A111 and A001 take the extreme values of 1/3 and 0, respectively, i.e., the largest possible discrepancy. The relationship between Ehkl and Ahkl involves S11, S12 and S44, the three independent elastic compliances of a cubic system, taken as 10.7 × 10⁻³, 4.25 × 10⁻³ and 8.6 × 10⁻³ GPa⁻¹, respectively [33]. Hence, E111 should be much higher than E001, which is in accordance with our experimental result.
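For reference, the orientation factor and its relation to the direction-dependent Young's modulus of a cubic crystal are commonly written as below. These are standard textbook forms that are consistent with the values quoted above (they give A111 = 1/3 and A001 = 0); the exact expressions used in [32,33] are assumed to be equivalent, and S12 is conventionally taken as negative for cubic metals.

$$
A_{hkl}=\frac{h^{2}k^{2}+k^{2}l^{2}+l^{2}h^{2}}{\left(h^{2}+k^{2}+l^{2}\right)^{2}},
\qquad
\frac{1}{E_{hkl}}=S_{11}-2\left(S_{11}-S_{12}-\tfrac{1}{2}S_{44}\right)A_{hkl}.
$$

Because S11 − S12 − S44/2 > 0 for austenitic steels, E111 is the stiffest direction and E001 the most compliant, in line with the measured moduli.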
Plastic Deformation
From Figure 2, the yield stresses of the [0 0 1]- and [1 1 1]-oriented austenite micro-pillars were 426.9 MPa and 492.5 MPa, respectively. Combined with the Schmid factors of the primary slip systems verified by the deformed microstructure observations, the critical resolved shear stresses (CRSS) of the [0 0 1]- and [1 1 1]-oriented micro-pillars were calculated to be 183.6 MPa and 173.3 MPa, respectively. The average value of the CRSS was 178.5 MPa, and the deviation was about 5%, which is negligible here. Other research has also shown that the CRSS is a property of the material and is independent of grain orientation [9,23]. After yielding, the engineering stress-strain curves of both the [0 0 1]- and [1 1 1]-oriented micro-pillars exhibited multiple plastic deformation stages with different strain hardening rates. Compared with the slight strain hardening of the [1 1 1]-oriented micro-pillar from stage III, the deformation response of the [0 0 1] orientation showed a climb in stage IV. Based on the SEM and TEM observations of the tested austenite micro-pillars, the deformed microstructure was characterized by pronounced dislocation glide on the activated slip systems, which led to the formation of crystallographic slip bands, i.e., microband-induced plasticity, MBIP. The tested [1 1 1]-oriented micro-pillar exhibited dislocation glide traces with the same direction, while the [0 0 1] orientation revealed a secondary slip system. Hence, the activation of the secondary slip system induced higher flow stresses.
Flow Stress Expression
The total flow stress (σt) after yielding can be expressed by:

$$\sigma_t = \sigma_y + \sigma(\varepsilon), \tag{4}$$

where σy is the yield stress and σ(ε) is the plasticity-driven strain hardening contribution to the flow stress. The first term, σy, representing the activation of perfect dislocations in the present tests, can be calculated by the expression given in [8], where α0 is a constant describing the character of the dislocations, considered as 0.5 for pure edge dislocations here [8], bp is the magnitude of the Burgers vector for perfect dislocations, considered as 0.256 nm for the face-centered cubic lattice [8,13], and G is the shear modulus, taken as 105.1 GPa and 75.8 GPa for the [0 0 1]-oriented and [1 1 1]-oriented micro-pillars, respectively. Additionally, Λ, mentioned before, is defined as the "mean free path", i.e., the average distance dislocations can travel before they are stored or annihilated [34]. The values of Λ were calculated to be 319.7 nm and 206.1 nm for the [1 1 1]- and [0 0 1]-oriented micro-pillars, respectively. For steels whose plasticity is mediated purely by dislocation-dislocation interactions, the change in flow stress can be expressed through the evolution of the dislocation density, ρ [35]:

$$\sigma(\varepsilon) = \alpha G b \sqrt{\rho}, \tag{6}$$

where α is a factor related to the material, generally ranging from 0.3 to 0.5 [35]. Differentiating both sides of Equation (6) with respect to the strain, the strain hardening rate, Θ = dσ/dε, can be expressed as:

$$\Theta = \frac{d\sigma}{d\varepsilon} = \frac{\alpha G b}{2\sqrt{\rho}}\,\frac{d\rho}{d\varepsilon}, \tag{7}$$

where the rate of dislocation accumulation with strain can be formally expressed as dρ/dε = 1/(bΛ). Hence, the strain hardening rate itself can be expressed as:

$$\Theta = \frac{\alpha G}{2}\,\frac{D}{\Lambda}, \tag{8}$$

where D is the mean dislocation spacing, expressed as D = 1/√ρ. According to Equation (8), the strain hardening rate of the studied material during plastic deformation is proportional to the ratio of the mean dislocation spacing to the mean free path. The trends of the experimental and calculated strain hardening rates of the [0 0 1]- and [1 1 1]-oriented micro-pillars are presented in Figure 5a,b, respectively. For better comparison, all experimental and calculated strain hardening rates were normalized by the strain hardening rate at the yield point. The calculated results described the trend of the strain hardening rate well. Micro-pillar compression tests combined with SEM/TEM observations were carried out in this work to describe the differences in mechanical responses and properties of [0 0 1]- and [1 1 1]-oriented single-crystal austenite micro-pillars with a high SFE value. A further task will be to explore the deformation behaviors of ferrite grains and the effect of grain boundaries on the mechanical properties by the same experimental method. The research results can provide local deformation-mechanism support for the overall mechanical description and safety prediction of duplex stainless steel components.
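A minimal numerical sketch of Equations (6)-(8) is given below, using the mean free paths and shear moduli quoted above; the dislocation interaction constant α and the initial dislocation density are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# b, G and Lambda follow the text; alpha and rho0 are illustrative assumptions
alpha = 0.4                                       # dislocation interaction constant (0.3-0.5)
b = 0.256e-9                                      # Burgers vector, m
G = {"[1 1 1]": 75.8e9, "[0 0 1]": 105.1e9}       # shear modulus, Pa
Lam = {"[1 1 1]": 319.7e-9, "[0 0 1]": 206.1e-9}  # mean free path, m
rho0 = 1e13                                       # assumed initial dislocation density, 1/m^2

strain = np.linspace(0.0, 0.3, 7)
for ori in ("[1 1 1]", "[0 0 1]"):
    rho = rho0 + strain / (b * Lam[ori])          # d(rho)/d(eps) = 1/(b*Lambda)
    D = 1.0 / np.sqrt(rho)                        # mean dislocation spacing
    theta = alpha * G[ori] * D / (2.0 * Lam[ori])  # Eq. (8): strain hardening rate
    print(ori, np.round(theta / theta[0], 3))     # normalized to the value at yield
```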
Conclusions
The deformation behaviors and microstructure evolution of austenite micro-pillars were investigated by compression tests and SEM and TEM observations. The main conclusions are:
1. Deformation microstructures of both micro-pillars were characterized by pronounced planar slip.
2. The slip-band structure undergoes refinement during straining, resulting in the observed strain hardening behavior.
3. Higher flow stress and unstable strain hardening behavior in the [0 0 1]-oriented austenite micro-pillars were attributed to the easy activation of secondary slip systems.

Data Availability Statement: Not applicable.
Conflicts of Interest:
The authors declare no conflict of interest. | 4,761.4 | 2021-07-24T00:00:00.000 | [
"Materials Science"
] |
Estimating the Vaccine Effectiveness Against Serotype 3 for the 13-Valent Pneumococcal Conjugate Vaccine: A Dynamic Modeling Approach
Background: The 13-valent pneumococcal conjugate vaccine (PCV13) is the only PCV licensed to protect against serotype 3 in children. However, conflicting estimates exist of PCV13's direct and indirect vaccine effectiveness (VE) against serotype 3. Objective: Our study examined the VE of PCV13 against serotype 3 by using different assumptions for direct and indirect PCV13 VE to model trends in serotype 3 invasive pneumococcal disease (IPD) and comparing these with observed data from the United Kingdom (UK). Methods: A dynamic transmission model of the spread of pneumococcal carriage and development of IPD was used to fit pre-PCV13 modeled IPD incidence to observed data. To allow for comparison across scenarios, post-PCV13 modeled IPD incidence was fit to observed data using assumptions for three different scenarios: (scenario 1) serotype 3 as a nonvaccine serotype, (scenario 2) a VE against serotype 3 IPD of 63.5% based on a recent meta-analysis, and (scenario 3) a model-estimated VE against serotype 3. Results: Post-PCV13 introduction, modeled 2017 and average annual serotype 3 IPD incidence were within 20% and 59% of observed values for scenarios 2 and 3, respectively, but deviated by >100% for scenario 1. For adults aged ≥65 years, modeled 2017 IPD incidence in scenario 1 differed from observed data by 16%, versus roughly 8% in scenarios 2 and 3. Conclusions: Observed data do not support a scenario of no serotype 3 VE, but rather a combination of direct protection among vaccinated children and a lower level of indirect protection among older adults. Policymakers should consider transmission dynamics when examining VE against covered serotypes.
Introduction
In 2000, a 7-valent pneumococcal conjugate vaccine (PCV7) was licensed to target pneumococcal disease due to the most common circulating serotypes at the time (4, 6B, 9V, 14, 18C, 19F, and 23F). Subsequently, 10-valent PCV (PCV10) and 13-valent PCV (PCV13) vaccines were licensed. PCV10 and PCV13 targeted the same serotypes as PCV7 in addition to three (1, 5, and 7F) or six (1, 3, 5, 6A, 7F, and 19A) additional serotypes, respectively. Both were licensed based on World Health Organization (WHO) recommendations whereby approval of new PCVs was based on the demonstration of immunologic noninferiority to PCV7. These PCVs have been highly effective in reducing incidence of diseases such as invasive pneumococcal disease (IPD), pneumococcal pneumonia, and acute otitis media due to the vaccine serotypes [1].
PCVs reduce the burden of pneumococcal disease through both direct and indirect protection [2,3]. The latter occurs by reducing nasopharyngeal carriage acquisition or density among vaccinated persons (primarily children) and thus reducing transmission of vaccine serotypes to unvaccinated persons.
Through these mechanisms, PCV13 has led to substantial reductions in IPD globally since its introduction [4]. However, although PCV13 is the only licensed PCV that contains serotype 3 in its formulation, authors have debated the presence and degree of vaccine effectiveness (VE) against this serotype [4,5]. Given that there was no prelicensure efficacy study for PCV13 in infants, all estimates of PCV13 VE against IPD have been based on real-world observational studies, which have often found different estimates of direct or indirect PCV13 protection against serotype 3 by geography and time [3,[6][7][8]. For example, after the United Kingdom (UK) introduced PCV13 into the routine pediatric immunization schedule in 2010, IPD incidence due to PCV13 serotypes other than 19A decreased, and serotype 19A IPD incidence plateaued, over the first 4 years following PCV13 introduction in both children and unvaccinated adults [9]. However, beginning in 2014, serotype 3 IPD incidence began to increase in both age groups, while IPD incidence due to the other vaccine serotypes kept decreasing [5].
There is a large body of evidence from both randomized controlled trials and observational studies that PCV13 provides direct protection against serotype 3 in vaccinated children and adults. A recent meta-analysis of observational studies in infants [10] estimated VE against serotype 3 IPD of 63.5% (95% confidence interval, 37.3%-89.7%). In a study funded by the European Centre for Disease Prevention and Control (ECDC), PCV13 VE for serotype 3 IPD was 70% (95% confidence interval [CI], 44%-83%) for ≥1 dose and 57% (95% CI, 5%-81%) for children who were fully vaccinated [11]. In post-hoc analyses from a randomized controlled trial in The Netherlands [12], VE against serotype 3 was 60.0% (95% CI, 5%-85%) for chest X-ray-confirmed community-acquired pneumonia (CAP) and 61.5% (95% CI, 18%-83%) for clinical CAP in the modified intention-to-treat population [13]. Finally, a recent meta-analysis of three studies in adults also found that PCV13 had a VE against serotype 3 for hospitalized CAP of 52.5% (95% CI, 62%-76%) [14]. However, there have also been published case-control studies in children that have shown limited effectiveness against serotype 3 IPD, with some suggesting that VE against serotype 3 IPD wanes over time [1]. In addition, and as noted in a previous review of PCV13 impact on serotype 3 IPD in children [10], some prelicensure clinical trials showed that the immune response for serotype 3 following the booster dose was not increased above the levels seen after the infant vaccination series, suggesting potential hyporesponsiveness [16].
More problematic is the debate over the existence of indirect protection from PCV13 against serotype 3. As evidence of indirect protection among older unvaccinated persons, a study published by the ECDC [17,18] found a statistically nonsignificant reduction in serotype 3 IPD incidence in older unvaccinated adults across six PCV13 settings through 2014, followed by an increase through 2017, resulting in an overall 12% increase comparing 2017 to 2009. Conversely, in PCV10 countries, serotype 3 IPD incidence steadily increased from the time of PCV10 introduction, with a 56% increase in serotype 3 IPD comparing 2017 to 2009. In contrast, a randomized controlled trial of PCV7 versus PCV13 in children found no efficacy against serotype 3 carriage, although confidence limits were wide enough to allow for a potential effect [19].
In sum, some investigators [20,21] have concluded that PCV13 has no direct or indirect protection against serotype 3, classifying the serotype as a nonvaccine serotype (NVT). For example, a recent modeling exercise by the Joint Committee on Vaccination and Immunisation categorized serotype 3 as an NVT when estimating the impact of removing a priming dose from the infant vaccination schedule [22]. Furthermore, a number of cost-effectiveness studies also assume PCV13 provides no protection against serotype 3 [23][24][25]. If such an assumption is incorrect, it will underestimate the benefit of PCV13 vaccination. In this context, it is important to separate direct from indirect effects, since a vaccine may provide the former without the latter. In this circumstance, effectiveness or efficacy studies may demonstrate direct protection while population-based surveillance shows no impact among unvaccinated age cohorts, or even among the target vaccine groups if coverage remains sufficiently low.
The goal of the current study was to evaluate the validity of assumptions surrounding the VE of PCV13 against serotype 3. To do this, we compared the prospective trend in serotype 3 IPD incidence post-PCV13 introduction under different VE assumptions, using the observed UK IPD surveillance data as the backbone of the calculations. Many researchers have used mathematical models to carry out experiments, such as this one, that are impractical to conduct in the real world [26][27][28]. In carrying out this study, we used a previously published and validated model [29] of pneumococcal carriage and IPD that was calibrated to the UK.
Model Overview
To assess PCV13 VE against serotype 3 in the UK, we adapted a previously published dynamic transmission model that simulates the spread of pneumococcal carriage and development of IPD in a population over time [29]. The model stratifies individuals by the presence or absence of pneumococcal carriage, vaccine status, and age group. PCV13 vaccination in the UK, based on a 2+1 schedule (at 2, 4, and 12 months of age), is captured by transitioning eligible age groups through vaccine dose compartments based on dosing schedule and adherence.
In the model, individuals who acquire carriage may subsequently develop IPD. Entering a vaccine dose compartment protects against developing IPD by reducing the probability of developing IPD (VE_IPD) or reducing the probability of acquiring carriage (VE_C) when exposed to a particular serotype, for the duration of protection within the model. The reduction in circulating carriage afforded by the vaccine's VE_C lessens the probability of acquiring carriage for unvaccinated adults as well. The model tracks carriage acquisition, carriage duration, and development of IPD over time based on the individual's vaccination status and the population-level carriage prevalence for each serotype (force of infection). On the basis of Wasserman et al. [29], individuals who acquire serotype 3 carry it for an average duration of 6.2 weeks and have a probability of IPD given carriage of nine cases per 100,000 acquisitions.
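As a highly simplified illustration of this structure, the sketch below integrates a single-serotype susceptible-carrier model in which vaccination reduces the acquisition rate by VE_C and IPD cases arise from carriage acquisitions through a case-carrier ratio. Only the 6.2-week carriage duration and the nine-per-100,000 case-carrier ratio come from the text; the transmission rate, coverage, and initial prevalence are invented, and the actual model is stratified by age, serotype, and vaccine dose.

```python
# Minimal single-serotype carriage/IPD sketch (illustrative parameters except where noted)
duration_wk = 6.2                 # mean carriage duration (from the text), weeks
clearance = 1.0 / duration_wk     # carriage clearance rate, per week
beta = 0.2                        # transmission rate per week (assumed)
ve_c = 0.19                       # vaccine effectiveness against carriage acquisition
coverage = 0.9                    # vaccinated fraction of the population (assumed)
ccr = 9.0 / 100_000               # IPD cases per carriage acquisition (from the text)

carriage = 0.02                   # initial carriage prevalence (assumed)
dt, weeks = 0.1, 52 * 5
ipd_per_100k = 0.0
for _ in range(int(weeks / dt)):
    foi = beta * carriage                                   # force of infection
    acq = foi * (1.0 - carriage) * (1.0 - ve_c * coverage)  # acquisition rate
    carriage += dt * (acq - clearance * carriage)
    ipd_per_100k += dt * acq * ccr * 100_000
print(f"carriage prevalence after 5 years: {carriage:.4f}")
print(f"cumulative IPD per 100,000 over 5 years: {ipd_per_100k:.2f}")
```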
The calibration procedure used to estimate the unknown parameters has been described previously [29]. Briefly, the model estimates a prevaccine-era "steady state" by solving a set of linear equations to calculate the force-of-infection parameters for each serotype and age group to initialize the model. The model then uses a simulated annealing approach to randomly draw values for the unknown (calibrated) parameters within certain bounds. The model is then run forward over all years of available IPD surveillance (in this case, 2001 to 2017). The calibration procedure is repeated for a given number of iterations with the goal of minimizing the sum of squared deviations between the yearly IPD incidence values produced by the model and the actual IPD surveillance values by age and serotype group. Table 1 shows the fixed and calibrated inputs for the VE_C and VE_IPD for each modeled serotype group excluding serotype 3. Details on the other input parameter estimates can be seen in Supplementary Table 4. Vaccine effectiveness against IPD is based on Andrews et al. (2014) [38] for all PCV13 serotypes, excluding serotype 3. For serotype 3, this value is scenario specific: in Scenario 1, serotype 3 is assumed to be an NVT; in Scenario 2, this value is derived from Sings et al. [10]; and in Scenario 3, this value is calibrated.
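The calibration can be thought of as minimizing a sum-of-squared-deviations objective with a simulated annealing search. The sketch below shows only the accept/reject logic of such a loop around a generic objective; the toy run_model function, bounds, and cooling schedule are placeholders, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
observed = np.array([1.4, 1.2, 1.1, 1.0, 1.0])  # illustrative yearly IPD incidence

def run_model(params):
    """Placeholder for the transmission model: returns modeled yearly incidence."""
    ve_ipd, ve_c = params
    return observed[0] * (1.0 - 0.5 * ve_ipd) * np.exp(-ve_c * np.arange(observed.size))

def sse(params):
    return float(np.sum((run_model(params) - observed) ** 2))

params = np.array([0.3, 0.1])        # initial guess within bounds [0, 1]
best, best_sse, temp = params.copy(), sse(params), 1.0
for _ in range(2000):
    candidate = np.clip(params + rng.normal(scale=0.05, size=2), 0.0, 1.0)
    delta = sse(candidate) - sse(params)
    if delta < 0 or rng.random() < np.exp(-delta / temp):  # Metropolis acceptance
        params = candidate
        if sse(params) < best_sse:
            best, best_sse = params.copy(), sse(params)
    temp *= 0.998                                          # geometric cooling
print("calibrated (VE_IPD, VE_C):", np.round(best, 3), "SSE:", round(best_sse, 4))
```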
Approach to Modeling PCV13 VE Against Serotype 3
We conducted a series of scenario analyses to assess PCV13's VE against serotype 3. These scenarios included the following:
a. Scenario 1: VE_IPD and VE_C against serotype 3 are both assumed to be 0, mimicking serotype 3 as an NVT.
b. Scenario 2: VE_IPD against serotype 3 is assumed to be 63.5% following the booster dose [10]. VE_IPD against serotype 3 for the first and second priming doses followed the same procedure as described in Wasserman et al. [29]. VE_C against serotype 3 was a calibrated parameter.
c. Scenario 3: VE_IPD and VE_C against serotype 3 were calibrated parameters.
The combination of these three scenarios provides a broad perspective for evaluating PCV13 VE against serotype 3.
The model was first calibrated using the assumptions in Scenario 1. Then, to allow for comparison across scenarios, the pre-PCV13 calibrated inputs and initial conditions from Scenario 1 were also used for Scenarios 2 and 3. The post-PCV13 calibrated inputs were then reestimated for Scenarios 2 and 3 by initializing the calibration procedure at the start of PCV13's introduction. For each scenario, the model was calibrated using the same observed data as in Wasserman et al. [29]. All other fixed inputs remained the same across scenarios. To assess the goodness of fit for each calibration, the sum of squared deviations between modeled and observed IPD incidence was compared between scenarios. Fit was assessed for all serotypes and for serotype 3 IPD incidence separately.
Outcomes
The primary outcomes of the model were the trends in serotype 3 IPD incidence (from here forward referred to as IPD incidence). The study compared modeled annual IPD incidence rates in 2009 (the year preceding PCV13 introduction) and 2017 (the last year of observed data) with observed data. The study also examined the average annual IPD incidence post PCV13 introduction.
The study compared each scenario's outcome to the observed outcome. While all age groups were included in the model parameterization, results are presented for children aged 0-<2 years as well as adults aged ≥65 years, because these age groups represent the majority of the IPD burden (calibration fit trends are presented in the Supplementary Material for each age group included in the model). Direct protection against serotype 3 IPD (VE_IPD) was therefore evaluated based on the best-fit scenarios for the vaccinated children aged 0-<2 years, while the change in IPD incidence in the group aged ≥65 years was used to examine the existence of indirect protection (VE_C) against serotype 3.
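The comparisons reported in the Results reduce to simple summary statistics of the modeled and observed incidence series. A small sketch of the percent-deviation calculations is shown below; the series are invented except for the 2017 Scenario 1 pair quoted in the Results.

```python
def pct_deviation(modeled: float, observed: float) -> float:
    return 100.0 * abs(modeled - observed) / observed

# Invented 2010-2017 series (cases per 100,000); the final pair matches the Scenario 1 figures for 2017
modeled = [1.2, 1.3, 1.4, 1.5, 1.6, 1.8, 2.0, 2.14]
observed = [0.9, 0.8, 0.7, 0.6, 0.7, 0.8, 0.9, 1.03]

print(round(pct_deviation(modeled[-1], observed[-1]), 1))  # deviation in the final year, %
avg_m, avg_o = sum(modeled) / len(modeled), sum(observed) / len(observed)
print(round(pct_deviation(avg_m, avg_o), 1))               # deviation of the average annual incidence, %
```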
Results
Following the calibration procedure for all scenarios, the model found the best fit to observed data across all age groups in the model (both vaccinated and unvaccinated) with the estimated VE_IPD and VE_C against serotype 3 and the goodness-of-fit estimates presented in Table 2. In Scenario 2, the model estimated VE_C against serotype 3 to be 6% for the booster dose using a fixed 63.5% VE_IPD. In Scenario 3, the model estimated VE_IPD and VE_C against serotype 3 to be 31% and 19%, respectively. Scenario 3 had the best fit to observed data for all serotypes in addition to serotype 3. Figures 1 and 2 illustrate the trends in IPD incidence for each scenario and for the observed data for children aged 0-<2 years and adults aged ≥65 years, respectively (results of the calibrated parameters and trends in IPD incidence for each age group included in the model are listed in Supplementary Table 1 and Figures 1 to 7). Pre-PCV13 (2001-2009) modeled IPD incidence aligned well with the observed data. From 2010 onward, the fit (compared with observed data) and trend lines differed across scenarios due to the calibration. In the observed data, IPD incidence decreased after 2009 and then increased again (Figures 1 and 2). After the introduction of PCV13, modeled IPD incidence in Scenario 1 (assuming serotype 3 is an NVT) increased annually for both age groups. Conversely, for children aged 0-<2 years, the scenarios allowing for direct and indirect protection against serotype 3 (Scenarios 2 and 3) led to incidence decreasing before leveling off. For adults aged ≥65 years, Scenario 2 showed a smaller increase in IPD incidence than Scenario 1 after 2009, and Scenario 3 resulted in IPD incidence roughly plateauing from the point of PCV13 introduction.
Observed IPD incidence rate ratios for adults aged ≥65 years from the ECDC [17,18] were also compared with modeled IPD incidence rate ratios from Scenario 1 for the same age group ( Figure 3). Observed data from six sites with universal childhood PCV13 programmes showed a general decrease in serotype 3 IPD incidence rate ratios in 2011-2014, followed by an increase, whereas data from four sites with universal PCV10 programmes showed a general increase in serotype 3 IPD incidence rate ratios starting at the time of PCV10 introduction. When using the modeled IPD incidence rate ratios in Scenario 1 that assume serotype 3 is an NVT, the data were more closely aligned with the observed data from PCV10 sites but underestimated the observed increases. In addition, two PCV10 countries included regions that used PCV13, thus the ECDC data also may have underestimated IPD incidence in PCV10 countries.
Discussion
Using a previously developed dynamic transmission model, we tested several scenario analyses to estimate PCV13 VE against serotype 3 assuming a 2+1 schedule in the UK. Of the three scenarios tested, modeled results produced the closest approximation to observed IPD incidence among children aged 0-<2 years when assuming VE IPD against serotype 3 was equal to 63.5%, a value taken from a previous meta-analysis of PCV13 VE against serotype 3 IPD in children [10]. This evidence further suggests that PCV13 provides direct protection against serotype 3 among vaccinated persons.
The evidence for PCV13 indirect protection against serotype 3 was mixed in our findings. Scenarios assuming non-zero VE IPD and VE C (Scenarios 2 and 3) exhibited better alignment with the average observed annual IPD incidence among adults aged ≥65 years post PCV13 introduction. However, these scenarios were not able to fully capture the immediate decrease in IPD incidence in the ≥65-year age group that was observed in the UK. In contrast, assuming serotype 3 is an NVT resulted in markedly higher IPD incidence than was observed during this time period. However, the fits were generally better for the 0-to <2-year population (direct protection) than for the ≥65-year population (indirect protection).
Modeled results for adults aged ≥65 years in Scenarios 2 and 3 were consistent with observed serotype 3 IPD incidence trends in countries with PCV13. Many countries with pediatric PCV13 immunization programmes have experienced a decrease and subsequent increase in serotype 3 IPD incidence among unvaccinated persons aged ≥65 years. If PCV13 does not provide sufficient protection against carriage of serotype 3, then, over time, cases of serotype 3 could increase if serotype 3 carriage and transmission increase. Conversely, countries that have implemented pediatric PCV10 programmes have experienced an immediate and consistent upward trend in pediatric serotype 3 IPD incidence. As presented in Figure 3, compared to PCV10 countries, PCV13 countries also have experienced a markedly different serotype 3 evolution among unvaccinated older persons. Similarly, comparing the results of the model in Scenario 1 to the ECDC data from PCV10 countries demonstrated that if PCV13 VE C against serotype 3 were 0%, then IPD incidence rates from PCV13 settings would be much higher and more closely aligned to PCV10 settings. In summary, these results highlight that a nontrivial, possibly low level of indirect protection against serotype 3 is conferred by PCV13 and could reflect impact against carriage acquisition, density, or duration. However, it should also be noted that several serotypes that are known to cause IPD, including serotype 3, undergo multiyear epidemic patterns that are independent of PCVs [30] and that may be due to a variety of factors. The observed differences in the PCV13 and PCV10 countries could also have been due to factors other than vaccine, including differences in the serotype distribution of nasopharyngeal carriage among children in the pre-PCV10 or PCV13 vaccine periods [31].
Given our model's difficulty in fitting observed incidence in adults aged ≥65 years, it is likely that other nonvaccine exogenous factors play a role in serotype 3 dynamics and NVT replacement in the UK, and several hypotheses exist. First, as with influenza infection, pneumococcal carriage density may increase logarithmically with use of live attenuated influenza vaccine (LAIV), which, in the UK, is the primary influenza vaccine used among young children, the age group most likely to transmit infection [32]. Several lines of evidence are consistent with but do not confirm this hypothesis. A randomized clinical trial of 151 children found that pneumococcal carriage density was 2.68 times higher in children aged 2-4 years who received LAIV compared with those who did not [33]. A second study in mice found that LAIV increased bacterial transmigration and could increase the risk of otitis media [34]. Given that the UK has a pediatric influenza vaccine uptake of 60%-80% [32], a relatively modest protection from PCV13 against serotype 3 could be overwhelmed by stimulation of pneumococcal proliferation among heavy transmitters immediately before the season of greatest pneumococcal disease risk. Second, there may be additional carriage reservoirs outside of children that may lead to increases in disease caused by NVT and serotype 3, as vaccine serotype carriage is reduced at a population level. By using pediatric carriage-testing methodology (e.g., a focus on the nasopharynx versus saliva and oropharynx), recent studies have reported that adult pneumococcal carriage has been underestimated multiple-fold [35]. Third, a recent report on global serotype 3 genotypes and genetic evolution reported that a new antimicrobial-resistant clade of serotype 3 emerged in 2014 [36], and that emergence did not correlate with the use or timing of introduction of PCV13 at a population level. As with the LAIV hypothesis, this finding raises the possibility that PCV13 impact is being overwhelmed by a new clade for which community antibiotic pressure is no longer providing sufficient synergy. Fourth, serotype 3 may actually represent a serogroup with multiple related serotypes, similar to those identified in the recent separation of serotypes 6A and 6C [37]. In this scenario, PCV13 may have greater impact against one form of serotype 3, which then is replaced by a second form less susceptible to vaccine-induced immunity. However, further effectiveness or immunogenicity studies are required to support this theory. Fifth, there may be changes in antibiotic pressure with rational antibiotic use policies, or temporal changes may have occurred in risk factors for pneumococcal disease, including those that favor serotype 3, such as aging of the population, increased prevalence of chronic diseases, changes in breastfeeding or child group care practices, or changes in the use of extended care facilities for older adults. Finally, there are aspects of pneumococcal disease that are still not fully understood. For example, NVT IPD incidence in the UK increased approximately linearly following the introduction of PCV13 [5]. Starting in 2014, however, NVT incidence substantially increased [5]. Further research is recommended to better understand these phenomena and to better capture the dynamics of a very complex disease.
As with any modeling study, this analysis is subject to several limitations. The most notable limitation is the limited availability of data on VE and duration of protection for each dose. Similarly, real-world data on carriage of each serotype over time are extremely limited. The results are thus subject to uncertainty. A further limitation is that the results are based on only one country, and as such our findings may or may not be generalizable to other settings. This question could be explored in other countries in which PCV13 was introduced to better assess the consistency of evidence of PCV13 against serotype 3. However, few countries have sufficiently robust data available to model complex serotype dynamics before and after vaccine introduction. Additionally, for computational reasons and a lack of data, the model does not consider the impact of immigration/emigration or transient travel on carriage and IPD incidence. These factors could influence the transmission of carriage and disease, as individuals traveling from countries without PCV13 may affect the carriage rate within the UK population. Finally, implications of LAIV or other population-level impacts that could be driving increases in serotype 3 IPD incidence in more recent years, as discussed above, were not addressed in the model. These structural limitations would require a substantially more computationally intensive model, and that requirement, combined with the absence of data to estimate key parameters, may render accounting for them infeasible.
Conclusions
Using a dynamic transmission model parameterized with the best available evidence and calibrated to the UK surveillance system, our results support the hypothesis that PCV13 provides direct protection against serotype 3 for vaccinated persons and may provide additional protection against some aspect of serotype 3 carriage, whether acquisition, density, or duration. Policymakers should consider direct and indirect effects of conjugate pneumococcal vaccines when interpreting changes in disease incidence rates, including those for specific serotypes. Additionally, policymakers should recognize that PCVs represent only one of many potentially competing factors that can influence pneumococcal disease epidemiology. Further research is necessary to better understand the complexity of disease transmission dynamics and the evolution of serotype epidemiology.
Calibrated Parameter Estimates
All non-calibrated parameters are taken from Wasserman et al., 2018. All parameters that varied across scenarios are listed in Table 4. Probability of IPD given carriage acquisition (NVT) a : 2 per 100,000 acquisitions. IPD = invasive pneumococcal disease; PCV = pneumococcal conjugate vaccine; VE = vaccine effectiveness; NVT = nonvaccine serotype. a To allow for comparison across scenarios, the pre-PCV13 calibrated inputs and initial conditions in Scenario 1 were also used for Scenarios 2 and 3.
Modeled and Observed Serotype 3 IPD Incidence for All Age Groups
Figures 1 to 7 illustrate the trends in IPD incidence for each scenario and for the observed data for each age group included in the model. Results were inconclusive in analyzing the fit and primary outcomes of interest for people aged 2-64 years, as IPD incidence was very close to zero for this population. IPD = invasive pneumococcal disease; PCV13 = 13-valent pneumococcal conjugate vaccine. Scenario 1 assumes serotype 3 is a nonvaccine serotype. Scenario 2 derives the PCV13 vaccine effectiveness against serotype 3 IPD from Sings et al., 2018 [10]. Scenario 3 assumes the PCV13 vaccine effectiveness against both serotype 3 IPD and carriage is unknown, and the model calibrates these values.
"Biology",
"Medicine"
] |
Semantic 3D Reconstruction with Learning MVS and 2D Segmentation of Aerial Images
Semantic modeling is a challenging task that has received widespread attention in recent years. With the help of mini Unmanned Aerial Vehicles (UAVs), multi-view high-resolution aerial images of large-scale scenes can be conveniently collected. In this paper, we propose a semantic Multi-View Stereo (MVS) method to reconstruct 3D semantic models from 2D images. Firstly, the 2D semantic probability distributions are obtained by a Convolutional Neural Network (CNN). Secondly, the calibrated camera poses are determined by Structure from Motion (SfM), while the depth maps are estimated by learning-based MVS. Combining 2D segmentation and 3D geometry information, dense point clouds with semantic labels are generated by a probability-based semantic fusion method. In the final stage, the coarse 3D semantic point cloud is optimized by both local and global refinements. By making full use of the multi-view consistency, the proposed method efficiently produces a fine-level 3D semantic point cloud. The experimental result evaluated by re-projection maps achieves 88.4% Pixel Accuracy on the Urban Drone Dataset (UDD). In conclusion, our graph-based semantic fusion procedure and refinement based on local and global information can suppress noise and reduce the re-projection error.
Introduction
Semantic 3D reconstruction makes Virtual Reality (VR) and Augmented Reality (AR) much more promising and flexible. In computer vision, 3D reconstruction and scene understanding are receiving more and more attention these days. 3D models with correct geometrical structures and semantic segmentation are crucial in urban planning, automatic piloting, robot vision, and many other fields. For urban scenes, semantic labels are used to visualize targets such as buildings, terrain, and roads. A 3D point cloud with semantic labels makes the 3D map simpler to understand, thereby facilitating subsequent research and analysis. 3D semantic information also shows potential in automatic piloting. For a self-driving vehicle, one of the most important things is to distinguish whether the road is passable or not. Another essential thing for an autonomous automobile is to localize other vehicles in real-time so that it can adapt to their speed, or exceed it if necessary. In the field of robotics, scene understanding is a standard task for recognizing surrounding objects. The semantics of the surrounding environment play a vital role in applications such as loop closure and route planning.
Although 3D semantic modeling has been widely studied in recent years, the approaches of extracting semantic information through the post-processing of point cloud reconstruction generally lead to inconsistent or incorrect results. Performing semantic segmentation on point cloud data is more difficult than it is on 2D images. One major problem is the lack of 3D training data, since labeling a dataset in 3D is much more laborious than in 2D. Another challenge is the unavoidable noise in 3D point clouds, which makes it difficult to accurately distinguish which category a point belongs to. Thus, it is necessary to develop new semantic 3D reconstruction approaches by simultaneously estimating 3D geometry and semantic information over multiple views. In the past few years, many studies on image semantic segmentation have achieved promising results by deep learning techniques [1][2][3][4]. Deep learning methods based on well-trained neural networks can perform pixel-wise semantic segmentation on various images. Meanwhile, deep-learning-based methods are not only able to extract semantic information, but are also practical for solving Multi-View Stereo (MVS) problems. Recently, learning-based MVS algorithms [5,6] have been proposed to generate high precision 3D point clouds for large-scale scenes. These results greatly inspired us and gave rise to our research on semantic 3D reconstruction. In this paper, we mainly focus on developing accurate, clear, and complete 3D semantic models of urban scenes.
Once satisfactory depth and semantic maps are acquired, 3D semantic models can be easily generated. 3D laser scanners can detect depth directly but only perform well in short-distance indoor scenes. Compared with 3D laser scanners, purely RGB-based methods that reconstruct 3D models from 2D images are cheaper, faster, and more general. Recently, Unmanned Aerial Vehicles (UAVs) have become suitable for collecting multi-view, high-resolution aerial images of large-scale outdoor scenes. The calibrated camera poses can be obtained from the images by the traditional Structure from Motion (SfM) technique. After that, 3D point clouds are determined by fusing 2D images according to multi-view geometry.
However, due to the occlusions, the complexity of environments, and the noise of sensors, both 2D segmentation and depth estimation results contain errors. As a result, many inconsistencies may occur when projecting the multi-view 2D semantic labels to the corresponding 3D points. There is still plenty of work to do to obtain accurately-segmented 3D semantic models. With the boom of deep learning methods, 2D segmentation tasks are reaching high performance levels, which makes it possible to acquire a large-scale 3D semantic model easily. Nevertheless, errors within depth maps and semantic maps may lead to inconsistency. This can be alleviated by considering 3D geometry and 2D confidence maps together in an optimization module. Moreover, 3D models with coarse segmentation still need further refinement to filter erroneous points. In a nutshell, the main contributions of our work are threefold:
• We present an end-to-end, learning-based, semantic 3D reconstruction framework, which reaches high Pixel Accuracy on the Urban Drone Dataset (UDD) [7].
• We propose a probability-based semantic MVS method, which combines the 3D geometry consistency and 2D segmentation information to generate better point-wise semantic labels.
• We design a joint local and global refinement method, which is proven effective by computing re-projection errors.
Related Work
Right before the renaissance of deep learning, obtaining a good pixel-wise segmentation map of an image was a hard task. Bao, S.Y. et al. [8] use object-level semantic information to constrain camera extrinsics. Some other methods perform the segmentation directly on the point cloud or meshes, according to their geometric characteristics. Martinovic, A. et al. [9] and Wolf, D. et al. [10] use random forest classifiers for point segmentation, while Häne, C. et al. [11,12] and Savinov, N. et al. [13] treat it as an energy minimization problem in a Conditional Random Field (CRF). Ray potential (likelihood) is frequently adopted in semantic point cloud generation.
The flourishing CNN-based semantic segmentation methods are quickly outperforming traditional methods in image semantic segmentation tasks; take, for example, the Fully Convolutional Network (FCN) [1] and Deeplab [3]. High-level tasks such as scene understanding and semantic 3D reconstruction can now build on these steadily maturing components. The goal of 3D semantic modeling is to assign a semantic label to each 3D point rather than each 2D pixel. Several learning-based approaches follow an end-to-end manner, analyzing the point cloud and giving segmentation results directly in 3D. Voxel-based methods such as ShapeNets [14] and VoxNet [15] were proposed naturally. Some methods learn a spatial encoding of each point and then aggregate all individual point features into a global point cloud signature [16,17]. However, current deep-learning-based segmentation pipelines cannot handle noisy, large-scale 3D point clouds. Thus, a feasible method is required to first perform pixel-wise semantic segmentation on 2D images and then back-project and fuse these labels into 3D space using the calibrated cameras. The methods above handle the point cloud directly, which means they carry a costly computational burden. In other words, they cannot manage large-scale 3D scenes without first partitioning the scene. Moreover, because the morphological gap between point clouds from different scenarios is large, these algorithms may generalize poorly.
Several methods perform semantic segmentation on 2D images and make use of multi-view geometric relationships to project semantic labels into 3D space. For RGBD-based approaches, once good semantic maps of each image are acquired, the semantic point clouds can easily be fused. Vineet, V. et al. [18] took advantage of a random forest to classify 2D features to get semantic information, while Zhao, C. et al. [19] used FCN with CRF-RNN to perform segmentation on images. McCormac, J. et al. [20] and Li, X. et al. [21] proposed incremental semantic label fusion algorithms to fuse 3D semantic maps. For RGB-based approaches, also addressed as Structure from Motion (SfM) and MVS, each point in the generated 3D structure corresponds to pixels on several images. Following the prediction of 2D labels, the final step is to assign each 3D point a semantic label [20,22]. The refinement process is as essential as the generation process of the semantic point cloud itself. Chen, Y. et al. [7] and Stathopoulou, E.K. et al. [23] filter mismatches using the semantic labels of feature points. With the motivation of denoising, Zhang, R. et al. [24] proposed a Hough-transform-based algorithm called FC-GHT to detect planes in the point cloud for further semantic label optimization. Stathopoulou, E.K. et al. [23] used semantic information as a mask to wipe out the meshes belonging to the semantic class sky. These methods have two primary drawbacks. Firstly, they only use the final semantic maps, which means the probabilities of other categories are discarded. Secondly, they integrate no global constraints into their algorithms. In response, we propose some ideas for improvement.
Overall Framework
The overall framework of our method is depicted in Figure 1. In the Deeplab v2 [3]-based 2D segmentation branch, we discard the last Argmax layer of the network. We save pixel-wise semantic probability maps for every image instead. With the help of COLMAP-SfM [25], we simultaneously estimate the camera parameters and depth ranges for the source images. In order to acquire 3D geometry for large scale scenes, we utilize learning-based MVS method R-MVSNet [6] to estimate depth maps for multiple images. After 2D segmentation and depth estimation, we obtain a dense semantic point cloud by the semantic fusion method according to multi-view consistency. Finally, we propose a graph-based point cloud refinement algorithm integrating both local and global information as the last step of our pipeline. Figure 1. General pipeline of our work. Three branches are implemented to process the reconstruction dataset. The upper branch is the semantic segmentation branch to predict the semantic probability map; the middle branch is SfM to calculate the 3D odometry and camera poses; the lower branch is to estimate the depth map. Then, semantic fusion is applied to fuse them into a coarse point cloud. The last step is to refine the point cloud by local and global methods.
2D Segmentation
In this research, Deeplab v2 [3] with Residue Block is adopted as our segmentation network. The pretrained weights of ResNet-101 [26] on Imagenet [27] are used as our initial weights. We adopt the residual block to replace the ordinary 2D convolution layer to improve the training performance. We also modify the softmax layer that classifies the images to fit the label space of the UDD [7] dataset. With the network all set up, the training set of UDD [7] is employed for transfer learning.
The label space of UDD [7] is denoted as L = {l_0, l_1, l_2, l_3, l_4}, which contains Vegetation, Building, Road, Vehicle, and Background. After the transfer learning process, we predict the semantic maps for every image in the reconstruction dataset. Furthermore, we save the weight matrix before the last Argmax layer. This matrix P(L) represents the probability distributions of every pixel in the semantic label space.
Learning-Based MVS
In order to acquire 3D geometry for large scale scenes, we explore the learning-based MVS method to estimate depth maps for multiple images. R-MVSNet [6], a deep learning architecture with the capability to handle multi-scale problems, has advantages in processing high-resolution images and large-scale scenes. Moreover, R-MVSNet utilizes the Gated Recurrent Unit (GRU) to sequentially regularize the 2D cost maps, which reduces the memory consumption and makes the network flexible. Thus, we follow the framework of R-MVSNet to generate corresponding depths of the source images and train it on the DTU [28] dataset. Camera parameters and image pairs are determined by the implementation of COLMAP-SfM [25], while depth samples are chosen within [d_min, d_max] using the inverse depth setting. The network returns a probability volume P where P(x, y, d) is the probability estimation for the pixel (x, y) at depth d; then, the expectation depth value d(x, y) is calculated as the probability-weighted sum over all hypotheses, d(x, y) = Σ_d d · P(x, y, d). However, as with most depth estimation methods, the coarse pixel-wise depth data d(x, y) generated by R-MVSNet may contain errors. Therefore, before point cloud fusion by the depth maps, it is necessary to perform a denoising process on the depth data. In this paper, we apply the bilateral filtering method to improve the quality of the depth maps with edge preservation, yielding the refined depth data d′(x, y). As shown in Figure 2, the depth map becomes smoother with edge preservation after bilateral filtering.
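As an illustration of the expectation-based depth regression described above, the following Python sketch (an assumed, simplified data layout, not the R-MVSNet code) computes the probability-weighted depth from a per-pixel probability volume.

```python
# Illustrative sketch: expected depth from a probability volume of shape (D, H, W),
# where depth_hypotheses holds the D sampled depths within [d_min, d_max].
import numpy as np

def expected_depth(prob_volume: np.ndarray, depth_hypotheses: np.ndarray) -> np.ndarray:
    """Probability-weighted sum over all depth hypotheses, per pixel -> (H, W) map."""
    # prob_volume is assumed normalized along the depth axis (axis 0)
    return np.tensordot(depth_hypotheses, prob_volume, axes=(0, 0))

# toy example with 8 depth hypotheses on a 4 x 5 image
D, H, W = 8, 4, 5
P = np.random.rand(D, H, W)
P /= P.sum(axis=0, keepdims=True)          # normalize per pixel
depths = np.linspace(5.0, 50.0, D)          # hypothetical depth range
depth_map = expected_depth(P, depths)       # coarse depth map before bilateral filtering
```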
Semantic Fusion
With the learned 2D segmentation and depth estimation, pixel-wise 2D semantic labels and depth maps are obtained for each view. However, because of the occlusions, complexities of environments, and the noise of sensors, both image segmentation results and depth maps might have a large number of inconsistencies between different views. Thus, we further cross-filter the depth maps by their neighbor views, and then produce the 3D semantic point clouds by combining 2D segmentation and depth maps with multi-view consistency.
Similar to other depth-map-based MVS methods [6,29], we utilize geometric consistency to cross-filter the multi-view depth data. Given the pixel (x, y) from image I_i with depth d(x, y), we project (x, y) to the neighbor image I_j through d(x, y) and the camera parameters. In turn, we re-project the projected pixel back from the neighbor image I_j to the original image I_i; the re-projected depth on I_i is d_reproj. We consider the pixel consistent in the neighbor view I_j when d_reproj agrees with d(x, y) within the relative tolerance τ. According to the geometric consistency, we filter out the depths that are not consistent in at least k views. In this paper, we take τ = 0.01 and k = 3.
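The following is a rough Python sketch of this cross-filtering rule (helper names are hypothetical, and the exact inequality is given by the paper's consistency equation; a relative depth-agreement test is assumed here).

```python
# Rough sketch of the geometric-consistency cross-filtering described above.
TAU = 0.01   # relative depth tolerance (paper value)
K_MIN = 3    # minimum number of consistent neighbour views (paper value)

def is_consistent(d, d_reproj, tau=TAU):
    """Accept the re-projected depth if it agrees with d within a relative tolerance."""
    return abs(d_reproj - d) / d < tau

def keep_depth(d, reprojected_depths):
    """Keep a pixel's depth only if it is consistent in at least K_MIN neighbour views."""
    return sum(is_consistent(d, dr) for dr in reprojected_depths) >= K_MIN

# Example: a depth of 10.0 checked against re-projected depths from four neighbour views.
print(keep_depth(10.0, [10.05, 9.98, 10.02, 11.0]))  # True (3 of 4 views agree)
```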
After cross-filtering, the depths are projected to 3D space to produce 3D point clouds. Since our purpose is to assign point-wise semantic labels to the 3D model, we propose a probabilistic fusion method to aggregate multi-view 2D semantic information. With the fine-tuned CNN, a pixel-wise label probability distribution P(L) has been calculated for each source image. Given a 3D point X which is visible in N views, the corresponding probability on view i for label l_j is p_i(l_j); we accumulate the probability distributions over the N views, where P(l_j) denotes the accumulated probability of point X being labeled l_j. In this way, we transfer the probability distribution of multi-view images into 3D space. The predicted 3D semantic label l(X) of X is then determined by the Argmax operation over P(l_j). As depicted in Figure 3, the probabilistic fusion method effectively reduces errors since it integrates information from multiple images.
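A minimal Python sketch of this fusion step is shown below; the data layout is assumed, and the accumulation is written as a simple sum over views followed by an Argmax (the exact accumulation rule is given by the paper's equation).

```python
# Minimal sketch of probability-based semantic fusion for one 3D point visible in N views.
import numpy as np

def fuse_semantics(per_view_probs: np.ndarray):
    """
    per_view_probs: (N, C) array, row i = p_i(l_j) over the C labels for view i.
    Returns (fused_label_index, accumulated_distribution).
    """
    accumulated = per_view_probs.sum(axis=0)   # accumulate over the N views
    accumulated /= accumulated.sum()           # renormalize to a distribution
    return int(np.argmax(accumulated)), accumulated

# toy example with 3 views and the 5 UDD classes
probs = np.array([[0.1, 0.6, 0.1, 0.1, 0.1],
                  [0.2, 0.5, 0.1, 0.1, 0.1],
                  [0.3, 0.3, 0.2, 0.1, 0.1]])
label, dist = fuse_semantics(probs)   # label -> index 1 (e.g., Building)
```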
Point Cloud Refinement
Through the semantic fusion method, the 3D point cloud is classified into point-wise semantic labels. However, there are still a few scattered points with erroneous labels due to incorrect semantics or depths of source images. To remove these unexpected semantic errors, we explore both local and global refinement strategies for point cloud refinement. The KD-Tree data structure is employed to accelerate neighbor queries on the point cloud from O(n) to O(log n).
Generally, adjacent points often have some correlation and are more likely to be segmented into the same class. Hence, we apply the local refinement method to each point by combining the hypotheses of its neighbor points. Given a 3D point X from the dense semantic model, the k nearest neighbors of X can be determined quickly through the KD-Tree structure established over the whole point cloud. P_i(l_j), i = 1, ..., k, represents the probability of neighbor point i being labeled l_j; the new semantic label l′(X) is updated from these neighbor distributions. However, the local refinement method only takes the local adjacency into consideration and ignores global information. For overall optimization, we further apply a graph-based global refinement method by minimizing an energy function. For every 3D point in the point cloud V, a graph G is established by connecting it with its k nearest neighbors. The energy function is defined over L = {l(X) | X ∈ V}, the semantics of V, and D, the set of all neighbor pairs. Similarly to [30], B(l(X_p), l(X_q)) and R(l(X)) = (1/k) Σ_{i=1}^{k} P_i(l_j) are the boundary term and inner region term, respectively, while λ ≥ 0 is a constant. Finally, the energy E(L) is minimized by a max-flow algorithm, as implemented in [31]. The refined point cloud is illustrated in Figure 4. Compared with the coarse result, our method wipes out semantic outliers and noise.
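As an illustration of the local refinement step (not the authors' implementation), the sketch below uses SciPy's cKDTree to average the label distributions of each point's k nearest neighbors and re-assign labels; the absolute distance threshold on neighbors and the graph-cut global step are omitted.

```python
# Illustrative local refinement: re-label each point from the averaged label
# distributions of its k nearest neighbours, found with a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

def local_refine(points: np.ndarray, probs: np.ndarray, k: int = 15) -> np.ndarray:
    """
    points: (M, 3) point coordinates; probs: (M, C) per-point label distributions.
    Returns refined label indices of shape (M,). k = 15 was the best value in the paper.
    """
    tree = cKDTree(points)                 # fast (log n) neighbour queries
    _, idx = tree.query(points, k=k)       # indices of the k nearest neighbours per point
    neighbour_probs = probs[idx]           # (M, k, C)
    fused = neighbour_probs.mean(axis=1)   # average the neighbour distributions
    return fused.argmax(axis=1)            # hard re-assignment of labels
```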
Experimental Protocol
Dataset: We carry out the training process of semantic segmentation on UDD (https://github.com/MarcWong/UDD) [7], a UAV-collected dataset with five categories, containing 160 and 40 images in the training and validation sets, respectively. The categories are Building, Vegetation, Road, Vehicle, and Background. The performance is measured on its test set called PKU-M1, which is a reconstruction dataset also collected by a mini-UAV at low altitude. PKU-M1 consists of 288 RGB images at 4000 × 3000 resolution. We down-sample the images to 1000 × 750 to accelerate the prediction speed.
Coloring policy: Cityscapes https://www.cityscapes-dataset.com [32] is the state-of-the-art semantic segmentation dataset for urban scene understanding, which was released in 2016 and received much attention. We borrow the coloring policy of semantic labels from Cityscapes [32].
Training: UDD [7] is trained with the Deeplab V2 [3] network structure implemented in TensorFlow [33]. We use the stochastic gradient descent [34] optimizer with a weight decay parameter of 5 × 10⁻⁵. The learning rate is initialized to 1 × 10⁻³ with a momentum of 0.99. All experiments are conducted on an Ubuntu 18.04 server with an Intel Core i7-9700K CPU, 32 GB memory, and a single Titan X Pascal GPU.
Measurements recap: Assume the number of non-background classes is k. The confusion matrix M for the foreground categories can then be formed. For a specific foreground semantic label l_x ∈ L, the problem can be formulated as a binary classification problem. From the per-class counts, Pixel Accuracy, precision, recall, and F1-score can then be derived.
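The sketch below (assumed variable names, using the standard definitions of these metrics) shows how they can be computed from a confusion matrix of per-class pixel counts.

```python
# Standard-definition sketch: Pixel Accuracy and per-class F1 from a confusion matrix,
# where conf[i, j] counts pixels of true class i predicted as class j.
import numpy as np

def pixel_accuracy(conf: np.ndarray) -> float:
    """Correctly labeled pixels over all counted pixels."""
    return np.trace(conf) / conf.sum()

def class_f1(conf: np.ndarray, cls: int) -> float:
    """Per-class F1, treating class `cls` as the positive label."""
    tp = conf[cls, cls]
    fp = conf[:, cls].sum() - tp
    fn = conf[cls, :].sum() - tp
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```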
Evaluation Process
We choose proper measurements to quantitatively evaluate the 2D segmentation performance and the 3D semantic model. We randomly labeled 16 images in PKU-M1 to test the segmentation performance. An example of PKU-M1 is shown in Figure 5. Table 1 gives class-wise statistics, where the Building category is segmented very well, but Vegetation, Road, and Vehicle are segmented relatively poorly. Since hand-crafted 3D semantic labeling is still a challenging and tedious task, especially for large-scale scenarios, we have to evaluate the 3D semantic model indirectly. Notice that each 3D point is assigned a semantic label during the semantic fusion process; the label can be projected back to each camera coordinate by the geometric relation. We call this step re-projection. Then, we can indirectly evaluate the 3D semantic point cloud by re-projection images in a simpler manner. However, the re-projection map (Figure 5d) is quite sparse. Only foreground labels, which include Vegetation, Building, Vehicle, and Road, are countable for evaluation. So several common measurements for 2D segmentation are not suitable in our case, such as MIoU (Mean Intersection over Union) and FWIoU (Frequency Weighted Intersection over Union). In our experiment, we choose Pixel Accuracy (Equation (13)) and class-wise F1-score (Equation (16)) for evaluation.
Quantitative Results
With the semantic fusion process introduced in Section 3.4, the coarse semantic 3D point cloud was generated. Its quantitative result is denoted as the 3D baseline in Table 2. To be more specific, most points in the 3D baseline are correct, yet there remain outliers and errors. The evaluation result of the 3D baseline's re-projection map demonstrates that the 3D baseline is much better than 2D in both PA and F1-score. Figure 6a,b illustrate this fact vividly, where Vehicle is segmented badly in 2D segmentation and segmented much better in the 3D baseline. Furthermore, as shown in Table 2, the Pixel Accuracy of the 3D baseline is 87.76%, and the F1-scores of Vehicle, Vegetation, and Road are relatively low. The refinement methods introduced in Section 3.5 are denoted as Local, Global, and Local+Global in Table 2. The Local, Global, and Local+Global methods in Table 2 have been fully tested, and we report the best results obtained under various parameters in this table. With refinement, the F1-score of Vehicle rises significantly, while Building, Vegetation, and Road also have increased scores. In addition, the Local+Global optimization approach is better than the Local or Global approach in each semantic category. This leads to the conclusion that the Local+Global approach outperforms any single Local or Global approach.
Discussion
In the following part, the discussion of our semantic fusion method is arranged around three aspects: the down-sample rate, the parameter chosen for the k-nearest neighbor algorithm, and the choice between soft and hard decision strategies.
Parameter Selection for K-Nearest Neighbors
There are two criteria for judging neighbor points. As the name k-nearest neighbors itself indicates, the maximum number of neighbors is k. Besides that, the absolute distance in 3D space should also be limited. We down-sample the point cloud again with a rate of 0.001 to build a small KD-tree. Then we calculate the average distance of these points, setting the value to be the threshold of absolute distance. As indicated in Figure 7, the Pixel Accuracy first increases as k grows, and reaches its peak at k = 15. After crossing the peak, accuracy decreases as k increases. This is because as k increases, the local method negatively optimizes for small areas such as vehicles and narrow roads.
Soft vs. Hard Decision Strategy
Decision strategies based on probability, such as Bayesian and Markov decision processes, are soft, while thresholding and the Argmax layer are hard decision strategies. There is no doubt that hard decision processes discard some information. As demonstrated in Figure 7, Prob outperforms Argmax under the same k in most circumstances. The best result of Prob is also greater than that of Argmax. This reveals that the soft decision strategy leads to better performance.
Down-Sample Rate
Since the dense point cloud of a specific outdoor scene collected by UAV usually contains around 20 million points or more, global algorithms cannot handle all points at once. For instance, PKU-M1 contains 27 million points. Table 3 shows that the Pixel Accuracy generally reaches its peak at a down-sample rate of 1, which means no down-sampling is performed at all. Increasing the down-sample rate makes the filtered point cloud denser, which brings the neighbors of a single point closer together. The closer points are, the more likely they belong to the same semantic class. So it is reasonable that increasing the down-sample rate benefits the final Pixel Accuracy. If the performance of a method at a lower sampling rate is higher than that of another, it is reasonable to believe that the former method is better. Figure 7. Ablation study on parameter selection for k-nearest neighbor and the soft vs. hard decision strategy. For both Prob and Argmax methods, k = 15 is the best parameter. In most circumstances, the soft decision strategy Prob dominates the hard decision strategy Argmax.
Conclusions
In this paper, we proposed a semantic 3D reconstruction method to reconstruct 3D semantic models by integrating 2D semantic labeling and 3D geometric information. In our implementation, we utilize deep learning for both 2D segmentation and depth estimation. Then, the semantic 3D point cloud is obtained by our probability-based semantic fusion method. Finally, we apply the local and global approaches for point cloud refinement. Experimental results show that our semantic fusion procedure with refinement based on local and global information is able to suppress noise and reduce the re-projection error. This work paves the way for realizing finer-grained 3D segmentation and semantic classifications.
Conflicts of Interest:
The authors declare no conflict of interest.
"Computer Science"
] |
Graphene Hybrid Metasurfaces for Mid-Infrared Molecular Sensors
We integrated graphene with asymmetric metal metasurfaces and optimised the geometry-dependent photoresponse towards optoelectronic molecular sensor devices. Through careful tuning and characterisation, combining finite-difference time-domain simulations, electron-beam lithography-based nanofabrication, and micro-Fourier transform infrared spectroscopy, we achieved precise control over the mid-infrared peak response wavelengths, transmittance, and reflectance. Our methods enabled simple, reproducible and targeted mid-infrared molecular sensing over a wide range of geometrical parameters. With ultimate minimization potential down to atomic thicknesses and a diverse range of complementary nanomaterial combinations, we anticipate a high impact potential of these technologies for environmental monitoring, threat detection, and point-of-care diagnostics.
Introduction
Distinctive mid-infrared (MIR) molecular vibrations, ranging between ~2 and 12 µm, act as characteristic 'molecular fingerprints' for label-free identification of a wide range of chemicals and biomolecules. Usefully, atmospheric transparency windows, at 3-5 µm and 8-13 µm, enable a range of applications such as CO 2 gas sensing and alcohol detection. Within the same MIR spectral range, black body radiation can also be utilised for photodetection and thermal imaging technologies. Taken together, MIR technologies have a substantial role in environmental monitoring, medical diagnosis, and security. However, in comparison to other wavelength regions, there exists a relative lack of MIR sources, detectors, and methodologies.
The current state-of-the-art technologies for MIR sensors are based on semiconductor bulk or quantum structures, such as gallium arsenide (GaAs), indium antimonide (InSb), and mercury-cadmium-telluride (MCT) [1]. Whilst these detectors offer very high sensitivity performance, their wider implementation and dissemination is significantly limited by a combination of high costs, limited spectral range, scarce materials, temperature sensitivity, and need for cooling. As a result, commercially available MIR sensing technologies are typically bulky and expensive, requiring specially controlled operating conditions. Whilst MIR technologies continue to develop, alternative materials and approaches are in high demand to meet the multiple challenges of device sensitivity, spectral range, size, cost, and power consumption [1][2][3].
A new family of low dimensional nanomaterials (graphene, transition metal dichalcogenides, topological insulators) offer unique optoelectronic functionalities and new technological solutions beyond those attainable with conventional semiconductors [4][5][6][7][8][9]. Graphene-based devices, in particular, have attracted extensive research attention due to their unique optoelectronic properties, broadband absorption, and electronic tuneability. For example, graphene shows good potential as atomically thin transparent conductive electrodes, combining high optical transparency (over 97%) for a wide range of wavelengths with
Integration of Graphene with Metal Arrays to Form Hybrid Metasurfaces
Two approaches were implemented for integrating the metal metastructures with monolayer CVD graphene (Graphenea, San Sebastián, Spain): (1) graphene transfer onto pre-fabricated and characterised metal arrays and (2) direct metal deposition on the graphene surface. Transferring graphene allowed direct comparison of optically characterized surfaces and reduced risks related to graphene-metal adhesion. Nominally single layer graphene was acquired and transferred onto the fabricated and optically characterised metal arrays on SiO 2 substrates by an adhesive transfer method [24]. The presence of graphene layers was confirmed by a combination of contrast enhanced optical microscopy [44], electrical conductivity, and confocal micro-Raman analysis (S&I Spectroscopy & Imaging, Warstein, Germany).
Graphene optoelectronic device structures were patterned by electron beam lithography, graphene plasma ashing, metallisation, and lift-off as described in [45]. Material and device conductivity were measured at room temperature in an electronic probe station and 2450 SourceMeter (Keithley, Cleveland, OH, USA). In the absence of graphene, macroscopic surface conductivity was too low to measure for the Au metasurfaces on SiO 2 , with resistivities of a few Ω on the Au contacts, indicating correctly isolated Au nanostructures.
With the addition of graphene, two-dimensional sheet resistivities of the devices were estimated as ~0.7 ± 0.3 kΩ/square. By considering commercial material standardisation and comparing to previous characterisation data, the graphene device mobilities were estimated as ~1000 cm² V⁻¹ s⁻¹, with two-dimensional charge carrier densities of approximately 1 × 10¹² cm⁻².
Fourier-Transform Infrared Characterisation of the Metasurfaces
Micro reflection and transmission Fourier-transform infrared spectroscopy (FTIR) measurements were performed by a VERTEX 80v FT-IR spectrometer attached to a HYPERION 2000 FTIR microscope (Bruker, Ettlingen, Germany), which allowed both transmission and reflection spectra of the samples to be measured. The FTIR spectra were recorded in the range of 5000-500 cm⁻¹ (2-20 µm), with a resolution of 2 cm⁻¹, measured over areas from 40 × 40 µm² to 1 × 1 cm². Polarization measurements used an additional model P03 IR Polarizer (Bruker, Ettlingen, Germany).
Finite-Difference Time-Domain Studies
The applied FDTD code was developed in-house, as described in several previous studies [46][47][48]. The purpose of developing and applying our own FDTD codes was to tackle the considerable difference between the mid-infrared wavelengths (2-10 µm) of interest and the thickness (d = 0.335 nm) of the graphene sheet. Dielectric coefficients of gold were numerically described using a two-pole Lorentz model by fitting refractive index and extinction coefficient data obtained from [49]. Periodic boundary conditions were applied in the x and y directions, while the ends of the calculation domain in the z dimension were simulated by perfectly matched layers (PMLs). The time step duration in the FDTD calculations was 1.83 × 10⁻¹³ s, while a mesh size of 0.012 nm was used, so that detailed features of the electromagnetic wave within the single graphene sheet were properly resolved. The total number of simulated time steps was 16 × 10⁵, which was sufficient for all fields to propagate away from the calculation domain of the FDTD study. The FDTD code required up to 200 GB of RAM and was run on a mini-supercomputer (Intel Xeon, 144 cores, 500 GB RAM, 20 TB HDD).
Geometric Tuneability of Metal Metasurfaces on SiO 2
Metasurfaces, comprised of asymmetric metal nanoantenna arrays, were initially designed based on FDTD studies, which indicated strong interactions with electromagnetic waves, even for sparse metal arrays, with significantly enhanced reflectance (85%), a substantial diffraction (10%), and a much-reduced transmittance (5%) for an array of only 15% surface metal coverage [48]. Importantly, the propagating electromagnetic fields were estimated to be transiently concentrated around the surface nanolayer (e.g., graphene) in a time duration on the order of tens of nanoseconds, suggesting a novel efficient near-field optical coupling.
Since the direct interaction of unpatterned graphene in the mid-infrared can be relatively weak, in this approach, we began with the fabrication and analysis of metal nanoantenna metasurfaces on SiO2-Si substrates, followed by description and analysis of the integrated hybrid graphene metasurfaces. Figure 1 shows the geometry of one of the studied metastructure arrays measured by scanning electron microscopy. For all structures presented in this study, the designed metal thickness (L_z = 50 nm ≈ λ_MIR/80), width (L_y = 200 nm), and nanogap lengths (g_x = P_x − L_x = 200 nm) remained fixed, whilst the metal length (L_x) and lateral pitch (P_y) were varied. Metal coverages were estimated as M% = (L_x × L_y)/(P_x × P_y).
The MIR photoresponse of the fabricated metal metasurfaces was investigated by micro-FTIR spectroscopy. Figure 2a shows FTIR spectra comparing metasurfaces with widely varying metal coverages, defined by the lateral pitch P_y. Two main transmittance minima were observed, defined here as λ1 and λ2. The second transmittance minimum, around 9-10 µm (λ2), was also observed for unpatterned reference regions of Si-SiO2 (SiO2 layer thickness 192 nm), which was attributed to asymmetric Si-O stretching modes of the sample substrate [50]. However, the interaction strength appeared locally enhanced with increasing coverage density of the metasurface arrays. In contrast, the transmittance minimum λ1, displayed around 4 µm, was absent for the unpatterned Si-SiO2 regions. This minimum depended strongly on the IR polarization and metal geometry, and can be fully attributed to the patterned metasurfaces. Peak interaction strengths (transmittance minima, reflectance maxima) were observed to increase with the metal coverage in the range of 3% (P_y = 10 µm) to 20% (P_y = 1.2 µm). For higher density arrays of nanoantennas, the intensity increases appeared to saturate, with a slight broadening of the peak.
Figure 1 caption (summary): the metal thickness (50 nm), width (200 nm), and nanogap distance (200 nm) were fixed, whilst the lateral pitch P_y and horizontal metal lengths L_x were varied; from left to right, the density of metal antennas and coverage % increase, with L_x slightly increased to compensate for P_y-induced blueshifts, targeting CO2 (4.25 µm) with the peak photoresponse wavelengths.
Figure 2. (a) Linear polarized micro-FTIR spectra of the fabricated metasurfaces show two clear mid-infrared photoresponse peaks, λ1 and λ2. Here, the metasurface photoresponses λ1 were targeted at the 4.25 µm absorption peak for CO2, with wavelength-dependent peak interaction intensities increasing with the metal coverage. (b) Measured dependence of the peak-response wavelengths (λ1, λ2) on the designed metal length (L_x), for different array pitch (P_y). λ1 can be attributed to the metal metasurface geometry and increases with metal length L_x whilst decreasing with sub-wavelength metal coverage densities (Pitch, %Metal). The dashed line follows the expected trend for isolated nanoantennas, derived in Section 3.3.
The peak response wavelengths of the main MIR features were investigated as a function of the metasurface geometry. Figure 2b displays FTIR analysis of the peak mid-IR photoresponse for 38 unique fabricated and characterised metal metasurface arrays on SiO2 substrates. Both λ1 and λ2 displayed geometry-dependent tuneability. It was found that the primary geometric variable was the nanoantenna metal length L_x, whilst an important perturbation was induced by the lateral pitch P_y. The shorter infrared peak wavelength response λ1 exhibited the strongest dependence on the array geometries. As with λ1, transmittance minimum λ2 was also observed to shift with varying metal length and lateral pitch (L_x, P_y). However, the shift in λ2 was significantly less sensitive to changes in the geometry, only ranging between ~9.2 and 10.0 µm for the fabricated structures. The smaller observed blueshift in λ2, at high metal coverage, was understood to be the effect of the non-symmetric density-of-states of the Si-O vibration modes in the three-dimensional SiO2 layer (thickness 192 nm) [51,52]. The enhanced transmittance at λ2 was the result of the transiently concentrated electromagnetic field around the surface nanolayer, which strongly activates the Si-O vibration modes in the SiO2 layer. Mid-infrared photoresponses were qualitatively similar for both peaks from room temperature down to 10 Kelvin, suggesting temperature-resilient performance, however with an apparent blueshift of approximately 150 nm at low temperatures.
The characterised photoresponses of both features were also observed to be less critically sensitive to other geometric factors, varied within ~50%, such as metal thickness, metal width, longitudinal pitch, nanogaps, and SiO2 dielectric thickness. Combining this information enabled reliable reproduction of metasurfaces with optimised and well-defined photoresponse for integration with graphene, and towards devices for targeted molecular sensing applications.
Graphene Metasurface Device Integration and Photoresponse
Two technological approaches were investigated to integrate graphene with the geometrically optimised metal arrays to form hybrid metasurface devices. In the first approach, monolayer CVD graphene was transferred onto the pre-characterised SiO2-metal metasurfaces, enabling direct comparison of the photoresponse [24]. Alternatively, the nanofabrication of the metasurfaces was replicated on substrates where graphene already covered the full surface (Figure 3a), allowing simplified and reproducible device processing [45]. With the addition of graphene layers, FTIR spectra were qualitatively similar to uncovered metal arrays on bare SiO2, with the persistent presence of λ1 and λ2 (Figure 3b). However, with the addition of graphene monolayers, an additional blueshift was observed at λ1 for each of the measured array structures (Table 1). Significant signal enhancements of the Si-O (at 9.5 µm) and PMMA (at 5.8 µm) peaks were observed, of +16-19% and +11%, respectively. Considering that the total metal coverages were only 20%, and that the most active focus area around and between the poles of antenna elements was close to ~1% coverage, this represents an order of magnitude enhancement of the local molecular signals. Table 1. Graphene-induced blueshifts in the FTIR peak photoresponses of metasurface arrays following transfer of monolayer CVD graphene (G).
Time-Resolved FDTD Study of the Infrared Pulse Transmission of Graphene
To understand our experimental results, we theoretically investigated infrared pulse transmission through hybrid graphene metasurfaces by the FDTD method. The wavelength range of interest was within the midinfrared range, 2-10 µm, so the intraband conductivity of the graphene sheet is adopted (Equation (1)) [27,53], where E f is the Fermi level, on the order of 0.2 eV, and τ = 100 fs is the relaxation time. From this, the real and imaginary parts of the relative dielectric coefficient describing the graphene plasmonic response to the IR radiation are obtained as functions of the IR wavelength (Equation (2)), where d = 0.335 nm is the thickness of the graphene sheet. Note that ε′ is negative, and its absolute value increases quickly with increasing IR wavelength. ε′′ also increases with the IR wavelength and is positive, indicating that the transmission of IR waves through the graphene sheet is very lossy. Note that in Equations (1) and (2), E f = 0.2 eV and τ = 100 fs are adopted from [27,53]. In [53], the graphene mobility was 2700 cm 2 V −1 s −1, whilst the experimental graphene mobility in this study was estimated as 1000 cm 2 V −1 s −1 (at ~1 × 10 12 cm −2). By scaling τ proportionally to mobility, we obtain a new τ = 100/2.7 fs. Such a modification does not affect Equations (1) and (2), since 1/τ = 1.0/(100.0 × 10 −15) = 1.0 × 10 13 s −1 and 1/(100 × 10 −15 /2.7) = 2.7 × 10 13 s −1, while in the wavelength range of interest, ω = 2π c/λ = 6.28 × 3 × 10 8 /4.0 × 10 −6 = 4.7 × 10 14 s −1 is much larger than the τ −1 factor. The first theoretical study subject was to understand the almost identical FTIR spectra of graphene and removed graphene in Figure 3b. In other words, a single graphene sheet does not affect the optical properties of SiO 2 -Si. This sounds reasonable, since the wavelengths of the IR waves were much larger than the sub-nano feature size of the graphene sheet (0.335 nm); so, macroscopically, the IR waves should transmit through without perceivable perturbation. However, previous studies reported significant crests and troughs in the IR transmission spectra through a single graphene sheet, e.g., [54].
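To make the qualitative statements about ε′ and ε′′ reproducible, the short Python sketch below evaluates the standard intraband (Drude-like) graphene conductivity and maps it onto an effective permittivity for a sheet of thickness d. Only E f ≈ 0.2 eV, τ, and d = 0.335 nm are taken from the text; the functional form is the generic intraband model, not necessarily the paper's exact Equations (1) and (2).

```python
import numpy as np

# Physical constants (SI)
e, hbar, eps0, c0 = 1.602e-19, 1.055e-34, 8.854e-12, 3.0e8

Ef  = 0.2 * e            # Fermi level ~0.2 eV (from the text)
tau = 100e-15 / 2.7      # relaxation time rescaled by the mobility ratio (see text)
d   = 0.335e-9           # graphene sheet thickness [m]

def graphene_eps(lam_um):
    """Effective relative permittivity of a graphene sheet from the generic
    intraband (Drude-like) conductivity; an assumption, not the paper's Eqs. (1)-(2)."""
    w = 2 * np.pi * c0 / (lam_um * 1e-6)                            # angular frequency [rad/s]
    sigma = (e**2 * Ef / (np.pi * hbar**2)) * 1j / (w + 1j / tau)   # sheet conductivity [S]
    eps = 1 + 1j * sigma / (eps0 * w * d)                           # map sheet -> thin bulk layer
    return eps.real, eps.imag

for lam in (2.0, 4.0, 10.0):                                        # mid-IR wavelengths [um]
    ep, epp = graphene_eps(lam)
    print(f"lambda = {lam:4.1f} um   eps' = {ep:8.1f}   eps'' = {epp:6.2f}")
# eps' comes out large and negative, growing in magnitude with wavelength,
# while eps'' is positive and also grows, matching the behaviour described above.
```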
We started with a single graphene sheet in vacuum. The transmittance of an IR plane wave at normal incidence through this single graphene sheet, extended in the xy plane and positioned at z = 0, is easily calculated by applying Fresnel's equations (Equation (3)), where E t, E r, and E i denote the electric field of the transmitted, reflected, and incident wave, respectively, between medium 1, denoted by complex refractive index ñ1, and medium 2, having ñ2. Let ñ = n + iκ be the complex refractive index of the thin graphene film (denoted as medium 2); the measurement is performed in air, so the refractive indices of the spaces above the upper interface of the graphene sheet (medium 1) and below the lower interface (medium 3) are 1.0. By Equation (3), the reflection and refraction coefficients at the upper and lower interfaces (Equation (4)) describe a single reflection and transmission. Here, r12 is the reflection of the plane wave from medium 1 back into medium 1 at the upper interface of the graphene sheet, and t12 is the refraction from medium 1 into medium 2 at the upper interface; r23 and t23 are defined likewise at the lower interface. Note that r12 = −r23. The series of transmitted waves is
E t /E i = e^(iδ) t12 t23 (1 + β + β² + ···) = e^(iδ) t12 t23 lim(n→∞) Σ(i=0..n) β^i, (5)
where δ = ωñd/c0 and β = e^(2iδ) r23 r21. It is easy to see that the result of the above infinite summation is
E t /E i = e^(iδ) t12 t23 / (1 + e^(2iδ) r12 r23). (6)
Since the optical power of the transmitted light is S t = 2c0 ε0 E t ², while the incident optical power is S i = 2c0 ε0 E i ², we obtain the transmittance T = S t /S i = |E t /E i|² through the thin graphene film. The numerically calculated transmittance, by Equation (5) with a limited summation over n and then by Equation (6) for n = ∞ (the black line marked with "∞"), is presented in Figure 4a. For n = ∞, the transmittance is very close to 1.0 due to the extremely small thickness of the graphene sheet (d = 0.335 nm) and appears featureless as a function of the IR wavelength. However, when n is limited, strong oscillation in the transmittance spectrum is observed. Next, we performed FDTD numerical calculations using in-house FDTD codes [46][47][48]. Numerical results were carefully examined in both space and time domains.
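The multiple-reflection argument of Equations (3)-(6) can be checked numerically with the minimal sketch below. The complex refractive index ñ of the film is left as an input (it could, for example, be taken as the square root of the permittivity from the sketch above); truncating the series at a finite n mimics an observation over a finite time window, which is what produces the crests and troughs discussed next. The example index value is purely illustrative.

```python
import numpy as np

d, c0 = 0.335e-9, 3.0e8   # graphene thickness [m], speed of light [m/s]

def thin_film_T(lam_um, n_tilde, n_terms=None):
    """Normal-incidence transmittance of a thin film (complex index n_tilde) in air.
    n_terms=None sums the series in closed form (Eq. (6)); a finite n_terms
    truncates the multiple-reflection series of Eq. (5)."""
    w = 2 * np.pi * c0 / (lam_um * 1e-6)
    delta = w * n_tilde * d / c0                      # single-pass complex phase
    r12 = (1 - n_tilde) / (1 + n_tilde)               # Fresnel coefficients, normal incidence
    t12 = 2 / (1 + n_tilde)
    r23 = (n_tilde - 1) / (n_tilde + 1)               # note r23 = -r12
    t23 = 2 * n_tilde / (n_tilde + 1)
    if n_terms is None:
        ratio = np.exp(1j * delta) * t12 * t23 / (1 + np.exp(2j * delta) * r12 * r23)
    else:
        beta = np.exp(2j * delta) * r23 * (-r12)      # r21 = -r12
        ratio = np.exp(1j * delta) * t12 * t23 * sum(beta**k for k in range(n_terms))
    return abs(ratio) ** 2                            # T = |E_t/E_i|^2 (air on both sides)

n_graphene = 3.0 + 6.0j   # hypothetical mid-IR index, for illustration only
print(thin_film_T(4.0, n_graphene))                   # closed form, n -> infinity
print(thin_film_T(4.0, n_graphene, n_terms=100))      # truncated series
```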
The FDTD-calculated transmission spectrum of the graphene sheet is presented in Figure 4b as a function of the number of FDTD simulation steps, where the graphene sheet was placed in the xy plane and the IR pulse impinged on the graphene structure along the z axis (see Figure 4c). Compared with the black ∞ line in Figure 4a, which is also presented in Figure 4b for direct comparison, the FDTD transmission spectrum oscillated strongly with simulation time.
The critical aspect of the results of Equation (6) and the FDTD calculation is that Equation (6) is derived for a time interval of infinite length, i.e., n → ∞ in Equation (5). When we calculated the transmittance as a function of n, as a means to emulate measurements over finite time intervals, Equations (5) and (6) produced crests and troughs in the transmission spectrum (see Figure 4a), similar to the oscillations in Figure 4b. For the single graphene sheet, there was a very substantial difference between n = 100 and n = ∞. This is to be expected, since the intensity of the IR wave in the graphene sheet reduces gradually due to absorption (loss, ε′′ ≠ 0, although the real loss was negligibly small because of the small thickness) but mainly due to transmission.
This also explains the time dependence of the FDTD-calculated transmission spectrum presented in Figure 4b. The Ex field of the transmitted electromagnetic wave is shown in Figure 4d, indicating that the major part of the electromagnetic field had already passed through the graphene sheet at approximately time step 10^5 (the duration of each time step in the FDTD simulation is δt = 1.83 × 10^−13 s); nevertheless, we still observed significant EM field passing through the transmission detector, which caused the strong oscillation in the FDTD-calculated transmission spectrum in Figure 4b. This effect was very significant for the graphene sheet because its dielectric coefficient was very large, so that the effective light speed there was greatly slowed.
FDTD Analysis of the Hybrid Graphene Metasurfaces
Here, we studied periodic gold nanoantenna metasurfaces embedded in SiO2 and then covered with a single graphene sheet, schematically shown in Figure 5. The two-dimensional metasurface array of gold nanorod antennas was placed on the surface of an insulating SiO2/Si substrate. The size of the metal patch was denoted as Lx × Ly × Lz, where Lx = 1.4 µm is the metal length in the x direction, Ly = 0.2 µm is the metal width in the y direction, and Lz = 0.05 µm is the metal thickness in the z direction. The periods of the array are denoted as Px = 1.6 µm and Py = 1.2 µm in the x and y direction, respectively (Table 1). Figure 5 shows the transmittance spectra of the three sub-micron structures at three different simulation times. Because of the large dielectric coefficients of both the graphene sheet and the gold nanoantenna, as well as the long wavelengths of interest (1-10 µm), the transmittance spectra shown in Figure 5a strongly oscillated: the transmission detector in the simulation was placed within the residual electromagnetic fields captured around the graphene sheet and gold nanoantenna, which were diffracted from the z propagation direction to propagate along the x and y directions and persist a long time since the sub-micron structures are periodic in the xy plane. Only after ~12 × 10^5 simulation steps had all transmitted EM field passed through the detector and the transmittance spectra converged (see Figure 5c). This comparison emphasises the technical importance of avoiding numerical artifacts for this system (Figure 5a,b). Similar to Figure 4, the transmittance through the single graphene sheet was almost perfect, save a small reduction due to reflection by the SiO2-air interface. The addition of the graphene sheet to the gold nanoantenna array blueshifted the transmission minimum from 4.10 µm to 3.63 µm. This blueshift of approximately −470 nm is in close agreement with the experimental observations (Table 1). Note that due to the simplified wavelength-independent dielectric coefficients of the theoretical models, the additional vibrational mode features of the SiO2 and Si substrate materials appearing in the experimental spectra (2-10 µm) are not displayed in the simulated spectra.
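As a quick numerical note on the simulated geometry and shift quoted above (all values taken from the text; the fill factor applies only to this particular unit cell, not to the ~20% coverage of the measured arrays):

```python
Lx, Ly, Lz = 1.4, 0.2, 0.05        # gold antenna dimensions [um]
Px, Py     = 1.6, 1.2              # array pitches [um]

fill = (Lx * Ly) / (Px * Py)       # areal metal coverage of this unit cell
lam_bare, lam_g = 4.10, 3.63       # simulated transmission minima [um]

print(f"metal fill factor : {fill:.1%}")
print(f"graphene blueshift: {(lam_g - lam_bare) * 1e3:.0f} nm "
      f"({100 * (lam_g - lam_bare) / lam_bare:.1f} %)")
```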
Figure 5. FDTD transmittance spectra of parallel (Ex, Hy) polarized EM fields. The gold metasurface geometry was Lx × Ly × Lz = 1.4 × 0.2 × 0.05 µm3, with pitch Px × Py = 1.6 × 1.2 µm2. The number of simulation steps was increased from (a) 4 × 10^5, (b) 8 × 10^5, up to (c) 16 × 10^5 to allow all transmitted EM field through the detector. The addition of graphene is associated with a blueshift in the photoresponse, in close agreement with experimental FTIR spectra (Table 1).
Discussion
Similar blueshifts, of around −500 nm (~10%), were observed both from (i) direct comparison by graphene transfer onto pre-characterised metal metasurfaces and (ii) indirect comparison by patterning identical metal metasurfaces on graphene-covered or bare SiO2 substrates. A weaker blueshift, but much enhanced reflectance, was also observed for the second photoresponse peak λ2, due to the Si-O vibration mode in the SiO2 layer (Table 1).
For the application-relevant PMMA-encapsulated (200 nm) metal metasurfaces and hybrid-graphene devices (Figure 3), FTIR revealed only a limited additional absorption in the MWIR region of interest, with some sharp characteristic features around 5.8 µm and at longer wavelengths. However, a substantial redshift of around 400 nm was observed for λ1 in these devices.
These effects can have a significant practical impact on the design and functionality of such devices, where the optical response may require precise tuning for suitable operational efficiency, for example in molecular sensing applications where the absorption wavelengths can be relatively narrow. The demonstrated influence of the graphene layer, with indications of the range of sensitivity and application, also suggests the possibility of further dynamic tunability of the system by modifying the graphene properties through electrostatic or electrochemical gating [55]. For a molecular sensing mechanism based on peak wavelength shifts, the metasurface-induced MIR peaks could be further sharpened by replacing the metal nanoantenna arrays with high-Q dielectrics and by introducing chirality into the geometrical design [36].
Empirical Metasurface Photoresponse Calculator for Reliable Precision Design
The main factors determining the peak photoresponse wavelengths and interaction strength were experimentally observed to be the metal length Lx, the lateral pitch Py, and the material properties at the metasurface. From [56], we can expect the resonance condition for a single one-dimensional metal antenna to occur at a wavelength λN set by the order N of the resonance, the effective mode refractive index neff, and the deviation δL of the effective metal length from the geometrical length Lx. Figure 6 shows the dependence of (λN/Lx) vs. (1/Py), revealing an overall trend for the metasurface arrays that can be approximated by a fit with extracted parameters of A(SiO2-Au) ≈ 4.12 and b(SiO2-Au) ≈ 1.07 for metasurfaces on SiO2 substrates. With the addition of graphene, we find A(SiO2-Au-G) ≈ 3.72 and b(SiO2-Au-G) ≈ 1.32. From this, we can estimate neff(SiO2-Au) ≈ 2.06 and neff(SiO2-Au-G) ≈ 1.86. This pitch-related shift effect can be quite substantial for subwavelength periodicities, e.g., for Py = 0.6 µm, ∆λ1 ≈ −40%. The extraction of accurate fitting relations can further be used to improve the design precision of the metasurface geometries, as sketched in the calculator below. This simple photoresponse design method was used to define the geometric parameters within this study to target specific wavelengths, for example in Figures 2 and 3. Although we focused here on 4.25 µm (CO2), Table 2 indicates the expected design parameters to target a range of other MIR wavelengths and molecules.
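A hedged sketch of such a design calculator is given below. The text does not reproduce the fitted relation explicitly, so the simplest linear trend of λN/Lx in 1/Py, λN/Lx ≈ A − b/Py (Py in µm), is assumed here; it is consistent with the quoted fit constants and roughly reproduces the ≈ −40% shift at Py = 0.6 µm. Only the constants A and b are taken from the text; the functional form, helper names, and example numbers are illustrative.

```python
# Fitted constants from the text; the linear-in-1/Py form is an assumption.
FITS = {
    "SiO2-Au":   {"A": 4.12, "b": 1.07},
    "SiO2-Au-G": {"A": 3.72, "b": 1.32},   # with graphene
}

def peak_wavelength(Lx_um, Py_um, stack="SiO2-Au-G"):
    """Estimate the first peak-response wavelength [um] for a given geometry."""
    p = FITS[stack]
    return Lx_um * (p["A"] - p["b"] / Py_um)

def design_length(target_um, Py_um, stack="SiO2-Au-G"):
    """Invert the same relation: antenna length [um] that targets a given wavelength."""
    p = FITS[stack]
    return target_um / (p["A"] - p["b"] / Py_um)

# Example: graphene-covered antenna targeting the CO2 band at 4.25 um with Py = 1.2 um
print(f"Lx ~ {design_length(4.25, 1.2):.2f} um")

# Pitch-induced blueshift at Py = 0.6 um relative to the large-pitch limit (cf. ~-40%)
A, b = FITS["SiO2-Au"]["A"], FITS["SiO2-Au"]["b"]
print(f"relative shift ~ {100 * (-(b / 0.6) / A):.0f} %")
```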
The physics of the blueshift due to the insertion of the graphene sheet into the metal metasurface can be interpreted from two perspectives: (1) the electromagnetic wave is strongly perturbed, transiently, by the graphene sheet; this, in turn, affects the free electrons in the metal nanoantenna, resulting in a stronger plasmonic effect and a blueshift; (2) viewing the graphene and the metal nanoantenna independently, the nanoantenna encloses a space in the form of a resonant cavity, and the insertion of the graphene sheet reduces this cavity volume so that the resonant frequency is increased, resulting in a blueshift. Since this effect is expected to be influenced by the graphene charge carrier density, it is anticipated that the spectral photoresponse wavelength of the hybrid graphene metasurfaces can be further tuned or modulated by electrostatic or electrochemical gating [38][39][40].
Figure 6. Dependence of the peak-response wavelengths on the metal length Lx and lateral pitch Py. Strong blueshifts are evident for Py < λ1, and where the gold metasurfaces are integrated with graphene. Analysis of these trends enables simple and effective targeted design of the metasurface geometries.
Table 2. Estimated metal metasurface geometries for midinfrared molecular targeting, for a range of metal lengths Lx and lateral pitch Py, with and without the presence of graphene.
Conclusions
In summary, we demonstrated simple, precise, and reproducible geometrical photoresponse tuning of hybrid graphene metasurfaces towards targeted molecular sensing. Systematic midinfrared photoresponse characterisation was enabled by a combination of electron beam lithography-based nanofabrication, micro-FTIR spectroscopy, and FDTD studies. Peak-response wavelengths were found to depend most critically on two geometrical parameters: the longitudinal metal nanoantenna length and the lateral pitch between the antennas. Substantial blueshifts were observed and characterised upon the integration of graphene with the metal metasurfaces, reaching up to ~40% for high-density metasurface structures. The careful interpretation of more than 100 unique structures enabled the development of a simple and precise set of design tools to ensure geometrically fine-tuned photoresponses for reproducible mid-infrared molecular targeting. The combination of hybrid metasurfaces and their detailed characterisation is important for the next generation of smart portable sensors and lab-on-chip technologies.
Data Availability Statement:
The data presented in this study are available on request from the corresponding authors.
"Materials Science",
"Physics",
"Chemistry",
"Engineering"
] |
Comparative genomics of 16 Microbacterium spp. that tolerate multiple heavy metals and antibiotics
A total of 16 different strains of Microbacterium spp. were isolated from contaminated soil and enriched on the carcinogen, hexavalent chromium [Cr(VI)]. The majority of the isolates (11 of the 16) were able to tolerate concentrations (0.1 mM) of cobalt, cadmium, and nickel, in addition to Cr(VI) (0.5–20 mM). Interestingly, these bacteria were also able to tolerate three different antibiotics (ranges: ampicillin 0–16 μg ml−1, chloramphenicol 0–24 μg ml−1, and vancomycin 0–24 μg ml−1). To gain genetic insight into these tolerance pathways, the genomes of these isolates were assembled and annotated. The genomes of these isolates not only have some shared genes (core genome) but also have a large amount of variability. The genomes also contained an annotated Cr(VI) reductase (chrR) that could be related to Cr(VI) reduction. Further, various heavy metal tolerance (e.g., Co/Zn/Cd efflux system) and antibiotic resistance genes were identified, which provide insight into the isolates’ ability to tolerate metals and antibiotics. Overall, these isolates showed a wide range of tolerances to heavy metals and antibiotics and genetic diversity, which was likely required of this population to thrive in a contaminated environment.
INTRODUCTION
Heavy metals, while naturally occurring, cause harm to both human and ecosystem health. Specifically, chromium is a mutagen and carcinogen, and its presence in the environment can be natural or due to anthropogenic activities, such as industrial manufacturing (e.g., metal plating and tanneries) and mining (chromite ore) (Barak et al., 2006; Brose & James, 2010; Cheng, Holman & Lin, 2012). In the environment, chromium can exist as hexavalent chromium [Cr(VI)], which is soluble and more toxic, or as insoluble trivalent chromium [Cr(III)] (Bartlett, 1991; Cheng, Holman & Lin, 2012). Thus, the redox cycling of chromium is critical to understanding its impacts on the environment. In natural systems, Cr(III) can be oxidized by manganese oxides and hydrogen peroxide, while Cr(VI) can be reduced by ferrous iron and hydrogen sulfides (Brose & James, 2010; Oze et al., 2004; Viti et al., 2013).
Many bacteria are able to tolerate Cr(VI) stress based on their ability to transport it outside the cell or enzymatically reduce it to the less toxic form Cr(III). The efflux pump, chrA, has been associated with providing Cr(VI) resistance by transporting it outside the cell (Cervantes & Ohtake, 1988;Cervantes et al., 1990;Nies, Nies & Silver, 1990). A 2008 study found 135 chrA orthologs of this efflux pump (Ramirez-Diaz et al., 2008); however, the presence of chrA is not always a sole predictor of the amount of Cr(VI) that bacteria can resist (Henne et al., 2009a). Bacteria can also have additional efflux pumps that provide resistance to other metals (Nies, 2003;Silver, 1996), which is advantageous as anthropogenically impacted sites are often contaminated with multiple stressors. Further, there is a well-known association between metal and antibiotic tolerance (Baker-Austin et al., 2006;Seiler & Berendonk, 2012;Wright et al., 2006). For example, Staphylococcus species use multiple efflux pumps to tolerate chromium, lead, and penicillin (Ghosh et al., 2000) and Salmonella abortus equi strains were able to tolerate chromium, cadmium, mercury, and ampicillin (Ug & Ceylan, 2003). Thus, efflux pumps can play a vital role in how bacteria thrive in contaminated ecosystems.
Bacteria are also known to catalyze the reduction of Cr(VI) aerobically. Specifically, certain bacteria use a soluble NADH/NADPH-dependent oxidoreductase to reduce Cr(VI) (Ackerley et al., 2004a;Barak et al., 2006;Cheung & Gu, 2007;Gonzalez et al., 2005;Park et al., 2000). Two well-studied Cr(VI) reductases are chrR and yieF. In Pseudomonas putida, chrR reduces Cr(VI) via the transfer of one electron, generating Cr(V), a reactive intermediate, which then requires a second electron transfer to generate the more stable insoluble Cr(III) (Park et al., 2000). Conversely, Escherichia coli uses yieF to catalyze the transfer of four electrons to reduce Cr(VI) (Ackerley et al., 2004b;Barak et al., 2006;Ramirez-Diaz et al., 2008).
The objective of this study was to examine the genomes of 16 Microbacterium spp. strains isolated from metal contaminated soil to identify their genetic potential and physiological ability to reduce Cr(VI) and to tolerate both heavy metals and antibiotics. A recent genomic study of four Cr(VI) reducing Microbacterium spp. identified two putative reductases (Henson et al., 2015), one genetically similar to chrR in Thermus scotoductus (Opperman, Piater & van Heerden, 2008) and the other to yieF of Arthrobacter sp. RUE61a (Niewerth et al., 2012); however, these putative reductases had low sequence similarity to the well-studied reductases found in Pseudomonas putida and E. coli. While other studies have shown Microbacterium isolates are able to reduce Cr(VI) (Humphries et al., 2005;Liu et al., 2012;Pattanapipitpaisal, Brown & Macaskie, 2001;Soni et al., 2014), there are still unknowns related to the genetics of Cr(VI) reduction and resistance. Further, the study by Henson et al. (2015) documented high levels of genomic variability between the isolates, which might be related to ecotypes. However, the conclusions of this study were limited based on the low number of genomes examined. Comparative genomics has been used in various environments and microorganisms to examine inter-strain variation and ecotypes (Briand et al., 2009;Coleman & Chisholm, 2010;Denef et al., 2010;Frangeul et al., 2008;Humbert et al., 2013;Martiny, Coleman & Chisholm, 2006;Meyer & Huber, 2014;Meyer et al., 2017). Thus, this study examined the genomic and physiological variability of heavy metal and antibiotic resistance of 16 Microbacterium strains from the same contaminated soil environment to elucidate the potential genomic flexibility of closely related strains.
Bacterial isolation
Isolation of Microbacterium spp. from soil is described in Kourtev, Nakatsu & Konopka (2009) and Henson et al. (2015). Bill Jervis from the Indiana Department of Transport provided site access (no permit was required) and the project did not involve endangered or protected species. Briefly, contaminated soil samples were collected from the Department of Transportation site in Seymour, IN, USA. Previous studies have documented that the site was contaminated with Pb (1,156 mg kg−1 soil), Cr (5,868 mg kg−1 soil), and hydrocarbons (toluene and xylenes: >200 mg g−1 soil) (Joynt et al., 2006; Kourtev, Nakatsu & Konopka, 2006; Nakatsu et al., 2005). Bacteria were initially isolated from the soil on 50% tryptic soy agar amended with 0.25 mM Cr(VI) (K2CrO4). From this, 16 isolates were further selected based on their ability to grow on at least 0.5 mM Cr(VI). The isolates were stored in glycerol stocks at −80 °C until further use.
Cr(VI) reduction
For Cr(VI) reduction experiments, isolates were grown in 25% tryptic soy broth (TSB) with 0.5 or 1 mM Cr(VI) at 30 °C and 225 rpm for 72 h. Two concentrations were used because certain isolates had relatively lower growth at 1 mM compared with other isolates. The cultures were pelleted and Cr(VI) reduction was determined using a colorimetric assay (Henson et al., 2015; Urone, 1955).
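For illustration, percent reduction can be obtained from the colorimetric readings with a simple standard-curve calculation, sketched below. A diphenylcarbazide-type standard curve is assumed from the cited method, and all absorbance values are hypothetical placeholders rather than data from this study.

```python
import numpy as np

# Hypothetical standard curve for the colorimetric Cr(VI) assay
std_conc = np.array([0.0, 0.1, 0.25, 0.5, 1.0])       # Cr(VI) standards [mM]
std_abs  = np.array([0.00, 0.08, 0.21, 0.42, 0.85])   # placeholder absorbances

slope, intercept = np.polyfit(std_conc, std_abs, 1)   # linear fit of the standard curve

def cr6_mM(absorbance):
    """Remaining Cr(VI) concentration [mM] estimated from a sample absorbance."""
    return (absorbance - intercept) / slope

initial_mM = 1.0                  # starting Cr(VI) in the culture
final_mM   = cr6_mM(0.10)         # hypothetical 72-h reading
print(f"Cr(VI) reduced: {100 * (initial_mM - final_mM) / initial_mM:.1f} %")
```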
DNA sequencing and bioinformatics
Isolates were grown in TSB with 0.5 or 2 mM Cr(VI) at 30 °C and 225 rpm for 24-72 h. The cultures were pelleted via centrifugation and stored at −20 °C until used in DNA extractions. FastDNA Spin Kits (MP Biomedical) were used to extract genomic DNA from the thawed pellets. The resulting DNA was stored at −20 °C. The DNA from the 16 bacterial isolates was sent to Cincinnati Children's Hospital Medical Center's Genetic Variation and Gene Discovery Core facility for whole-genome shotgun sequencing using one lane of an Illumina HiSeq 2000 (100 bp PE). The resulting raw genomic reads can be found in the National Center for Biotechnology Information's (NCBI) Short Read Archive (SRA), accession number: SRP120551.
Using multiple different read depths, MEGAHIT (Li et al., 2016) was used to assemble the data, which were then assessed for quality with QUAST (Gurevich et al., 2013). CheckM (Parks et al., 2015) was used to examine genome completeness and contamination (taxon marker Microbacterium). The resultant genomes were annotated by the Department of Energy's Joint Genome Institute Integrated Microbial Genomes (IMG) system (Markowitz et al., 2012) and are publicly available (see Table S1). A pangenomic analysis of 20 genomes (from this study and Henson et al., 2015) was conducted using the anvi'o program (version 3) (Eren et al., 2015), following the pangenomics workflow from Delmont & Eren (2018). The pangenomic analysis within anvi'o also utilized other programs such as HMMER (Eddy, 2011), Prodigal (Hyatt et al., 2010), and NCBI's blastp (Altschul et al., 1997). anvi'o was also used to define clusters of orthologous groups (COGs) for the pangenomic analysis using the command anvi-run-ncbi-cogs.
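A minimal sketch of how the assembly and quality-control steps above could be scripted is shown below. Tool names correspond to MEGAHIT, QUAST, and CheckM as cited, but the flags are typical defaults that may differ between versions, and all sample and file names are placeholders; the downstream anvi'o pangenomic steps are omitted here.

```python
import subprocess

def run(cmd):
    """Echo and run one pipeline step, stopping on failure."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

isolate = "K33"  # placeholder isolate name

# De novo assembly of paired-end reads with MEGAHIT
run(["megahit", "-1", f"{isolate}_R1.fastq.gz", "-2", f"{isolate}_R2.fastq.gz",
     "-o", f"{isolate}_megahit"])

# Assembly quality metrics with QUAST
run(["quast.py", f"{isolate}_megahit/final.contigs.fa", "-o", f"{isolate}_quast"])

# Completeness/contamination with CheckM, using genus-level Microbacterium markers
run(["checkm", "taxonomy_wf", "genus", "Microbacterium",
     f"{isolate}_bins", f"{isolate}_checkm", "-x", "fa"])
```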
All 16 16S rRNA gene sequences from the Cr(VI) isolates were downloaded from IMG. These genes were used to generate a maximum likelihood phylogenetic tree in MEGA7 (Kumar, Stecher & Tamura, 2016), and evolutionary history was inferred using the maximum likelihood Tamura-Nei model (Tamura & Nei, 1993). Within IMG, annotations were searched for genes related to Cr(VI) resistance and Cr(VI) reduction via keywords (e.g., efflux, reductase, and chromate). Annotated Cr(VI) reductase genes were compared via BLASTP (one-way comparison on NCBI's website) to a known Cr(VI)-reducing chrR from Pseudomonas putida KT2440 (NP_746257) (Ackerley et al., 2004b). Genes related to heavy metal resistance and antibiotic resistance were searched using functional categories. Beta-lactam, chloramphenicol, and vancomycin resistance genes were also identified using KEGG pathway KO identifiers. The COG category "Inorganic ion transport and metabolism" was used to find annotated heavy metal efflux genes and metal resistance genes.
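The keyword screen of the annotations can be expressed as a simple table filter, sketched below; the file name "annotations.tsv" and its columns ("genome", "gene_id", "product") are hypothetical stand-ins for an export of the IMG annotations.

```python
import pandas as pd

KEYWORDS = ["efflux", "reductase", "chromate", "beta-lactamase",
            "chloramphenicol", "vancomycin"]

ann = pd.read_csv("annotations.tsv", sep="\t")   # hypothetical IMG annotation export
pattern = "|".join(KEYWORDS)
hits = ann[ann["product"].str.contains(pattern, case=False, na=False)]

# Candidate resistance/reduction genes per genome
print(hits.groupby("genome")["gene_id"].count().sort_values(ascending=False))
```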
Metal and antibiotic tolerance assays
Bacterial isolates were grown on agar plates (10% TSB, 1.5% agar, and 0.5 mM Cr(VI)) at 30 °C until colonies were visible. Colonies were then streaked onto fresh agar plates amended with different concentrations of heavy metals (Cd 0-5 mM, Co 0-5 mM, Cr 0-20 mM, Cu 0-1 mM, Ni 0-15 mM, and Zn 0-1 mM). Duplicate plates were incubated at 30 °C for 14 days and growth was confirmed by the presence of visible colonies. All bacterial isolates were plated on each metal concentration in duplicate.
The Minimum Inhibitory Concentration (MIC) for ampicillin, chloramphenicol, and vancomycin was tested on each isolate using MIC test strips (Liofilchem®). To do this, each of the 16 Microbacterium spp. isolates was streaked onto agar plates (50% TSB and 1.5% agar) with 0.5 mM Cr and incubated at 30 °C for 5 days. Next, each isolate was suspended in a 0.85% sterile saline solution until it reached a density that approximated the 0.5 McFarland Turbidity Standard. The isolates were then spread uniformly on a Mueller-Hinton agar plate. A MIC test strip was placed in the center of each plate. The plates were incubated at 30 °C for 24 h. After the 24 h, the plates were removed and the MIC of each isolate for each antibiotic was determined by visually documenting the zones of inhibition.
Isolation of Cr(VI) reducing bacteria
Though originally isolated and studied for their ability to reduce Cr(VI) (Henson et al., 2015; Kourtev, Nakatsu & Konopka, 2009), it was speculated that these isolates may also be tolerant to other contaminants, because the soils were contaminated with other heavy metals and organic solvents (Joynt et al., 2006; Kourtev, Nakatsu & Konopka, 2006; Nakatsu et al., 2005). Among the 16 isolates studied here, a wide range of Cr(VI) tolerance and reducing ability was found (Table 1). The majority (13) of the isolates could tolerate 2 mM Cr(VI), while three of the isolates (K2B2, K36, and K40) could only tolerate 0.5 mM Cr(VI). Interestingly, five of the 13 isolates that were able to tolerate 2 mM Cr(VI) (A20, K24, K30, K33, and PF3) had observable colonies on plates containing up to 20 mM Cr(VI). Further, the 16 isolates also showed a wide range in Cr(VI) reduction (0-88.8%) over a 72-h incubation (Table 1). Only two isolates, K27 and K33, reduced over 80% of the Cr(VI) in the experiment. Interestingly, K27 reduced 83% of the Cr(VI) in liquid medium but was only able to tolerate 0.5 mM Cr(VI) when grown on solid medium, while K33 tolerated up to 20 mM Cr(VI) and reduced 88% of the Cr(VI) present. Two additional isolates, K31 and PF3, reduced over 50% of the Cr(VI) in the experiment. There was no statistical relationship between the isolates' ability to tolerate Cr(VI) and their ability to reduce it.
To gain a better understanding of the Cr(VI) tolerance of these isolates, their genomes were assembled and annotated. The isolate genomes ranged in total length from 3.31 to 4.23 Mb and all had GC content ranging from 68 to 70% (Table 2). A genomic study of 10 Microbacterium sp. documented similar GC ranges and genome lengths (Corretto et al., 2015). The assembled genomes were generally over 96.7% complete and all genomes had under 4% contamination (Table 2). Phylogenetic analysis of the 16S rRNA gene indicated that all of the isolates belong to the genus Microbacterium (Fig. S1; Table S2). Specifically, the isolates were closely related to Microbacterium oxydans, Microbacterium maritypicum, or Microbacterium paraoxydans. Microbacterium spp. have been isolated from sites contaminated with metals (Avramov et al., 2016; Bollmann et al., 2010; Corretto et al., 2015), radionuclides (Nedelkova et al., 2007), and petroleum (Avramov et al., 2016; Chauhan et al., 2013; Wang et al., 2014). Moreover, some environmental isolates, such as Microbacterium laevaniformans strain OR221, have been shown to tolerate multiple metals (Ni, Co, and Cd) (Bollmann et al., 2010). A genomic analysis of the latter strain provided evidence of genes (e.g., transporter and detoxification genes) that could aid the strain's ability to tolerate metals (Brown et al., 2012). Other Microbacterium spp. have been known to
The pan-genome size of the 20 isolates was 7,902 protein clusters, with the core genome encompassing 2,073 protein clusters (26%) (Fig. 1). The majority of the genes in the core genome of the 20 isolates (36%) were placed in the "Metabolism" COG (Fig. 2). Genes in the COG categories "Cellular processes and signaling" and "Information storage and processing" comprised 21% and 17%, respectively, of the core genome (Fig. 2). The flexible pangenome was dominated by genes in the COG categories "Metabolism" and "Poorly characterized/No assigned COG" (36% for both) (Fig. 2).
Figure 1. The outermost complete ring (labeled Num cont.) represents the number of genomes in which a gene occurs, with the core genome highlighted by an additional black bar. The internal rings represent each isolate's genome, with black indicating the presence of a gene in a genome (gray its absence). The bar charts show quality control measurements (total genome length [base pairs], GC content, and percent completion) and also the abundance of genes annotated as Co/Zn/Cd efflux system genes and antibiotic tolerance genes (beta-lactam, vancomycin, and chloramphenicol resistance). Full-size DOI: 10.7717/peerj.6258/fig-1
Relative to the core pangenome, the flexible pangenome had fewer genes in the COG categories "Cellular processes and signaling" and "Information storage and processing" (which comprised 21% and 17% of the core genome, respectively) (Fig. 2). The percentage of core genes in a genome can vary from 3% to 84% (McInerney, McNally & O'Connell, 2017). For example, the Bacillus cereus core genome is 27% of the pangenome, whereas the core genome of Bacillus anthracis is 65% (McInerney, McNally & O'Connell, 2017). An analysis of 20 Microcystis sp. showed the genomes were comprised of 34-49% core genes and 51-66% flexible genes (Meyer et al., 2017). A study on the genomes of 12 Prochlorococcus isolates found a core genome ranging from 40% to 67% (Kettler et al., 2007). The relatively large genomic variability seen with the Microbacterium spp. examined here could be due to selective pressures that drove gene loss or horizontal gene transfer, which in the end enhanced the ability of the isolates to survive in a contaminated sediment environment. The COG categories "Inorganic ion transport and metabolism (P)" and "Self-defense mechanism (V)" were examined due to their relation to heavy metal and antibiotic transport. The flexible genome has relatively higher percentages of genes in both the self-defense (1% of the core genome and 3% of the flexible genome) and inorganic ion transport and metabolism (5% of the core genome and 6% of the flexible genome) COG categories. Each of these COG categories includes predicted gene functions for heavy metal transport, antimicrobial resistance, and antibiotic efflux/transporters (Fig. 1). Interestingly, the core and flexible genomes contain different Co/Zn/Cd efflux system component genes and multidrug efflux pump genes, which suggests tolerance to heavy metals and antibiotics. The variability found here may be related to the ability of each isolate to tolerate different types and concentrations of heavy metals and antibiotics.
Genomic insights of Cr(VI) reduction
The annotated genomes from this study provided evidence that these isolates could reduce Cr(VI). All possess an annotated Cr(VI) reductase, chrR (Table S3). In addition, M. sp. K24 and K30 contained two annotated chrR genes. The Cr(VI) reductases were compared with a known Cr(VI) reductase from Pseudomonas putida KT2440 (Park et al., 2000), and BLASTP results showed that all the Microbacterium spp. chrR genes shared a high degree of homology (ranging from 40% to 46% identity) (Table S3). Thus, it is likely that these chrR genes are responsible for the Cr(VI) reduction ability of these isolates. Interestingly, other Microbacterium spp. isolated from this site did not have annotated Cr(VI) reductases, but BLAST searches were able to identify genes homologous to chrR and yieF (Henson et al., 2015), further suggesting high interspecies genetic diversity in the putative Microbacterium spp. Cr(VI) reductases found in the same soil. Cr(VI) tolerance might not be related to efflux pumps and Cr(VI) reductases. While the genomes did contain Cr(VI)-specific reductases, they did not contain a Cr(VI) efflux pump (chrA). The lack of an assembled and annotated chrA was interesting since the ability of the isolates to reduce and resist Cr(VI) did not correlate. While the lack of an assembled chrA does not confirm its absence from the genome, tolerance could be related to other genes. Cr(VI) tolerance has been shown to be related to genes involved in the oxidative stress response (Ackerley et al., 2004a; Cheng et al., 2009), DNA repair (Hu et al., 2005; Miranda et al., 2005), and metabolism (Brown et al., 2006; Decorosi et al., 2009; Henne et al., 2009b).
Tolerance to multiple heavy metals and antibiotics
The isolates' genomes all contained numerous genes that suggest these bacteria can tolerate multiple heavy metals (e.g., Co, Zn, and Cd) and antibiotics. When examining KEGG pathways that related to transport, all of the isolates had genes that were annotated to be Co/Zn/Cd efflux system components (Table S4). In addition, some isolates had a putative cadmium resistance protein (Table S4). KEGG pathways also documented genes that could provide tolerance to three different antibiotics (ampicillin, chloramphenicol, and vancomycin) ( Table S4).
The isolates' physiological ability to tolerate heavy metals was examined to determine whether the inferred resistance based on the putative metal transport genes could be confirmed. All the isolates were streaked onto plates with various concentrations of Cd, Co, Cr, Cu, Ni, and Zn. Overall, 11 of the 16 isolates tolerated between 0.1 and 1.0 mM of all six tested metals (Table 1). All 16 isolates were able to tolerate Cr, Ni, Zn, and Cu (Table 1). As the isolates were enriched with Cr, it was expected that they would be able to tolerate higher concentrations of Cr. Overall, the bacteria isolated in this study were able to tolerate many heavy metals, a finding seen in other studies. Microbacterium spp. enriched from Ni-rich serpentine soils have been previously documented to tolerate multiple heavy metals, including Cd (0.5-2.5 mM), Co (0.1-5 mM), Cr (1-5 mM), Cu (0.25-10 mM), Ni (5-15 mM), and Zn (5-10 mM) (Abou-Shanab, Van Berkum & Angle, 2007). Another study found that multiple isolates closely related to M. oxydans were able to tolerate Cu (0.25-16 mM), Cr (0.5-16 mM), and Ni (0.25-16 mM) (Nedelkova et al., 2007). As Microbacterium spp. have been isolated from contaminated environments, it is not surprising to find that they can tolerate heavy metal contamination. While this study did not attempt to confirm the function of the Co/Zn/Cd efflux system components found in these genomes (Table S4), the presence of these genes could be related to the ability of these isolates to tolerate multiple heavy metals. Whether in culture (e.g., Pseudomonas putida or Cupriavidus necator, formerly Alcaligenes eutrophus) (Manara et al., 2012; Nies, 1995) or in soil microcosms (Cabral et al., 2016), cobalt-zinc-cadmium efflux system proteins have been shown to provide tolerance to heavy metals. Thus, it is possible that the presence of these genes plays the same role in the isolates documented in this study.
There is a long history of research linking metal tolerance and antibiotic resistance (Baker-Austin et al., 2006; Calomiris, Armstrong & Seidler, 1984; Henriques et al., 2016; Seiler & Berendonk, 2012). In this study, 13 of the 16 isolates showed tolerance to ampicillin, chloramphenicol, and vancomycin (Table 3). Specifically, 13 out of 16 isolates were able to grow in the presence of ampicillin, with tolerances ranging from 1.5 to 16 µg ml−1 (Table 3). Chloramphenicol tolerance was found in 15 out of 16 isolates, of which four isolates, K24, K35, K36, and K5D, tolerated 24 µg ml−1 of the antibiotic (Table 3). Microbacterium isolates have been previously documented to tolerate various antibiotics. Microbacterium isolates from fish mucus have shown high antibiotic resistance to both ampicillin (>1,600 µg ml−1) and chloramphenicol (>960 µg ml−1) (Ozaktas, Taskin & Gozen, 2012). Human-isolated Microbacterium spp. have been reported to grow in the presence of vancomycin at concentrations ranging from 0.25 to 15 µg ml−1 (Gneiding, Frodl & Funke, 2008). Other clinical Microbacterium isolates had MICs for ampicillin from 1 to 1.5 µg ml−1 and vancomycin from 3 to 4 µg ml−1 (Laffineur et al., 2003). A 2015 study found 26% of Microbacterium isolates were resistant to vancomycin (Bernard & Pacheco, 2015). Metal and antibiotic co-tolerance has been reported in a number of environmentally isolated bacteria. Bacteria isolated from drinking water with noted antibiotic resistance were also tolerant to high levels of Cu2+, Pb2+, and Zn2+ (Calomiris, Armstrong & Seidler, 1984). Environmental isolates from wastewater treatment plants have been shown to tolerate various heavy metals and antibiotics (Shafique, Jawaid & Rehman, 2016). Bacillus spp. isolated from river water tolerated multiple heavy metals and antibiotics (Shammi & Ahmed, 2016). Co-tolerance to heavy metals, antibiotics, and polychlorinated biphenyls was also observed in isolates from Antarctic sediments (Giudice et al., 2013). Further, evidence of shared genetic tolerance mechanisms for metals and antibiotics was found in Staphylococcus aureus isolated from a polluted riverbank. S. aureus contained a novel emrAB operon encoding efflux pumps that were inducible by the heavy metals Cr(VI) and manganese and the antibiotics ampicillin and chloramphenicol (Zhang et al., 2016).
While this study does not prove function, the data suggest the antibiotic tolerances found in these isolates could be related to the antibiotic resistance genes documented in the genomes. For example, 15 out of 16 isolates were able to tolerate chloramphenicol matching the 15 genomes that contained an annotated chloramphenicol resistance gene (cmlR, cmx) (Table S4), a gene previously shown to provide Corynebacterium striatum with chloramphenicol tolerance (Schwarz et al., 2004;Tauch et al., 1998). It is possible cmx could therefore provide tolerance to the Microbacterium isolates. Similarly, all the Microbacterium isolates had an annotated class A beta-lactamase gene (penP), a gene that provides ampicillin tolerance to numerous bacteria (Bush & Jacoby, 2010), though not all isolates were resistant to ampicillin (Table 3).
Implications to community functions
All Microbacterium spp. were isolated from the same contaminated soil and have highly similar 16S rRNA genes (99-100% BLAST identity). Yet, these isolates displayed both genomic variation and varying abilities to tolerate multiple heavy metals and antibiotics. Other studies have shown that closely related isolates can exhibit variable ranges of genomic and physiological differences (Coleman & Chisholm, 2010; Henson et al., 2015; Hunt et al., 2008; Martiny, Coleman & Chisholm, 2006; Meyer & Huber, 2014; Qamar, Rehman & Hasnain, 2017; Rocap et al., 2003; Simmons et al., 2008; Welch et al., 2002). A study of 78 Myxococcus xanthus isolates from a small soil plot found 21 different genotypes (Vos & Velicer, 2006). Thus, it is speculated that genomic and physiological diversity would allow populations to differentiate, potentially increasing resistance and/or resilience that would aid the community to thrive during metal and other contaminant stress. Further support for this could be found when examining the Cr(VI) reduction and resistance data. While none of the isolates had an annotated Cr(VI) resistance efflux pump (chrA), they all contained a Cr(VI) reductase (chrR), yet there was variability found in their ability to reduce and resist Cr(VI). As the isolates from this study were from a soil with multiple contaminants, specific isolates that more efficiently reduce Cr(VI) (making it less bioavailable) may allow other community members (without resistance genes) to thrive and possibly degrade other contaminants. Thus, the variation observed in this population was likely required for it to thrive in this contaminated environment.
"Environmental Science",
"Biology",
"Chemistry"
] |
Debaryomyces hansenii, Stenotrophomonas rhizophila, and Ulvan as Biocontrol Agents of Fruit Rot Disease in Muskmelon (Cucumis melo L.)
The indiscriminate use of synthetic fungicides has led to negative impact to human health and to the environment. Thus, we investigated the effects of postharvest biocontrol treatment with Debaryomyces hansenii, Stenotrophomonas rhizophila, and a polysaccharide ulvan on fruit rot disease, storability, and antioxidant enzyme activity in muskmelon (Cucumis melo L. var. reticulatus). Each fruit was treated with (1) 1 × 106 cells mL−1 of D. hansenii, (2) 1 × 108 CFU mL−1 of S. rhizophila, (3) 5 g L−1 of ulvan, (4) 1 × 106 cells mL−1 of D. hansenii + 1 × 108 CFU mL−1 of S. rhizophila, (5) 1 × 108 CFU mL−1 of S. rhizophila + 5 g L−1 of ulvan, (6) 1 × 106 cells mL−1 of D. hansenii + 1 × 108 CFU mL−1 of S. rhizophila + 5 g L−1 of ulvan, (7) 1000 ppm of benomyl or sterile water (control). The fruits were air-dried for 2 h, and stored at 27 °C ± 1 °C and 85–90% relative humidity. The fruit rot disease was determined by estimating the disease incidence (%) and lesion diameter (mm), and the adhesion capacity of the biocontrol agents was observed via electron microscopy. Phytopathogen inoculation time before and after adding biocontrol agents were also recorded. Furthermore, the storability quality, weight loss (%), firmness (N), total soluble solids (%), and pH were quantified. The antioxidant enzymes including catalase, peroxidase, superoxide dismutase, and phenylalanine ammonium lyase were determined. In conclusion, the mixed treatment containing D. hansenii, S. rhizophila, and ulvan delayed fruit rot disease, preserved fruit quality, and increased antioxidant activity. The combined treatment is a promising and effective biological control method to promote the shelf life of harvested muskmelon.
Introduction
The muskmelon (Cucumis melo L.), belonging to the family Cucurbitaceae, is an important horticultural crop cultivated in temperate to arid regions in Asia (74%), America (11.9%), and Europe (7.2%), with a global production of 31,166,896 tons [1]. However, muskmelon is a climacteric ripening fruit, which deteriorates rapidly after harvesting because of pericarp browning and postharvest disease primarily induced by Alternaria alternata, Rhizopus stolonifer, Trichothecium roseum, and Fusarium spp. [2]. Postharvest fruit rot caused by Fusarium spp. is considered one of the main diseases that negatively impacts the quality, and influences the commercial acceptability and saleable stock of muskmelon [3]. Thus, muskmelon has a limited shelf life, which further limits their storage, transportation, and marketing [4]. Therefore, handling postharvest muskmelon, which is a key production concern, necessitates further research.
Many synthetic fungicides, such as acibenzolar-S-methyl, azoxystrobin, copper sulfate, imazalil, iprodione, and thiabendazole, are the most common commercial methods employed in muskmelon postharvest handling to retard fruit decay and prolong storage life [5,6]. Nonetheless, their indiscriminate use has led to residue accumulation in fruit, environmental pollution, carcinogenic risk to consumers, and pathogen resistance [7]. In addition, there is a trend to consume residue-free fruits, with stricter government regulations regarding agrochemical products [8]. Consequently, there is an essential need to find alternative methods such as biological control to inhibit decay in harvested fruit. Previous studies have shown that biological control by applying microbial antagonists, such as Bacillus subtilis [9], Burkholderia sp. [10], and Pseudomonas graminis [11], or by applying secondary metabolites such as phenylethyl alcohol from Trichoderma asperellum [12] and lactic acid from Lactobacillus plantarum [13], is a promising method for managing decay in harvested muskmelon.
Most microbial antagonists have been sourced from the fruit surface (epiphytic), but they can also be isolated from other nearby related areas, i.e., soil, roots, and the phyllosphere [14], or from distant sources such as extremophile environments [15]. The marine yeast Debaryomyces hansenii has shown significant results as a biocontrol agent through diverse mechanisms of action, such as competition for space (i.e., inhibition of spore germination) and nutrients, and secondary metabolite excretion (i.e., volatile organic compounds and lytic enzymes) [16,17]. The marine bacterium Stenotrophomonas rhizophila has shown significant results as a biocontrol agent through direct inhibition, excretion of volatile organic compounds, nutrient competition, and lytic enzymes [18,19]. Moreover, previous studies have demonstrated that D. hansenii and S. rhizophila are safe for humans [17,20].
However, microbial antagonists applied as a single treatment vary considerably in their efficiency and are less consistent at reaching the high levels (>95%) of disease control achieved by chemical fungicides [14]. Thus, integrated approaches could be the key to the successful development of safe and sustainable alternatives for effective postharvest disease management in fruits [21]. Ulvan, a polysaccharide isolated from the green algae Ulva spp., has been demonstrated to induce resistance with no direct activity against other microorganisms such as D. hansenii, S. rhizophila, and Fusarium proliferatum [19]. However, the effects of individual or mixed postharvest treatment with D. hansenii, S. rhizophila, and ulvan on the quality and storability of harvested muskmelon have not been studied before.
In this study, the effects of D. hansenii, S. rhizophila, and ulvan as individual or mixed treatments on fruit rot disease, storage quality, and antioxidant enzyme activity in muskmelon (Cucumis melo L. var. reticulatus) were investigated. The aim of this study was to develop an effective and safe biological control strategy for inhibiting fruit decay and prolonging the shelf life of muskmelon.
In Vivo Control Assay and Microscopic Visualization
The mixed treatment of D. hansenii, S. rhizophila, and ulvan significantly reduced the lesion diameter (3.5 mm) and significantly improved the DC (73.5%) of fruit rot induced by F. proliferatum in muskmelon compared with the individual treatments, and gave better results than benomyl (Figure 1). Nevertheless, muskmelon fruit inoculated with only ulvan presented the largest lesion diameter (16.3 mm) and the lowest DC (14.3%). By applying Abbott's formula, it was inferred that, in comparison with the single treatments, all the mixed treatments exhibited a synergistic effect on DC (Table 1). The mixed treatment with D. hansenii, S. rhizophila, and ulvan demonstrated the highest predicted synergistic effect. Scanning electron micrograph imaging demonstrated that the spores and mycelia of F. proliferatum appeared and grew normally on muskmelon fruit in the control treatment (Figure 2a). When treated with biological control agents (BCAs) as a single treatment, F. proliferatum cells developed adhesion capacity (Figure 2b) with limited mycelial growth (Figure 2c). In the mixed treatment with D. hansenii and S. rhizophila, the mycelial surface appeared abnormally shaped and notably damaged (Figure 2c).
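The synergy assessment can be made explicit with the additive expectation commonly applied alongside Abbott-corrected efficacies, sketched below: the expected disease control of a combination is E = X + Y − XY/100, applied iteratively over the components, and an observed DC above that expectation is scored as synergistic. The exact formula used by the authors is not reproduced in the text, and the single-treatment values below are placeholders except for the ulvan (14.3%) and mixed-treatment (73.5%) values quoted above.

```python
def expected_dc(*single_dc):
    """Expected disease control (%) of a combination under the additive model."""
    expected = 0.0
    for x in single_dc:
        expected = expected + x - expected * x / 100.0
    return expected

dh, sr, ulvan = 40.0, 35.0, 14.3   # D. hansenii and S. rhizophila values are placeholders
observed = 73.5                    # mixed D. hansenii + S. rhizophila + ulvan (from the text)

exp = expected_dc(dh, sr, ulvan)
verdict = "synergistic" if observed > exp else "additive or antagonistic"
print(f"expected {exp:.1f} %, observed {observed:.1f} % -> {verdict}")
```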
Effect of Biocontrol Treatment Time on Their Biocontrol Efficacy
The treatment time of D. hansenii, S. rhizophila, and ulvan, before or after the inoculation of F. proliferatum, significantly affected DC (Table 2) and lesion diameter (Table 3). All muskmelon fruits treated before inoculating F. proliferatum had higher DC and smaller lesion diameters than those treated after inoculating the phytopathogen. The longer the treatment time of the BCAs and ulvan before F. proliferatum inoculation, the higher the DC and the smaller the lesion diameter. The fruit inoculated with the mixed treatment of D. hansenii, S. rhizophila, and ulvan 24 h before the inoculation of F. proliferatum presented the best results in DC (87.6%) and reduction in lesion diameter (1.7 mm). The fruits treated with ulvan 24 h after inoculating F. proliferatum had the lowest DC and the largest lesion diameter (27.7 mm). The DC and lesion diameter of muskmelon fruit treated with benomyl before or after F. proliferatum inoculation did not differ significantly. The results showed that D. hansenii, S. rhizophila, and ulvan are effective as preventive treatments rather than curative treatments.
Efficacy of Biocontrol Treatments on Natural Fruit Rot Development and Fruit Quality Parameters
Muskmelon fruits were dipped in either single or mixed treatments containing D. hansenii, S. rhizophila, and ulvan to assess natural fruit rot development and quality parameters. After 7 d of storage, DI was significantly reduced by all treatments in comparison with the control treatment (70%) (Table 4). Muskmelon fruit immersed in the mixed treatment of BCAs and ulvan had the lowest DI (8.3%), which was even lower than that of benomyl (10%). All the mixed treatments had lower DI values than the single treatments. Regarding quality parameters, muskmelon fruit immersed in benomyl lost a significant amount of weight (0.68 g) and firmness (4.1 N) in comparison with those immersed in BCAs and ulvan as mixed or single treatments. Furthermore, no significant difference in TSS was observed between the muskmelon treatments. Muskmelon fruit immersed in solutions of mixed treatments and in the single treatment containing ulvan had the lowest pH values.
Table 4. Efficacy of D. hansenii, S. rhizophila, and ulvan on natural fruit rot development and fruit quality parameters.
Antioxidant Enzymatic Activity on Muskmelon Fruit after Biocontrol Treatments
The antioxidant enzymatic activity in muskmelon was measured after treatment with single or mixed solutions containing D. hansenii, S. rhizophila, and ulvan (Figure 3). SOD activity increased significantly in muskmelon fruit 4 and 6 d after inoculation with the mixed treatments of BCAs and ulvan (Figure 3a). In all muskmelon fruits, SOD activity decreased considerably during the first 2 d of incubation and increased to the maximum activity level after 6 d of incubation. CAT activity in muskmelon fruits significantly decreased in all the treatments during the first 6 d of incubation and slightly increased after 8 d (Figure 3b). However, single and mixed treatments with D. hansenii, S. rhizophila, and ulvan maintained a higher CAT activity than the control treatment. The POX activity in muskmelon significantly increased with the single BCA treatments after 6 d of inoculation in comparison with the control treatment (Figure 3c). The highest POX activity was quantified 4 d after inoculating muskmelon fruit with the mixed treatment of D. hansenii, S. rhizophila, and ulvan. In all treatments, POX decreased gradually after incubating for 4 d. PAL activity significantly increased in all muskmelon fruits compared with the control treatment (Figure 3d). The highest PAL activity was quantified 2 d after inoculation with the mixed treatment containing D. hansenii, S. rhizophila, and ulvan, and was maintained throughout the incubation period.
Discussion
Since publication of the first report on using Bacillus subtilis to treat brown rot caused by Monilinia fructicola on peaches in 1985, the use of microbial antagonists (i.e., yeast, bacteria, and fungi) as BCAs has been promoted as an alternative to chemical products [22]. Nonetheless, BCAs exhibit certain limitations because they are usually effective against specific hosts and well-defined phytopathogens, and are also affected by adverse environmental conditions [23]. Moreover, BCAs individually cannot eradicate established infections and cannot provide broad-spectrum disease control compared with chemical fungicides [24]. Additionally, BCAs must demonstrate a control efficiency comparable to that of conventional fungicides to be considered a promising alternative [25]. Thus, combining BCAs with compatible physical or chemical treatments has been investigated in recent years to enhance their individual performance through a synergistic effect [8]. Previous studies have developed several alternative and compatible treatments, including physical treatments [26], resistance inducers [27], food additives [28], essential oils [29], low fungicidal doses [30], and mixed antagonist cultures [31].
In this study, the results demonstrated that mixed treatments containing BCAs and ulvan significantly enhanced the biocontrol of fruit rot disease in muskmelon compared with the single treatments. Mixed treatments with resistance inducers have been evaluated previously to enhance the activity of BCAs [27,32-35]. In a previous study, methyl jasmonate was applied in a mixed treatment to enhance the biocontrol effect of Meyerozyma guilliermondii; the combined treatment reduced the disease incidence (21%) in comparison with the individual yeast treatment (42%), affected the fungal morphology, and upregulated resistance-related enzymes. Mixed treatments that include BCAs and resistance inducers are better than individual BCA treatments because of their wide spectrum of action and better efficiency for expanded disease control under a wide range of environmental conditions [36]. In this study, the compatible activity of the mixed treatment could be attributed to the different ecological requirements of the two BCAs [37], with ulvan not directly affecting these microorganisms [38].
Furthermore, the inoculation-time results in this study indicate that the reduced lesion size and higher DC are related to the high reproduction rate of the BCAs compared with that of the phytopathogen, as the antagonists rapidly colonize the tissue during pre-treatment [37,39]. The ulvan inoculation time indicates that the reduction in lesion size and the decrease in disease are related to its ability to induce resistance and priming in fruits [40]. Therefore, the BCAs proposed in this study should be used as a pretreatment to counteract melon fruit rot caused by F. proliferatum. Zhao et al. [41] obtained similar results, wherein the efficiency of Pichia guilliermondii against Rhizopus nigricans was better when tomato fruits were treated 24 h before inoculating the phytopathogen. Besides, Lima et al. [42] reported that the combination of Wickerhamomyces anomalus and Meyerozyma guilliermondii inoculated 12 and 24 h before Colletotrichum gloeosporioides inoculation reduced the disease incidence by 13.8% and 30%, respectively.
BCAs colonize the fruit host more effectively and limit the available space and nutrients when they are inoculated before the phytopathogen (Figure 1). Thus, studying the effect of inoculation timing on the effectiveness of BCAs is essential to develop postharvest control strategies [14]. The time-related experiments in this study demonstrate the importance of applying BCAs immediately after fruit harvesting to control postharvest diseases and to preserve quality parameters. The ability of BCAs to act as a preventive treatment rather than a corrective one is closely related to nutrient competition mechanisms [22,43-45].
The innate resistance to postharvest fungal decay is closely related to certain physiological parameters, such as senescence, with which it markedly decreases [46]. In a previous study, the effectiveness of Pichia membranifaciens as an antagonist against Penicillium expansum in peach fruit could be enhanced by adding 0.2 g L−1 of benzo-(1,2,3)-thiadiazole-7-carbothioic acid S-methyl ester without reducing quality parameters [47]. In this study, the mixed treatments of BCAs and ulvan significantly decreased the natural disease incidence and preserved the firmness and weight of muskmelon. Initially, the TSS content in the fruit increased, probably due to the degradation of insoluble polysaccharides to simple sugars, and later decreased with increasing storage time, a change related to the increased respiration rate [48]. Furthermore, the respiration rate in muskmelon was delayed by the mixed treatment of BCAs and ulvan, as reflected in the increase in TSS post-treatment. The pH of muskmelon fruit increases from an initial pH of 5.3 to 6.8 during fruit ripening [46]. Moreover, the enzyme polygalacturonase, which is associated with F. proliferatum pathogenicity and virulence, acts more efficiently after this increase in pH during muskmelon fruit ripening [49,50]. In our results, the mixed treatment with BCAs and ulvan had the lowest pH values, which could be associated with the lowest DI and lesion diameter presented previously in Section 2.3.
The efficiency of D. hansenii, S. rhizophila, and ulvan as single or mixed treatments to control muskmelon fruit rot caused by F. proliferatum could be related to an increase in the defense response mechanisms of the fruit (i.e., priming, PR protein synthesis, and oxidative burst) [51]. Debaryomyces hansenii reportedly induces resistance in citrus fruits by increasing the synthesis of phytoalexins [52], molecules that confer resistance in fruits against fungal phytopathogens [53]. The results obtained in this study are the first to report the induction of antioxidant enzymes in muskmelon by S. rhizophila to reduce the rot caused by F. proliferatum. Dumas et al. [54] determined that the defense induction in Medicago truncatula by ulvan is mediated by the jasmonate signaling pathway. In rice and wheat, ulvan induces priming and increases the first oxidative burst, increasing resistance against mildew [55]. Cluzet et al. [56], using microarrays, concluded that ulvan increases the expression of genes coding for phytoalexins, PR proteins, and structural proteins. In this study, ulvan moderately contributed to the control of disease incidence; its effect is attributed to the induction of systemic acquired resistance (SAR) and a priming mechanism, which operate after induced systemic resistance (ISR) [54-56]. However, elucidating the mechanisms involved in resistance induction in melon fruits by D. hansenii, S. rhizophila, and ulvan requires further exhaustive investigation.
In previous reports, resistance induction was evidently promoted in melon fruits [57-59]. One of the initial defense responses against pathogens is the oxidative burst, which increases reactive oxygen species (O2− and H2O2) [60]. Although reactive oxygen species can contribute to defense in fruits, they can be degraded by antioxidant enzymes such as CAT, SOD, and POX [61]. CAT converts H2O2 to O2 and H2O, and POX degrades H2O2 by oxidizing phenolic compounds [62]. High levels of these enzymes are associated with reduced oxidative damage and delayed senescence [63]. PAL activity can increase as part of the response mechanisms to numerous stress factors in the fruit [64]. According to Jetiyanon [65], the increase in PAL activity obtained through biocontrol can sufficiently inhibit pathogen invasion, and reduce disease incidence and lesion diameter.
Marine Microbial Antagonists Source and Concentration
Debaryomyces hansenii and S. rhizophila were obtained from the Phytopathology laboratory of Centro de Investigaciones Biologicas del Noroeste (CIBNOR), La Paz, Baja California Sur, Mexico. Debaryomyces hansenii and S. rhizophila were maintained and stored on potato dextrose agar (PDA; 39 g L −1 ) and trypticase soy agar (TSA, 40 g L −1 ) plates, respectively, at 4 °C. Liquid cultures of D. hansenii and S. rhizophila were prepared in 250 mL Erlenmeyer flasks containing 50 mL of potato dextrose broth (PDB, 39 g L −1 ) and trypticase soy broth (TSB, 40 g L −1 ), respectively, and were incubated at 27 °C for 24 h on a rotary shaker set at 180 rpm. Debaryomyces hansenii concentration was adjusted to 1 × 10 6 cells mL −1 using a hemocytometer, and S. rhizophila concentration was adjusted to 1 × 10 8 CFU mL −1 using a UV/Vis spectrophotometer (HACH, Dusseldorf, Germany) at 660 nm and an absorbance of 1. Debaryomyces hansenii and S. rhizophila were adjusted to these concentrations prior to use in each of the following experiments.
Chemical Treatments Source and Concentration
Ulvan (OligoTech ® , Elicityl Ltd., Crolles, France) solution was prepared by dissolving 5 g L −1 ulvan in sterile deionized water. The chemical fungicide benomyl was used at 1000 ppm. Ulvan and benomyl were adjusted to these concentrations prior to use in each of the following experiments.
Fusarium proliferatum Source and Concentration
Fusarium proliferatum was isolated from infected muskmelon fruit (Cucumis melo L. var. reticulatus) [38] and provided by CIBNOR. The fungus was stored on PDA at 4 °C. Prior to use, the culture was reactivated, and its pathogenicity was assessed by re-inoculating it into wounded melon fruits, from which it was subsequently re-isolated onto PDA after infection was established. Spore suspensions were obtained from 10-d-old cultures maintained on PDA at 25 °C, and the spore concentration was determined using a hemocytometer and adjusted to 1 × 10 4 spores mL −1 with sterile distilled water (SDW) containing 0.05% (v/v) Tween 80. Fusarium proliferatum was adjusted to this concentration prior to use in each of the following experiments.
Muskmelon Fruit Source and Pre-Treatment
Muskmelon (Cucumis melo L. var. reticulatus) fruit were sampled from a commercial orchard at El Pescadero, Baja California Sur, Mexico. Fruits without mechanical injury or disease symptoms, at physiological maturity, and of uniform size were chosen for the experiments. The fruit surface was disinfected with 1% sodium hypochlorite, washed with SDW, and air-dried at 27 °C.
In Vivo Biocontrol Assay and Microscopic Visualization
The biocontrol activity of D. hansenii, S. rhizophila, and ulvan was tested according to the method described by Zhang et al. [66]. Six equidistant wounds of 3-mm diameter were created on each fruit and inoculated with 20 µL of the following treatments: (1) D. hansenii, (2) S. rhizophila, (3) ulvan, (4) D. hansenii + S. rhizophila, (5) D. hansenii + ulvan, (6) S. rhizophila + ulvan, (7) D. hansenii + S. rhizophila + ulvan, and (8) benomyl. The fruits were dried for 2 h and then each wound was inoculated with 20 µL of an adjusted suspension of F. proliferatum. The treatment concentrations were adjusted as described in Sections 4.1-4.3. Fruit were incubated at 27 °C and 90% relative humidity (RH) for 7 d. Disease control (DC) and lesion diameter (mm) were measured. The DC was calculated using the following formula: where Fi is the number of infected fruits in each treatment, and Tf is the total number of infected fruits in the control treatment.
The advantage of the in vivo mixed biocontrol treatments was assessed with respect to the individual treatments (D. hansenii, S. rhizophila, and ulvan) and the type of interaction (additive, synergistic, or antagonistic). The synergy factor (SF) was calculated according to Abbott's formula [67]: where DC and DCE are the observed and expected disease control (%) of the mixed treatments, respectively. DCE was calculated using the following formula: where DCa, DCb, and DCc are the DCs of postharvest D. hansenii, S. rhizophila, and ulvan as single treatments, respectively. For microscopic visualization, tissue samples of approximately 0.5 cm2 were collected from the in vivo biocontrol assay and fixed as described by Rivas-Garcia et al. [37]. Samples were examined by scanning electron microscopy (SEM) (Hitachi, S-3000N, Tokyo, Japan). Each treatment was represented by five replicates with three fruits per replicate.
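The excerpt above refers to the DC, expected DC (DCE), and synergy factor formulas without reproducing them. The short Python sketch below illustrates one common way these quantities are computed; the exact expressions used in the study may differ, and the forms shown (DC = (1 − Fi/Tf) × 100, a Colby/Abbott-type combination for DCE, and SF = DC/DCE with SF > 1 indicating synergy) as well as the example numbers are assumptions.

```python
# Minimal sketch of the disease-control and synergy-factor calculations.
# The formulas and numbers below are assumptions, not the study's own values.

def disease_control(fi: float, tf: float) -> float:
    """DC (%) assuming DC = (1 - Fi/Tf) * 100, with Fi = infected fruits in a
    treatment and Tf = infected fruits in the untreated control."""
    return (1.0 - fi / tf) * 100.0

def expected_dc(*single_dcs: float) -> float:
    """Expected DC (%) of a mixture: Abbott/Colby-type combination of the
    single-treatment DCs."""
    expected = 0.0
    for dc in single_dcs:
        expected = expected + dc - expected * dc / 100.0
    return expected

def synergy_factor(observed_dc: float, *single_dcs: float) -> float:
    """SF = observed DC / expected DC; SF > 1 suggests synergy, ~1 additivity,
    < 1 antagonism."""
    return observed_dc / expected_dc(*single_dcs)

# Hypothetical single-treatment DCs of 30 %, 40 % and 14 %, with the mixture
# observed at 73.5 % DC (the value reported for the triple treatment).
print(expected_dc(30, 40, 14))            # expected DC of the mixture (~63.9 %)
print(synergy_factor(73.5, 30, 40, 14))   # > 1 -> synergistic
```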
Effect of Biocontrol Treatment Time on the Control of Fruit Rot Disease
The in vivo effect of the treatment time of D. hansenii, S. rhizophila, and ulvan on the suppression of F. proliferatum on muskmelon was assessed following the method described by Zhimo et al. [68] with some modifications. Muskmelon fruits were collected and prepared as described in Section 2.4 and inoculated with 20 µL suspensions of the following treatments: (1) D. hansenii, (2) S. rhizophila, (3) ulvan, (4) D. hansenii + S. rhizophila, (5) D. hansenii + ulvan, (6) S. rhizophila + ulvan, (7) D. hansenii + S. rhizophila + ulvan, and (8) benomyl, either prior to (2, 12, and 24 h) or after (12 and 24 h) inoculating 20 µL of F. proliferatum. The treatment concentrations were adjusted as described in Sections 4.1-4.3. The experiments were performed as previously described in Section 4.5. The fruits were dried for 2 h and then incubated at 27 °C and 90% RH for 7 d. The DC and lesion diameters (mm) were measured. Each treatment was represented by five replicates with three fruits per replicate.
Efficacy of Biocontrol Treatments on Natural Fruit Rot Development and Fruit Quality Parameters
Muskmelon fruits were collected and immersed, without a pre-treatment (Section 2.4), into 2 L plastic containers with the following treatments for 2 min: (1) D. hansenii, (2) S. rhizophila, (3) ulvan, (4) D. hansenii + S. rhizophila, (5) D. hansenii + ulvan, (6) S. rhizophila + ulvan, (7) D. hansenii + S. rhizophila + ulvan, and (8) benomyl. The fruits were dried for 2 h and then each wound was inoculated with 20 µL of an adjusted suspension of F. proliferatum. The treatment concentrations were adjusted as described in Sections 4.1-4.3. Fruits were incubated at 27 °C and 90% relative humidity (RH) for 7 d. The percentage of disease incidence (DI) was calculated using the formula: The quality parameters measured in muskmelon included weight loss (%), fruit firmness (N), total soluble solids (%), and pH. For weight loss estimation, muskmelon fruit was weighed before and after storage. Firmness was measured by compressing the muskmelon fruit on two opposite sides along the equatorial region, after applying a load of 9.8 N with a texture analyzer. For total soluble solids (TSS) and pH, 10 g of muskmelon fruit was macerated to obtain fruit juice. TSS was determined using a digital Abbe refractometer (PR-32, Atago Co., Tokyo, Japan) at room temperature. The pH was measured using a digital pH meter (PHS-550; Lohand Co., Hangzhou, China). Each treatment was represented by five replicates with three fruits per replicate.
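The DI formula is not reproduced in the excerpt. The minimal sketch below assumes the usual definition, DI = (number of decayed fruits / total fruits) × 100, and shows how weight loss can be expressed both in grams (as reported in Table 4) and as a percentage; all input values are hypothetical.

```python
# Sketch of the disease-incidence and weight-loss calculations; formulas and
# example values are assumptions, not the study's own data.

def disease_incidence(n_infected: int, n_total: int) -> float:
    """DI (%) after storage."""
    return 100.0 * n_infected / n_total

def weight_loss(weight_before_g: float, weight_after_g: float) -> tuple[float, float]:
    """Return absolute loss (g) and relative loss (%) over the storage period."""
    loss_g = weight_before_g - weight_after_g
    return loss_g, 100.0 * loss_g / weight_before_g

print(disease_incidence(1, 12))      # hypothetical: 1 of 12 fruits decayed -> 8.3 %
print(weight_loss(850.0, 849.32))    # hypothetical: 0.68 g absolute loss
```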
Antioxidant Enzymatic Activity on Muskmelon Fruit
Muskmelon fruits were collected and prepared as described above (Section 2.4). Six equidistant wounds of 3-mm diameter in each fruit were inoculated with 20 µL of the following treatments: (1) D. hansenii, (2) S. rhizophila, (3) ulvan, (4) D. hansenii + S. rhizophila, (5) D. hansenii + ulvan, (6) S. rhizophila + ulvan, and (7) D. hansenii + S. rhizophila + ulvan. SDW was used as the control. The treatment concentrations were adjusted as described in Sections 4.1 and 4.2. The fruits were dried for 2 h and incubated at 27 °C and 90% relative humidity (RH) for 8 d. Tissues adjacent to the inoculated area were sampled with a scalpel every 2 d (1 × 1 cm, length and depth) and stored at −80 °C until enzymatic quantification. The collected samples were disrupted using liquid nitrogen and suspended in chilled phosphate buffer (0.1 M, pH 7.4) for catalase (CAT), peroxidase (POX), and superoxide dismutase (SOD) estimation, and in chilled sodium borate buffer (0.1 M, pH 8) for phenylalanine ammonia lyase (PAL) quantification. The homogenate samples were centrifuged at 10,000× g for 20 min at 4 °C, and the supernatant was subjected to the enzymatic assays. CAT, POX, SOD, and PAL activities were measured using commercial assay kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China).
Protein content was determined using the Bradford assay, with a standard curve plotted using bovine serum albumin [69]. One unit of CAT activity is defined as the amount of enzyme that reacts with 1 nmol of formaldehyde per min and is expressed in min mg−1 of protein [70]. One unit of POX activity is defined as the amount of enzyme that causes the formation of tetraguaiacol in the presence of H2O2 per min and is expressed in U mg−1 of protein [71]. One unit of SOD activity is defined as the amount of enzyme necessary to inhibit 50% of the O2− reaction in the presence of nitro-blue tetrazolium reagent (NBT) and is expressed as U mg−1 of protein [72]. One unit of PAL activity is defined as µmol of cinnamic acid formed per minute per milligram of protein (µmol min−1 mg−1 of protein) [73]. Each treatment was represented by five replicates with three fruits per replicate.
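As a rough illustration of the protein-normalization step, the sketch below fits a linear BSA standard curve and converts a volumetric enzyme activity into a specific activity (U mg−1 protein). The standard concentrations, absorbances, and sample values are hypothetical and do not come from the kit protocols actually used.

```python
import numpy as np

# Fit a Bradford (595 nm) standard curve with BSA and use it to normalise an
# enzyme activity to U per mg of protein. All numbers are hypothetical.

bsa_mg_ml = np.array([0.0, 0.125, 0.25, 0.5, 1.0])    # BSA standards (mg/mL)
a595      = np.array([0.02, 0.11, 0.21, 0.40, 0.78])  # measured absorbances

slope, intercept = np.polyfit(bsa_mg_ml, a595, 1)      # least-squares line

def protein_mg_ml(absorbance: float) -> float:
    """Interpolate sample protein concentration from the standard curve."""
    return (absorbance - intercept) / slope

def specific_activity(units_per_ml: float, sample_a595: float) -> float:
    """Enzyme activity expressed per mg of protein (U mg^-1)."""
    return units_per_ml / protein_mg_ml(sample_a595)

print(specific_activity(12.0, 0.35))   # e.g. 12 U mL^-1 in a ~0.44 mg mL^-1 extract
```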
Data Analysis
One-way analysis of variance (ANOVA) was performed to analyze the data using STATISTICA software (version 10.0; StatSoft, Tulsa, OK, USA). Fisher's post hoc least significant difference test (p ≤ 0.05) was used to compare means.
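A comparable analysis can be reproduced outside STATISTICA; the sketch below runs a one-way ANOVA with SciPy and then applies Fisher's LSD using the pooled within-group mean square, which is the standard construction of that test. The group data are hypothetical.

```python
import itertools
import numpy as np
from scipy import stats

# Hypothetical DC values (%) for three treatment groups, five replicates each.
groups = {
    "control":      np.array([12.0, 15.5, 14.1, 13.2, 16.0]),
    "BCAs":         np.array([55.2, 58.7, 61.0, 57.5, 60.1]),
    "BCAs + ulvan": np.array([72.8, 75.4, 71.9, 74.0, 73.5]),
}

f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Fisher's least significant difference, using the within-group mean square.
n_total = sum(len(v) for v in groups.values())
df_error = n_total - len(groups)
mse = sum(((v - v.mean()) ** 2).sum() for v in groups.values()) / df_error
t_crit = stats.t.ppf(1 - 0.05 / 2, df_error)

for (name_a, a), (name_b, b) in itertools.combinations(groups.items(), 2):
    lsd = t_crit * np.sqrt(mse * (1 / len(a) + 1 / len(b)))
    diff = abs(a.mean() - b.mean())
    print(f"{name_a} vs {name_b}: |diff| = {diff:.1f}, LSD = {lsd:.1f}, "
          f"significant = {diff > lsd}")
```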
Conclusions
In this study, the mixed pre-treatment with D. hansenii, S. rhizophila, and ulvan enhanced the biocontrol of fruit rot disease in muskmelon, delayed natural fruit rot, lowered the percentages of decay and weight loss, maintained higher activities of antioxidant and defense-related enzymes (CAT, POX, SOD, and PAL), and preserved fruit quality (firmness, TSS, and pH). These results provide convincing evidence that postharvest treatment using 1 × 10 6 cells mL −1 of D. hansenii, 1 × 10 8 CFU mL −1 of S. rhizophila, and ulvan confers higher disease resistance, better storability of harvested muskmelon fruit, and higher retained fruit quality, suggesting that a postharvest mixed treatment containing BCAs and ulvan is a promising, safe, and effective biological control method for extending the storage life of harvested muskmelon fruit. Omics technologies such as metatranscriptomics and metagenomics will be central to future studies of the complex tri-trophic interactions among microbial antagonists, the fruit host, and the pathogen. | 6,895 | 2022-01-01T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Molecular Characterization of Dengue Virus Serotype 2 Cosmopolitan Genotype From 2015 Dengue Outbreak in Yunnan, China
In 2015, a dengue outbreak with 1,067 reported cases occurred in Xishuangbanna, a city in China that borders Burma and Laos. To characterize the virus, the complete genome sequence was obtained and phylogenetic, mutation, substitution and recombinant analyses were performed. DENV-NS1 positive serum samples were collected from dengue fever patients, and complete genome sequences were obtained through RT-qPCR from these serum samples. Phylogenetic trees were then constructed by maximum likelihood phylogeny test (MEGA7.0), followed by analysis of nucleotide mutation and amino acid substitution. The recombination events among DENVs were also analyzed by RDP4 package. The diversity analysis of secondary structure for translated viral proteins was also performed. The complete genome sequences of four amplified viruses (YNXJ10, YNXJ12, YNXJ13, and YNXJ16) were 10,742, 10,742, 10,741, and 10,734 nucleotides in length, and phylogenetic analysis classified the viruses as cosmopolitan genotype of DENV-2. All viruses were close to DENV Singapore 2013 (KX380828.1) and DENV China 2013 (KF479233.1). In comparison to DENV-2SS (M29095), the total numbers of base substitutions were 712 nt (YNXJ10), 809 nt (YNXJ12), 772 nt (YNXJ13), and 841 nt (YNXJ16), resulting in 109, 171, 130, and 180 amino acid substitutions in translated regions, respectively. In addition, compared with KX380828.1, there were 44, 105, 64, and 116 amino acid substitutions in translated regions, respectively. The highest mutation rate occurred in the prM region, and the lowest mutation rate occurred in the NS4B region. Most of the recombination events occurred in the prM, E and NS2B/3 regions, which corresponded with the mutation frequency of the related portion. Secondary structure prediction within the 3,391 amino acids of DENV structural proteins showed there were 7 new possible nucleotide-binding sites and 6 lost sites compared to DENV-2SS. In addition, 41 distinct amino acid changes were found in the helix regions, although the distribution of the exposed and buried regions changed only slightly. Our findings may help to understand the intrinsic geographical relatedness of DENV-2 and contribute to the understanding of viral evolution and its impact on the epidemic potential and pathogenicity of DENV.
INTRODUCTION
Dengue virus (DENV) belongs to the Flavivirus genus and is transmitted by Aedes aegypti and Ae. albopictus mosquitoes, found in tropical and subtropical regions of the world (Bhatt et al., 2013). DENV annually infects approximately 50 million people in more than 100 countries (San Martín et al., 2010). The WHO declared that, along with climate change, economic integration and migration have contributed to the expanded geographical range of DENV over the past decade (WHO, 2015).
There are four serotypes (DENV-1/2/3/4) that are closely related but nonetheless antigenically and genetically distinct. Each DENV serotype is further subdivided into several phylogenetically distinct genotypes (Weaver and Vasilakis, 2009). The serotypes are distinguished by differences in antigenicity, and the genotypes are identified from phylogenetic trees of DENV gene sequences. The genome of DENV is a linear, non-segmented, positive-sense RNA of approximately 10.6-11 kb with a molecular weight of 4.2 × 10 6 (Dash et al., 2015). The full-length polyprotein is processed by viral and host proteases into seven non-structural proteins (NS1, NS2A, NS2B, NS3, NS4A, NS4B, and NS5) and three structural proteins (capsid, premembrane, and envelope) (Holmes and Twiddy, 2003). The 3′ UTR lacks a poly(A) tail, and the genome carries non-coding regions at both the 5′ and 3′ ends (Markoff, 2003). The four DENV serotypes share 65-70% sequence homology and are further clustered into different genotypes on account of high mutation rates (Holmes and Twiddy, 2003; Anoop et al., 2012). Each of the four serotypes (DENV 1-4) can cause a spectrum of illness in humans, from mild dengue fever (DF) to severe, life-threatening dengue shock syndrome (DSS) and dengue hemorrhagic fever (DHF) (Rodenhuis-Zybert et al., 2010).
Xishuangbanna (N22°0′42.00″, E100°47′45.68″) is the southernmost prefecture of Yunnan Province and is situated in a tropical rainforest area where dengue fever is endemic. It has a population of more than 1 million and a long summer with no winter. Imported cases of DENV infection sporadically occur in bordering regions of Yunnan Province, such as Dehong and Xishuangbanna (Wang et al., 2015). The first outbreak of dengue fever in Yunnan was reported in 2008, with 56 confirmed cases (MOH, 2008). After the initial outbreak, larger epidemics have been regularly reported in Xishuangbanna. For instance, 1,538 infection cases were reported in 2013, 1,067 infection cases were reported in 2015, and 1,184 infected patients had been detected by November 2017, indicating that dengue fever remains an epidemiological threat in Yunnan (Zhang et al., 2014; Wang et al., 2015, 2016; Yang et al., 2015; Zhao et al., 2016). A previous study by Zhao et al. found that the DENV-2 epidemic of Xishuangbanna in 2015 was most similar to the Indian and Sri Lankan epidemics that occurred in 2001 and 2004, respectively (Zhao et al., 2016).
In 2015, the first case of dengue fever in Xishuangbanna was reported on July 13th, and the epidemic continued until the 15th of November, with more than 1,000 confirmed cases. So far, detailed genomic characterization and identification of molecular recombination events during this DENV-2 outbreak have not been completed. In this article, we report for the first time the complete genomic sequences and comprehensive genetic analyses of four DENV-2 isolates from the 2015 outbreak in Yunnan, China. These findings supplement our understanding of flavivirus genetics and of the endemic transmission of DENV originating from the border areas of China, Laos, Burma, and Vietnam.
Ethics Statement
Ethical approval was obtained from the Institutional Ethics Committee (Institute of Medical Biology, Chinese Academy of Medical Sciences, and Peking Union Medical College). The study protocol was in accordance with the Declaration of Helsinki for Human Research of 1974 (last modified in 2000). Written informed consent was received from each patient before sample collection.
Samples
During the dengue outbreak in Xishuangbanna, Yunnan Province, in 2015, serum samples were collected from DENV-NS1-positive human patients at Xishuangbanna Dai Autonomous Prefecture People's Hospital (XDAPPH). A total of 852 DENV-NS1-positive serum samples were obtained. Sera of four patients were randomly selected for complete genomic analysis of DENV. The four patients ranged in age from 23 to 58 years, had no record of traveling abroad, and developed symptoms of fever with fatigue and body rash.
ELISA Test
DENV IgG/IgM was detected in each sample using Dengue Virus IgG/IgM ELISA kit (Neobioscience Technology Co., Ltd., China). The assay was performed according to the operation manual.
Virus RNA Extraction, RT-PCR, and Genomic Sequencing
Serum samples were separated from the collected blood. Viral RNA was extracted from 150 µL of serum using the RNA mini kit (Qiagen, Hilden, Germany) and eluted in 50 µL of nuclease-free water. The extracted RNA was used for RT-qPCR amplification, and genomic sequencing was carried out as previously described (Drosten et al., 2002). The One-step PrimeScript RT-qPCR kit (TaKaRa Co., Ltd., Dalian, China) was used to amplify two overlapping fragments of the virus genome by RT-qPCR with the following protocol: initial reverse transcription at 42 °C for 45 min; 35 cycles of denaturation at 94 °C for 30 s, annealing at 55 °C for 30 s, and elongation at 72 °C for 1 min; and a final elongation step at 72 °C for 5 min. The PCR products were then purified and sequenced by Sangon Biotech after verification by agarose gel electrophoresis (AGE).
The complete viral genomic sequences were sequenced in 22 fragments. The primer pairs were selected using Primer-BLAST in NCBI to amplify the DENV-2 genome based on the reference sequence M29095 (Irie et al., 1989). All primers were synthesized and purified by Sangon Biotech (Shanghai, China). A total of 22 overlapping amplicons spanning the complete genomic region were amplified using 44 primers (Table 1). The amplification of the various genomic fragments was carried out following standard methods (Wang et al., 2015). The specific PCR products were purified using a gel extraction kit (Qiagen, Germany) followed by double-pass sequencing (Sangon Biotech, Shanghai, China). The terminal 22 nucleotides at the 5′ end and 23 nucleotides at the 3′ end were obtained from the NCBI database.
Genomic Characterization and Phylogenetic Analysis
The 22 sequences were assembled using DNASTAR version 7.0. The assembled nucleotide sequences and translated amino acid sequences were analyzed by BioEdit. Phylogenetic analysis, based on the complete genomes, was conducted using the Molecular Evolutionary Genetics Analysis (MEGA) software version 7.0 (maximum likelihood phylogeny test) and gamma-distributed rates among sites with 1,000 bootstrap replicates.
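The tree-building step itself was done in MEGA 7.0 (maximum likelihood, 1,000 bootstrap replicates). As a lightweight, scriptable stand-in, the sketch below builds a simple distance-based neighbour-joining tree from a pre-computed alignment with Biopython; it is not the ML procedure used in the study, and the input file name is hypothetical.

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Read an existing multiple sequence alignment of complete DENV-2 genomes.
# File name is a placeholder; any aligned FASTA with equal-length rows works.
alignment = AlignIO.read("denv2_complete_genomes_aligned.fasta", "fasta")

calculator = DistanceCalculator("identity")       # simple p-distance matrix
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)         # neighbour-joining tree

Phylo.draw_ascii(nj_tree)                         # quick text rendering
Phylo.write(nj_tree, "denv2_nj_tree.nwk", "newick")
```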
The reference DENV-2 complete viral genome sequences used to construct the distinct phylogenetic branches were obtained from the GenBank sequence database under the following country and accession numbers:
Recombination Analysis
Recombination and molecular evolution analysis was conducted with RDP4.56 package (Martin et al., 2010). The reference viral sequences used in the recombination analysis were obtained from the GenBank sequence database based on phylogenetic trees or geographically close viral sequences under the following accession numbers:
Secondary Structure Analysis of Complete Genome
PredictProtein (https://www.predictprotein.org/) was used to calculate the differences in the secondary structure between the structural and non-structural proteins of the DENV-2SS and 2015 Xishuangbanna epidemic viruses. The amino acid composition and potential RNA, DNA, nucleotide and protein binding sites were analyzed. The potential helical structure was also evaluated.
Laboratory Diagnosis
All four patients had typical dengue-like symptoms, including headache, fever, joint pain, myalgia, vascular leakage, pleural effusion, vomiting, and nausea. Laboratory investigations revealed low platelet counts (<100 × 10 9 /L) and elevated liver enzyme levels (>100 U/L), including alanine aminotransferase and aspartate aminotransferase. Further analysis showed that the patients' sera were positive for anti-dengue IgM antibodies but negative for anti-dengue IgG antibodies, indicating an acute primary dengue infection. After a week of hospitalization, the four patients recovered and were discharged.
Genome Phylogenetic Analysis of the YNXJ10, YNXJ12, YNXJ13, and YNXJ16 Sequences
Genome phylogenetic analysis of YNXJ10, YNXJ12, YNXJ13, and YNXJ16 was performed by aligning these viruses against 43 other representative DENV-2 viruses of diverse geographical origins retrieved from GenBank. The result indicated that the YNXJ10, YNXJ12, YNXJ13, and YNXJ16 viruses clustered in the cosmopolitan genotype, close to DENV-2 KX380828
The total number of amino acids in YNXJ10, YNXJ12, YNXJ13, and YNXJ16 was 3,392. As shown in Table 1, compared to standard viruses DENV-2SS (GenBank ID: M29095), the total numbers of base substitutions in YNXJ10, YNXJ12, YNXJ13, and YNXJ16 were 712, 809, 772, and 841 nt, respectively. The highest mutation rate was located at the coding region of structural protein prM, whereas the lowest mutation rate was found within the coding region of the non-structural protein NS4B.
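A sketch of the substitution counting summarized above: given a query genome aligned to the DENV-2 reference M29095, nucleotide differences are tallied directly, and amino acid replacements are obtained by translating the coding region. The short sequences below are hypothetical stand-ins for the full alignments.

```python
from Bio.Seq import Seq

# Count nucleotide differences and amino acid replacements between two
# aligned coding sequences. Gap positions are skipped. Sequences are
# hypothetical 21-nt fragments, not real DENV-2 data.

def count_nt_substitutions(query: str, reference: str) -> int:
    return sum(1 for q, r in zip(query, reference)
               if q != r and q != "-" and r != "-")

def count_aa_substitutions(query_cds: str, reference_cds: str) -> int:
    q_aa = Seq(query_cds).translate()
    r_aa = Seq(reference_cds).translate()
    return sum(1 for q, r in zip(q_aa, r_aa) if q != r)

ref_cds   = "ATGAATAACCAACGGAAAAAG"
query_cds = "ATGAACAACCAATGGAAAAAG"

print(count_nt_substitutions(query_cds, ref_cds))  # 2 nucleotide differences
print(count_aa_substitutions(query_cds, ref_cds))  # 1 non-synonymous change
```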
Amino Acid Mutations in Structural Protein Regions
In the structural protein regions of the four viruses, the nucleotide sequence coding for C-prM/M-E was 2,322 nt in length and codes for a 774-amino-acid sequence. The lengths of the capsid, premembrane, and envelope amino acid sequences in the four viruses were 113, 166, and 495, respectively. Compared to the DENV-2 standard virus M29095, the numbers of amino acid substitutions in these regions were determined for each virus (Figure 2).
FIGURE 1 | Complete genomic phylogenetic analysis of YNXJ10, YNXJ12, YNXJ13, and YNXJ16. Study sequences are labeled with red circles. The DENV-2 standard M29095 is labeled with a red diamond. The others are representative DENV-2 sequences from diverse geographical origins retrieved from GenBank. Phylogenetic analysis, based on the complete genomes, was conducted using MEGA software version 7.0 (maximum likelihood phylogeny test) with gamma-distributed rates among sites and 1,000 bootstrap replicates.
As Figure 2 indicates, within Domain II of the E protein, a T to C substitution at position 1606 changes the amino acid S (Serine) to P (Proline) (amino acid position 536), which changes the polarity. Meanwhile, a G to A substitution at position 1612 was observed, and this mutation converted the negatively-charged amino acid E (Glutamic acid) to a positively-charged K (Lysine) (amino acid position 538).
Amino Acid Mutations in Non-structural Protein Regions
Within the non-structural protein region, the length of the NS1-NS2A-NS2B-NS3-NS4A-NS4B-NS5 sequence of the four viruses was 7,854 nt. Compared with M29095, there were 69-76 single nucleotide changes identified in the NS1 region, including 7-10 non-synonymous substitutions (Figure 2). The base substitution T2710C modified the non-polar amino acid I (Isoleucine) to polar T (Threonine). At amino acid position 1266, K (Lysine) changed to N (Asparagine), which converted a basic amino acid to an uncharged amino acid in the coil.
Compared with M29095, there were 76-86 base mutations found in the NS2A-NS2B region, of which 10-16 were non-synonymous substitutions; the total number of base substitution mutations in the NS3 region was 138-179, and the number of non-synonymous substitutions was 16-44 (Figure 2). There were 7 and 39 amino acid substitutions in NS4A and NS4B, respectively. For NS5, the total number of base substitutions was 168-208, and there were 24-29 non-synonymous substitutions in this region, with a substitution rate of 2.66-4.32% (Figure 2). In addition, compared with KX380828.1, there were 44, 105, and 64 amino acid substitutions in the translated regions of YNXJ10, YNXJ12, and YNXJ13, respectively (Figure 3).
Recombination Events of DENV-2 Genome
A predictive complete genomic mutation map was constructed in comparison with the closely related viruses Singapore 2013 (KX380828.1), China 2013 (KF479233.1), and India 2009 (JX475906.1). Some recombination events may have occurred between the four viruses from Xishuangbanna and the closely related viruses KX380828.1-2013-Singapore and KF479233.1-2013-China. There were many suspected recombination areas across the complete genome. The suspected recombination mutations of the YNXJ10 virus might be related to YNXJ16, while those of the YNXJ13 virus might be related to KX380828. Meanwhile, the prediction results showed that the most likely recombination events were located in the structural genes prM and E, and no recombination events were observed in the non-structural regions 2K and NS4B (Figure 4).
DISCUSSION
The occurrence of dengue fever has increased remarkably in China in recent decades due to urbanization, globalization, climate change, migration and other factors (Murray et al., 2013; Chen and Liu, 2015; Guzman and Harris, 2015). The epidemic area, Xishuangbanna, Yunnan, is located in southwestern China, where dengue fever has been prevalent since 2008 (MOH, 2008). Since then, epidemics have been regularly reported in Xishuangbanna, Yunnan. A serious outbreak of DENV-3 occurred in 2013, with 1,538 infected individuals (Zhang et al., 2014; Wang et al., 2015, 2016; Yang et al., 2015). In 2015, Xishuangbanna experienced a large DENV-2 outbreak, which was the largest dengue epidemic in the past few years. Although the cause of this outbreak is not clear, it is coincident with the increasing global trend (Qin and Shi, 2014; Zhao et al., 2016). In recent years, the incidence of dengue fever in China's neighboring countries, such as Indonesia, Myanmar, Singapore and Malaysia, has been higher than in previous years (Dash et al., 2013; Ng et al., 2013). Xishuangbanna has close contact with Laos, Thailand and Myanmar. Furthermore, Xishuangbanna is a tourist destination and attracts more than 14 million tourists from around the world annually, resulting in an increased risk of DENV epidemics (Lowe et al., 2014).
FIGURE 5 | Secondary structure prediction of the structural and non-structural proteins for DEN2SS M29095 and YNXJ10, YNXJ12, YNXJ13, and YNXJ16. The purple dots denote the RNA-binding region, the black dots denote the nucleotide-binding region, the red rhombuses denote the protein-binding region, and the yellow dots denote the DNA-binding region. Red and blue in the first line represent the strand and helix regions, respectively. Yellow and blue in the second line represent the buried and exposed regions, respectively. Purple in the third line indicates the helical transmembrane regions, and green in the fourth line represents the disordered regions. The first map is M29095, and the second, third, fourth, and fifth maps are YNXJ10, YNXJ12, YNXJ13, and YNXJ16.
The molecular characterization of DENV-2 at the genomic level is very important to understand the spread of dengue fever in Xishuangbanna. In this article, we were interested in extending previous studies and elucidating the genetic relationship between circulating DENV-2 viruses in southwest China and other parts of the world. Phylogenetic analysis and sequence alignment of the full-length genomes of YNXJ10, YNXJ12, YNXJ13, and YNXJ16 showed a close relationship with KX380828.1-2013-Singapore and KF479233.1-2013-China, which cluster in the cosmopolitan genotype.
Comparing the four Xishuangbanna DENV-2 sequences to KX380828.1-2013-Singapore and KF479233.1-2013-China, the greatest number of mutations occurred in the structural protein gene prM, while no mutation was observed in the non-structural gene NS4B. More mutations occurred in structural genes than in non-structural genes, indicating that the structural genes are more variable while the non-structural genes are more stable under selective pressure. During host-pathogen interaction, the structural proteins are located in the envelope region, which interacts with the host cell surface and is therefore under greater selective pressure; the non-structural proteins, by contrast, are located in the interior of the virion, which requires minimal adaptation to the host. This pattern coincided with the mutation profiles of YNXJ10, YNXJ12, YNXJ13, and YNXJ16.
The emergence of recombinant viruses could have a great impact on epidemiological and clinical outcomes. Interestingly, recombination events were observed between YNXJ13 and KX380828.1-2013-Singapore. According to our prediction of possible recombination events, most were predicted to occur in the structural genes prM/E and the non-structural genes NS2B/NS3. NS2B/NS3 helps the virus escape the host immune system by cleaving the antiviral protein STING. As a protease, NS2B/NS3 also plays an essential role during flaviviral polyprotein processing. Thus, amino acid substitutions in both the prM/E and NS2B/NS3 proteins may greatly affect the efficiency of viral replication.
In summary, we report the first complete genome sequences of DENV-2 from Xishuangbanna, Yunnan, China. There were extensive outbreaks of dengue virus of different serotypes and genotypes in surrounding areas, such as Singapore, Taiwan, Guangdong, Vietnam, Burma, and Laos, in 2015. Hence, the origin of the Xishuangbanna epidemic is difficult to pinpoint with certainty. This study could help identify the roles of geography and human migratory patterns that ultimately act in concert with intrinsic viral adaptive capabilities to produce large-scale outbreaks, and could offer further insight into DENV-2 pathogenicity, infectivity, and vaccine development. | 4,722.2 | 2018-06-27T00:00:00.000 | [
"Biology"
] |
Kinetic Study on Induced Electron Transfer Reaction in Pentaamminecobalt(III) Complexes of α-Hydroxy Acids by Pyridinium Fluorochromate in Micellar Medium
Pyridinium fluorochromate (PFC) oxidation of pentaamminecobalt(III) complexes of α-hydroxy acids in micellar medium yields nearly 100% of carbonyl compounds as the ultimate products. The decrease in UV-visible absorbance at λ = 502 nm for the Co(III) complex corresponds to nearly 100% of the initial absorbance. The stoichiometry of the unbound ligand and the cobalt(III) complex accounts for about 100% reduction at the cobalt(III) centre. The kinetic and stoichiometric results have been accounted for by a suitable mechanism.
INTRODUCTION
Nicotinium dichromate (NDC) is an efficient reagent for the oxidation of primary and secondary alcohols to carbonyl compounds. The oxidation of a large class of organic compounds by NDC has been reported [1-5]. Induced electron transfer in pentaamminecobalt(III) complexes of α-hydroxy acids with various oxidants has been studied [6-11]. In the NDC oxidation of pentaamminecobalt(III) complexes of α-hydroxy acids in micellar medium, the oxidisable hydroxyl group is separated from the carboxyl group bound to the Co(III) centre by a saturated fragment, namely a C-C bond [12]. The cation radical formed by the oxidation of the hydroxyl group by NDC undergoes a nearly synchronous electron transfer, resulting in C-C and O-H bond scission and reduction at the cobalt(III) centre.
Preparation of Nicotinium Dichromate (NDC)
Nicotinic acid (7.38 g, 60 mmol) was added to chromium trioxide, CrO3 (12 g, 120 mmol), dissolved in water (12 mL) at 0 °C in an ice-water bath with vigorous stirring. After 15 min, acetone (100 mL, at 0-5 °C) was added to the resulting red-orange suspension and the mixture was stirred at 0-5 °C for 15 min. The product was filtered off and washed with acetone (4 × 50 mL) and dichloromethane (25 mL), affording nicotinium dichromate (11 g) as a yellow solid, m.p. 217 °C.
α-Hydroxy acids were employed as ligands. The monomeric cobalt(III) complexes of α-hydroxy acids were prepared as their perchlorates by the method of Fan and Gould [13].
Kinetic measurements
The tris(hydroxo)-bridged complex, (NH3)3Co(OH)3Co(NH3)3(ClO4)3 (triol), was prepared by the procedure of Siebert and co-workers [14,15]. Kinetic runs with the Co(III) complexes and the unbound ligands in the presence of micelles were carried out at 32 ± 0.2 °C in an electrically operated thermostatted bath. The concentration of unreacted NDC was determined iodometrically. The disappearance of Co(III) was followed spectrophotometrically by the decrease in absorbance at 470 nm (for the monomeric Co(III) complexes). Ionic strength was maintained by the addition of suitable quantities of NaClO4. The specific rates estimated from the optical density measurements agree with the values from the volumetric procedure within ±7%. Curiously, the change in absorbance observed at 470 nm for the Co(III) complexes of α-hydroxy acids corresponds to nearly 100% of the initial concentration of Co(III), while the change in optical density at 374 nm for NDC corresponds to nearly 100% of [Co(III)]initial. Co(II) was estimated after completion of the reaction by diluting the reaction mixture 10-fold with concentrated HCl, allowing the evolution of chlorine gas to cease, and then measuring the absorbance of the yellow nicotinium complex of Co(II) at 660 nm (ε = 560 dm3 mol−1 cm−1) [16,17]. The amount of Co(II) estimated in all these cases corresponds to nearly 100% of [Co(II)]initial. After 48 h, the product was extracted with diethyl ether and analyzed; the amount of benzaldehyde formed was determined by measuring the absorbance at 229 nm (ε = 11,400 dm3 mol−1 cm−1) [18,19]. The yield of benzaldehyde in all these cases was nearly 100% of [Co(III)]initial (Tables 1 and 2). After neutralization of the reaction mixture with sodium bicarbonate, the pH of the aqueous layer was adjusted to about 6.0 and the aqueous layer was separated by filtration, in the case of both the free ligands and the corresponding complexes. On evaporation of the water under reduced pressure, the product separated and the percentage yield was calculated. Though the yield of cobalt(II) was 100%, the estimations of cobalt(II), Cr(V) and the carbonyl compounds were quantitative. In both cases the IR spectra of the products agreed with the IR spectra of authentic samples.
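The Co(II) and benzaldehyde estimates quoted above follow from the Beer-Lambert law, c = A/(εl), using the molar absorptivities given in the text (ε = 560 dm3 mol−1 cm−1 at 660 nm and 11,400 dm3 mol−1 cm−1 at 229 nm). The sketch below shows the arithmetic; the absorbance readings, path length, and dilution factor are hypothetical.

```python
# Beer-Lambert estimates: concentration c = A / (eps * l).
# Molar absorptivities come from the text; absorbances, the 1 cm path length
# and the dilution factor are hypothetical.

def concentration(absorbance: float, eps: float, path_cm: float = 1.0) -> float:
    """Concentration in mol dm^-3 from the Beer-Lambert law."""
    return absorbance / (eps * path_cm)

EPS_CO2_660NM   = 560.0      # dm^3 mol^-1 cm^-1, Co(II) complex at 660 nm
EPS_PHCHO_229NM = 11400.0    # dm^3 mol^-1 cm^-1, benzaldehyde at 229 nm

a_660, dilution = 0.56, 10   # reading after 10-fold dilution with conc. HCl
co2_molar = concentration(a_660, EPS_CO2_660NM) * dilution
print(f"[Co(II)] = {co2_molar:.3e} mol dm^-3")     # ~1.0e-2 M in this example

a_229 = 0.34
print(f"[PhCHO] = {concentration(a_229, EPS_PHCHO_229NM):.3e} mol dm^-3")
```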
RESULTS AND DISCUSSION
Table 3 summarizes the kinetic data for the NDC oxidation of free α-hydroxy acids with 1 N HClO4 in the presence of anionic and cationic micelles at 32 ± 0.2 °C. The reaction exhibits an overall second-order dependence, on [cobalt(III)] as well as on [α-hydroxy acid]. Based on the oxidation of α-hydroxy acids by NDC, the following rate law has been deduced.
Rate = k [α-hydroxy acid][NDC]
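One way to extract a rate constant consistent with this rate law is to work under pseudo-first-order conditions, with the hydroxy acid in large excess over NDC, so that ln[NDC] decays linearly with time; the second-order constant is then k = k_obs/[α-hydroxy acid]. The sketch below illustrates the fit; whether the original runs were carried out this way is not stated in the excerpt, and the concentration-time data are hypothetical.

```python
import numpy as np

# Pseudo-first-order fit: with the hydroxy acid in excess, ln[NDC] vs. time is
# linear with slope -k_obs, and k2 = k_obs / [hydroxy acid]. The time points
# and titre-derived concentrations below are hypothetical.

t_min     = np.array([0, 10, 20, 30, 45, 60])               # time / min
ndc_molar = np.array([5.0e-3, 3.9e-3, 3.1e-3, 2.4e-3, 1.7e-3, 1.2e-3])
substrate = 5.0e-2                                           # mol dm^-3, in excess

slope, intercept = np.polyfit(t_min, np.log(ndc_molar), 1)
k_obs = -slope                                               # min^-1
k2 = k_obs / substrate                                       # dm^3 mol^-1 min^-1

print(f"k_obs = {k_obs:.3e} min^-1, k2 = {k2:.3e} dm^3 mol^-1 min^-1")
```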
Table 4 lists the formation constants for the NDC-α-hydroxy acid complexes along with the specific rates. Such complex formation seems to be absent when the carboxyl group is tied up by Co(III), and the reaction between NDC and the Co(III) complexes of α-hydroxy acids exhibits uncomplicated second-order kinetics.
From a comparison of the specific rates for the NDC oxidation of the respective Co(III) complexes and of the dimeric cobalt(III) glyoxalato complex, one can infer that the oxidation rates of the α-hydroxy acids are not significantly affected by complex formation. This may be because the point of attack lies away from the Co(III) centre, so that its electrostatic influence is less felt. There is, however, a considerable change in the specific rate of NDC oxidation of the Co(III) keto acid complex, as the two Co(III) centres can exert a greater electrostatic influence over the reacting centre. This suggests that NDC attacks the O-H centre in the slow step of the reaction, in which ligand oxidation takes place. The rate of the reaction is increased by the addition of both ammonium lauryl sulfate (ALS) and dimethyl dioctadecyl ammonium chloride (DDAC). A plot of specific rate constant versus micellar concentration is sigmoidal in shape; the catalytic effect is greater for dimethyl dioctadecyl ammonium chloride (DDAC) than for ammonium lauryl sulfate (ALS).
The specific rate of the lactato complex is higher than both that of the unbound ligand and that of the mandelato complex; the ligation of lactic acid to the cobalt(III) centre has probably increased its reactivity towards NDC, and this effect seems to be specific to this ligand. In the NMR spectrum of the lactato complex, the alpha methine proton has undergone a considerable downfield shift compared with the alpha C-H proton of the unbound ligand [δ(C-H) = 1.62 ppm in lactic acid and 2.19 ppm in the lactato complex, whereas δ(C-H) = 4.62 ppm in mandelic acid and 3.73 ppm in the respective complex], suggesting that the increase in the acidic nature of the methine proton of lactic acid is due to ligation to the metal centre. If the reaction proceeds through a preformed chromate ester, then the rate of alpha C-H cleavage will be enhanced, resulting in an increased rate of oxidation of the lactato complex; such a precursor complex may be sterically hindered in the case of the mandelato and glycolato complexes.
The stoichiometric results indicate that for 1 mole of cobalt(III) complex, about 0.65 mole of NDC is consumed, whereas with the unbound ligands, for 1 mole of α-hydroxy acid about 0.92 mole of NDC is consumed (Tables 3 and 4). The stoichiometric results, coupled with the kinetic data and product analysis, can be accounted for by the reaction in Scheme 1. The scheme proposes that NDC oxidizes the OH centre of the cobalt(III)-bound α-hydroxy acids at a rate comparable to that of the unbound ligand, with 100% reduction at the cobalt(III) centre; NDC forms a chromate ester with the cobalt(III) glyoxalato complex, which decomposes in a slow step proceeding through C-C bond fission and leading to the formation of cobalt(II), carbonyl compounds and carbon dioxide. Thus 1 mole of cobalt(III) glyoxalato complex consumes 0.65 mole of NDC, yielding nearly 100% of Co(II) and 100% of carbonyl compounds. Similarly, 1 mole of unbound α-hydroxy acid consumes nearly 0.92 mole of NDC, yielding 100% of carbonyl products and CO2. | 1,745.8 | 2011-01-01T00:00:00.000 | [
"Chemistry"
] |
Far-field polarization-based sensitivity to sub-resolution displacements of a sub-resolution scatterer in tightly focused fields
We present a system built to perform measurements of scattering-angle-resolved polarization state distributions across the exit pupil of a high numerical aperture collector lens. These distributions contain information about the three-dimensional electromagnetic field that results from the interaction of a tightly focused field and a sub-resolution scatterer. Experimental evidence proving that the system allows for high polarization-dependent sensitivity to sub-resolution displacements of a sub-resolution scatterer is provided together with the corresponding numerical results.
©2010 Optical Society of America
OCIS codes: (290.5820) Scattering measurements; (260.5430) Polarization; (120.5410) Polarimetry.
References and notes
1. E. Wolf, "Electromagnetic diffraction in optical systems. I. An integral representation of the image field," Proc. R. Soc. Lond. A Math. Phys. Sci. 253(1274), 349–357 (1959).
2. B. Richards and E. Wolf, "Electromagnetic diffraction in optical systems. II. Structure of the image field in an aplanatic system," Proc. R. Soc. Lond. A Math. Phys. Sci. 253(1274), 358–379 (1959).
3. E. Wolf and Y. Li, "Conditions for the validity of the Debye integral representation of focused fields," Opt. Commun. 39(4), 205–210 (1981).
4. M. Mansuripur, "Distribution of light at and near the focus of high-numerical-aperture objectives," J. Opt. Soc. Am. A 3(12), 2086–2093 (1986).
5. R. Kant, "An analytical solution of vector diffraction for focusing optical systems with Seidel aberrations I. Spherical aberration, curvature of field, and distortion," J. Mod. Opt. 40(11), 2293–2310 (1993).
6. D. P. Biss and T. G. Brown, "Primary aberrations in focused radially polarized vortex beams," Opt. Express 12(3), 384–393 (2004).
7. P. Török, P. Varga, and G. , "Analytical solution of the diffraction integrals and interpretation of wavefront distortion when light is focused through a planar interface between materials of mismatched refractive indices," J. Opt. Soc. Am. A 12(12), 2660–2671 (1995).
8. H. Ling and S. W. Lee, "Focusing of electromagnetic waves through a dielectric interface," J. Opt. Soc. Am. A 1(9), 965–973 (1984).
9. P. Török, P. Varga, Z. Laczik, and G. R. Booker, "Electromagnetic diffraction of light focused through a planar interface between materials of mismatched refractive indices: an integral representation," J. Opt. Soc. Am. A 12(2), 325–332 (1995).
10. V. Delaubert, N. Treps, G. Bo, and C. Fabre, "Optical storage of high-density information beyond the diffraction limit: A quantum study," Phys. Rev. A 73(1), 013820 (2006).
11. J. M. Brok and H. P. Urbach, "Simulation of polarization effects in diffraction problems of optical recording," J. Mod. Opt. 49(11), 1811–1829 (2002).
12. P. Török and M. Gu, "High-numerical-aperture optical microscopy and modern applications: introduction to the feature issue," Appl. Opt. 39(34), 6277–6278 (2000).
13. P. Török and P. R. T. Munro, "The use of Gauss-Laguerre vector beams in STED microscopy," Opt. Express 12(15), 3605–3617 (2004).
14. Q. Zhan and J. R. Leger, "Measurement of surface features beyond the diffraction limit with an imaging ellipsometer," Opt. Lett. 27(10), 821–823 (2002).
15. A. Rohrbach and E. H. K. Stelzer, "Optical trapping of dielectric particles in arbitrary fields," J. Opt. Soc. Am. A 18(4), 839–853 (2001).
16. K.-H. Shuster, "Radial polarization rotating optical arrangement and microlithographic projection exposure system incorporating said arrangement", US Patent 6191880B1 (2001).
17. D. McGloin, "Optical tweezers: 20 years on," Philos. Trans. R. Soc. Lond. A 364(1849), 3521–3537 (2006).
18. R. L. Eriksen, V. R. Daria, and J. Gluckstad, "Fully dynamic multiple-beam optical tweezers," Opt. Express 10(14), 597–602 (2002).
19. F. Kulzer and M. Orrit, "Single-molecule optics," Annu. Rev. Phys. Chem. 55(1), 585–611 (2004).
20. W. Chen and Q. Zhan, "Three-dimensional focus shaping with cylindrical vector beams," Opt. Commun. 265(2), 411–417 (2006).
21. J. T. Fourkas, "Rapid determination of the three-dimensional orientation of single molecules," Opt. Lett. 26(4), 211–213 (2001).
22. A. De Martino, S. Ben Hatit, and M. Foldyna, "Mueller polarimetry in the back focal plane," Proc. SPIE 6518, 65180X (2007).
23. S. Ben Hatit, M. Foldyna, A. De Martino, and B. Drévillon, "Angle-resolved Mueller polarimeter using a microscope objective," Phys. Status Solidi A Appl. Mater. Sci. 205(4), 743–747 (2008).
24. In the figure, the dashed lines box represents a part of the experimental setup that is orthogonal to the plane of the optical bench.
25. F. Delplancke, "Automated high-speed Mueller matrix scatterometer," Appl. Opt. 36(22), 5388–5395 (1997).
26. D. Lara and C. Dainty, "Axially resolved complete Mueller matrix confocal microscopy," Appl. Opt. 45(9), 1917–1930 (2006).
27. W. S. Bickel and W. M. Bailey, "Stokes vectors, Mueller matrices and polarized scattered light," Am. J. Phys. 53(5), 468–478 (1985).
28. E. Compain, S. Poirier, and B. Drévillon, "General and self-consistent method for the calibration of polarization modulators, polarimeters, and Mueller-matrix ellipsometers," Appl. Opt. 38(16), 3490–3502 (1999).
29. A. De Martino, Y.-K. Kim, E. Garcia-Caurel, B. Laude, and B. Drévillon, "Optimized Mueller polarimeter with liquid crystals," Opt. Lett. 28(8), 616–618 (2003).
30. To speed-up the calibration of the system, and reduce the noise in the measurements, we applied a 4 × 4 binning to the original 1024 × 768 pixel images. This reduced the time spent in the calibration from ~6 hours to ~30 minutes and increased the pixel alignment accuracy. The same binning was applied to all our experimental data.
31. The Mueller matrices presented in this section were obtained from the average of 25 measurements, for each combination polarizer-analyzer, to minimize the effect of statistical errors.
32. J. D. Jackson, Classical Electrodynamics (John Wiley & Sons, 1999).
33. K. Lindfors, A. Priimagi, T. Setälä, A. Shevchenko, A. T. Friberg, and M. Kaivola, "Local polarization of tightly focused unpolarized light," Nat. Photonics 1(4), 228–231 (2007).
34. This approximation is commonly used in the analysis of the image formation of a point-scatterer.
35. C. J. R. Sheppard and T. Wilson, "The Image of a Single Point in Microscopes of Large Numerical Aperture," Proc. R. Soc. Lond. A Math. Phys. Sci. 379(1776), 145–158 (1982).
36. The results for the gold nano-sphere were obtained as the average of 5 measurements for each combination polarizer-analyzer to minimize the effect of statistical errors.
Introduction
When a beam of light is brought to a focus by an optical system with sufficiently high numerical aperture (NA), the electromagnetic (EM) focused field may possess a significant component parallel to the direction of propagation of the original beam (e.g. along the z-axis), in addition to the two components orthogonal to this direction (e.g. x and y). In other words, at every position inside the focal region the EM field can have components oscillating in three mutually perpendicular directions. This property of focused fields has been well known since the publication of the vectorial theory of diffraction [1,2], in which Richards and Wolf analyzed the field focused by an aplanatic system. Subsequent work by other authors proved that this holds as long as the NA is sufficiently large [3,4], even if the system is not aplanatic, for example in the presence of optical aberrations [5-7], or when the light is focused through a dielectric interface [7-9]. The vectorial structure of the EM field in the focal region of a lens, as predicted by the vectorial theory of diffraction [1,2], also depends on the polarization state of the incident light. This property becomes important as the NA is increased, and it has been studied for several years now. Its most common intended applications lie in the areas of optical storage [10,11], microscopy and scanning optical microscopy [12-14], photonic force microscopy [15], lithography [16], laser micro-fabrication, particle guiding or trapping [17,18], and single-molecule imaging [10,19-21].
When a small sample interacts with a tightly focused field, the EM field that results is also a three-dimensional vectorial field. Clearly, the sample will affect all three components of the focused field. For example, in the case of a single molecule the resulting EM field will depend on the orientation of the molecule's electric dipole moment, and in the case of a data storage medium, sub-diffraction-limit features of the recorded medium will also interact with any or all of the three vectorial components of the illumination. The characterization of three-dimensional scattered fields is important in the analysis of light-matter interaction in the focal region of high-NA imaging systems.
In this paper, we present a system built to measure the scattering-angle-resolved polarization state distribution of the three-dimensional field scattered by a sub-resolution specimen in the focal region of a high-NA lens. A similar kind of analysis has been reported in the past, using incoherent illumination to estimate the critical dimensions of diffraction gratings [22,23]. Our approach makes use of coherent laser illumination, which, at the focus, produces a coherent superposition of all contributions from different positions of the pupil of the focusing system. Hence, the distributions measured with the system presented herein are related to the three-dimensional field scattered by the sub-resolution object via a three-dimensional to two-dimensional transformation. Figure 1 is a schematic diagram of the working principle of our system. The incident beam-like field, E(0), is focused by a high-NA focusing lens, giving rise to a non-negligible longitudinal component in its focal region, Ez(1), in addition to the two transversal components, Ex(1) and Ey(1). For clarity, this latter component is not shown in the diagram. After the interaction of the focused field with a sub-resolution specimen in the focal region, a three-dimensional scattered field, E(s), is produced. The scattered field then propagates to a high-NA collector lens that collimates it, creating a beam-like field, E(2), with a negligible longitudinal component. The longitudinal component of the scattered field, Ez(s), is combined with the transversal components by the collector lens depending on the scattering angle. Therefore, by analyzing the scattering-angle-resolved polarization state distribution across the exit pupil of the collector lens, it is possible to reconstruct, up to an arbitrary phase, the scattered field distribution just before the collector lens, which carries polarization information originating from the three-dimensional nature of the focused field. In the following sections, we present numerical and experimental results showing that, when the complete three-dimensional nature of the focused field is considered, high sensitivity to sub-resolution displacements of a sub-resolution specimen is achieved by measuring the scattering-angle-resolved polarization state distribution across the exit pupil of the high-NA collector lens. The paper is organized as follows: in section 2 we present the experimental setup and the generation of the incident polarization states. In section 3 we discuss the calibration of the system. In section 4 we describe briefly the numerical analysis done to assess the performance of our system. In section 5 we present numerical and experimental results for sub-resolution displacements of a point-scatterer. Finally, in section 6, we present our conclusions and remarks on the future of this research.
Experimental setup
The system built in this work is a spatially resolved Mueller matrix polarimeter with two Pockels cells as polarization state generator (PSG) and four CCD cameras as a simultaneous division-of-amplitude polarization state analyzer (PSA). We refer to it as the vectorial polarimeter. Figure 2 is a schematic diagram of the system. The total irradiance of the light coming from laser LS (Melles Griot 85-GCA-005, λ = 532 nm) is controlled using two neutral density filters, NDF and NDFW, before a Glan-Taylor vertical polarizer (Melles Griot 03PTA401), P1. Mirror M1 was introduced to bend the light path towards the longest side of the optical bench. The light transmitted by P1 is then sent to a pair of Pockels cells (Linos LM0202), PC1 and PC2, which can produce any homogeneous incident polarization state over the Poincaré sphere (section 2.1). Then, the light beam is sent to a spatial filter, formed by objective OBJ1 (Linos 038722) and pinhole PH, where it is also expanded and later collimated by lens L1. Aperture stop IRIS3, with a diameter of ~7 mm, blocks the rim of the beam to produce an incident beam with a Gaussian profile flatter than the original laser beam. The light beam is then reflected by beam-splitter BS1 towards mirror M2, at 45° from the horizontal, where the light path is bent downwards towards the high-NA objective, OBJ2, which focuses the light onto the specimen SMP. After the interaction of the three-dimensional focused field with the specimen, the three-dimensional scattered field propagates to the collector lens, where it is transformed into a beam-like scattering-angle-resolved distribution. Since the system was built to work in the reflection configuration, the high-NA objective, OBJ2, acts as the collector lens as well. Finally, the exit pupil of the collector lens is imaged onto the position of the four simultaneous linearly independent polarization detectors, CCD1-CCD4 (Point Grey Research, Inc. Flea2-08S2M-C). The four images of the ~7 mm pupil are produced by the relay optical system formed by lenses L2-L5 and mirrors M3-M4. The magnification of this last optical relay is 0.57. Mirror M5 and apertures IRIS1-IRIS2 and IRIS4 are used as aids in the alignment of the system. IRIS4 is also used to block back-reflections and prevent them from reaching the CCD cameras. The shaded part of Fig. 2 represents an auxiliary system used to position the sub-resolution specimen in the focal region of the high-NA objective [24]. This part of the system is explained in section 5.
The division-of-amplitude PSA, formed by non-polarizing beam-splitters BS2-BS3, polarizing beam-splitter PBS, quarter-wave plate QWP, and linear polarizers P2-P3, measures the vertically and horizontally linearly polarized components of the scattered light (CCD1 and CCD2, respectively), the component linearly polarized at +45° (CCD3), and the left-circular component (CCD4). This configuration was chosen to obtain simultaneous measurements of the spatially resolved Stokes vectors with the four CCD cameras, and it has been used in the past, albeit with point detectors, with excellent results [25,26].
Modulation of the Pockels cells
To generate the incident polarization states of the beam to be focused, we modulated the voltage sent to the Pockels cells via a digital-to-analog board (IOtech DaqBoard/2000). Since the Pockels cells do not exhibit diattenuation, and they are non-depolarizing, they can be considered as linear retarders with retardance proportional to the voltage applied. PC1 and PC2 were oriented such that their fast axes were at 45° and 0°, respectively, from the horizontal. Six different incident polarization states were used in the measurements presented here. The pairs of retardances introduced generated the following polarization states: linear horizontal (H), linear vertical (V), linear at +45° (+), linear at −45° (−), right-circular (R), and left-circular (L). Table 1 shows the actual values, in wavelengths, of the six retardance pairs introduced with the Pockels cells and the corresponding incident polarization states used in the measurements with our system; δ1 and δ2 are the retardances introduced in PC1 and PC2, respectively.
Fig. 2. Diagram of the vectorial polarimeter. The light path in green is the path followed by the light that forms the image of the exit pupil in cameras CCD1-CCD4. The shaded part of the system, which corresponds to the red light path, is the path followed by the light used to produce the auxiliary image discussed in section 5. CCD2 is at 45° from the direction of the beam incident on the polarizing beam-splitter, PBS, because the beam-splitter is designed to give this angular separation between the vertical and horizontal polarization components.
Table 1. Pairs of retardance introduced in the Pockels cells in wavelength units, and the corresponding polarization states.
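The action of the PSG can be checked with simple Mueller calculus. The sketch below is a minimal Python illustration, assuming ideal Pockels cells and one common sign convention for the retarder matrix; it builds the retarder matrices for PC1 at 45° and PC2 at 0°, applies them to the vertically polarized input, and prints the Stokes vector produced by a few illustrative retardance pairs. The pairs shown are not the actual Table 1 values.

import numpy as np

def rot(theta):
    # Mueller rotation matrix for an element rotated by angle theta (radians)
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0, 0, 1]], dtype=float)

def retarder(theta, delta):
    # Linear retarder, fast axis at theta, retardance delta (one common sign convention)
    m0 = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0,  np.cos(delta), np.sin(delta)],
                   [0, 0, -np.sin(delta), np.cos(delta)]], dtype=float)
    return rot(theta) @ m0 @ rot(-theta)

s_in = np.array([1.0, -1.0, 0.0, 0.0])          # vertically polarized light after P1

def psg_output(delta1, delta2):
    # PC1 (fast axis 45 deg) followed by PC2 (fast axis 0 deg)
    return retarder(0.0, delta2) @ retarder(np.pi / 4, delta1) @ s_in

# Illustrative retardance pairs in radians (half wave = pi, quarter wave = pi/2)
for d1, d2 in [(0.0, 0.0), (np.pi, 0.0), (np.pi / 2, 0.0), (np.pi / 2, np.pi)]:
    print(np.round(psg_output(d1, d2), 3))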
Detection of the spatially resolved Stokes parameters
The experimental data obtained with the vectorial polarimeter can be used to calculate the back focal plane Mueller matrix of the specimen using the simple arithmetic relations between the measured irradiances and the matrix elements involved for each polarizer-analyzer combination [27]. For each of the six polarization states generated by the PSG, the full Stokes vector was measured at every pixel in the image of the pupil/back focal plane of the collector lens. The actual expressions used to calculate the Mueller matrix elements are shown in Fig. 3. As the figure indicates, the elements of the Mueller matrix are obtained as combinations of different irradiance measurements, on different cameras and for different incident polarization states. These expressions are applied at every pixel on the pupil of the collector lens after subtracting the corresponding background dark image. Thus, the pixel-by-pixel alignment of the CCD cameras is paramount for the correct calculation of the Mueller matrix distribution. The CCD cameras were aligned before including the objective in the system, using the image of a circular aperture at the position where the objective's pupil is located when the system is completely mounted. The aperture was illuminated by an extended incoherent light source pointing from the position of the specimen towards the relay optical system that images the exit pupil of the collector lens. The alignment was made using the cross-correlation between the image of the circular aperture on one camera (set as the reference) and the corresponding images on the rest of the cameras. The position of the peak of the cross-correlation showed how accurately the images on the cameras overlapped. The correct azimuthal orientation of the cameras was also verified using a cross-correlation method. A circular aperture with a thin vertical wire along its diameter was used to identify rotation errors. The precision of the alignment was calculated to be ±1/2 of a CCD pixel (4.65 microns); however, the experimental images were pixel-binned with a 4 × 4 kernel, so the effective precision of the correspondence of the data pixels was ±1/8 of a data pixel.
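As a minimal sketch of the per-pixel processing described above, the following Python fragment assembles spatially resolved Stokes images from the four dark-subtracted, binned camera frames. It assumes ideal, radiometrically balanced analyzer channels and the convention S3 > 0 for right-circular light; in the real instrument these idealized relations are superseded by the calibrated analyzer matrices of section 3.

import numpy as np

def bin4x4(img):
    # 4 x 4 pixel binning, as applied to the experimental frames
    h, w = (img.shape[0] // 4) * 4, (img.shape[1] // 4) * 4
    return img[:h, :w].reshape(h // 4, 4, w // 4, 4).sum(axis=(1, 3))

def stokes_from_cameras(i_v, i_h, i_p45, i_lc, dark):
    # i_v, i_h, i_p45, i_lc: 2-D frames from CCD1 (vertical), CCD2 (horizontal),
    # CCD3 (+45 deg) and CCD4 (left circular); dark: dict of matching dark frames.
    v = bin4x4(i_v  - dark["v"])
    h = bin4x4(i_h  - dark["h"])
    p = bin4x4(i_p45 - dark["p45"])
    l = bin4x4(i_lc - dark["lc"])

    s0 = h + v
    s1 = h - v
    s2 = 2.0 * p - s0      # I(+45 deg) = (S0 + S2) / 2
    s3 = s0 - 2.0 * l      # I(left circular) = (S0 - S3) / 2
    return np.stack([s0, s1, s2, s3])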
System calibration
Robust calibration of the experimental setup is critical for our measurements. Because of the influence of the Fresnel coefficients on the polarization, and of spurious inhomogeneities in the polarization components, every single optical element in an experimental setup may introduce polarization artifacts. This limits the accuracy with which the Mueller matrix can be measured, and needs to be addressed carefully.
Fig. 3. Measurements and operations necessary to obtain the 16 elements of the Mueller matrix for every data pixel in our system. From left to right, the first symbol, and subscript of I, represents the polarization state of the incident light, whereas the second symbol, and subscript, represents the analyzer used in the corresponding measurement. The convention followed for the subscripts is the same as in section 2.1. The unpolarized component, for the incident light, and the total irradiance, for the analyzer, were obtained as the incoherent superposition of the H and V components accordingly.
The method used to calibrate our system is an extension of the eigenvalue calibration method (ECM) to be used in double-pass (ECM-DP) [26,28]. The calibration samples used are the four optimum calibration samples given by De Martino et al. [29], namely: a linear polarizer at two different orientations (0° and 90° with respect to the horizontal, B1 and B2, respectively), a wave-plate with a retardance of λ/4 in double-pass (oriented at ~30° with respect to the horizontal, B3), and free space (B0). The calibration method was implemented pixel by pixel across the aperture of the incident beam to calibrate the whole area of interest [30]. Since the calibration of our system with the ECM-DP requires the measurement of the Mueller matrix of the four calibration samples, it is impractical to do the calibration with the high-NA objective in place. This is due to the typically short focal length, and even shorter working distance, of high-NA microscope objectives. Thus, the objective was removed during this part of the calibration. A flat auxiliary mirror was placed, just after the calibration samples, normal to the incident light path. After the first pass through the calibration sample, the light was reflected back into the optical system, passing a second time through the calibration sample. This double-pass configuration introduces an ambiguity in the definition of the polarization orientation, and handedness, due to the change in the direction of propagation. However, as discussed in [26], once a reference orientation has been chosen, whether it corresponds to the first pass or the second pass, the effect of the reflection can be included in the system calibration matrices. As an example of the accuracy of the system, Fig. 4 shows the calibrated Mueller matrix for a linear horizontal polarizer [31]. Despite some residual artifacts present in the elements of the Mueller matrix as interference fringes, the calibrated Mueller matrix of the horizontal polarizer is in agreement with its theoretical counterpart. Table 2 contains the mean and standard deviation of each coefficient of the Mueller matrix distribution in Fig. 4 next to its corresponding theoretical matrix. The table shows this comparison for the four samples used in the calibration. The experimental measurements used for Table 2 were not part of the calibration routine; they were independent measurements made after calibration. The exact values of the transmittance, angle of orientation, retardance, and diattenuation of the corresponding theoretical matrices were not known. They were fitted with the average parameters obtained during the ECM [26,28,29]. The difference between the two matrices, for each calibration sample, was found to be below one standard deviation. As an error metric, we subtracted the mean experimental matrices from the theoretical ones and calculated the RMS of the 16 coefficient differences for each sample. This number is also shown in Table 2.
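In general Mueller polarimetry, once the matrices describing the PSG and the PSA are known (here, delivered pixel by pixel by the ECM-DP), the sample Mueller matrix follows from the raw irradiance data through pseudo-inverses. The sketch below states this standard relation; it is not the authors' code, and the per-pixel calibration matrices are assumed to come from the calibration step.

import numpy as np

def mueller_from_calibration(B, A, W):
    # B: (4, 6) measured irradiances for one (binned) pixel, one row per analyzer
    #    channel, one column per incident polarization state.
    # A: (4, 6) matrix whose columns are the Stokes vectors generated by the PSG.
    # W: (4, 4) matrix whose rows are the analyzer vectors of the PSA.
    # Measurement model: B = W @ M @ A, so M is recovered with pseudo-inverses.
    return np.linalg.pinv(W) @ B @ np.linalg.pinv(A)

# Applied pixel by pixel across the binned pupil images:
# M[y, x] = mueller_from_calibration(B[:, :, y, x], A[:, :, y, x], W[:, :, y, x])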
Two sets of interference fringes can be observed in Fig. 4, the calibrated Mueller matrix of the horizontal polarizer. The alignment of the cameras was verified to exclude orientation artifacts as the origin of the fringes. The low-frequency fringes originate in unavoidable back-reflections in the pane that covers the CCD detector in the cameras. The effect of these back-reflections could be reduced, for instance, by using a CCD with no pane; in that case, however, the CCD detector would be exposed to the environment and might easily get dusty. Another alternative is to change the PSA into a sequential Stokes polarimeter with a single CCD, but the measurement of the Stokes vector would then not be simultaneous as in a division-of-amplitude polarimeter. The dichroic linear polarizer at +45° in front of CCD3 is responsible for the high-frequency fringes. To get rid of them, the dichroic polarizer may be substituted by a better-quality crystal polarizer such as a Glan-Taylor.
The objective used in the measurements with our system (Olympus UPLSAPO 100×, oil immersion, NA = 1.4 in n = 1.518 immersion oil) was removed during the calibration with the ECM-DP. However, it is important to quantify the polarization artifacts that might be introduced by the objective. A 0.99-NA concave spherical reference surface (custom made by IC Optical Systems Ltd.) was used to assess the effect of the objective on the polarization. The sphere was made of BK7 glass and had a surface quality of λ/2 over the whole NA. We measured the Mueller matrix of the reference sphere in the following way: when the center of curvature of the reference surface coincides with the focus of the objective, each ray exiting the objective impinges normally upon the surface of the reference sphere and is reflected back along the direction of the incident ray. The ray is only partially reflected because the reference sphere lacks any high-reflectivity coating. Since the reflection on the sphere is normal for each ray, it introduces the same phase shift for all rays. Therefore, any inhomogeneous variation in the polarization state of the reflected light, measured across the exit pupil of the objective, would be introduced by the objective itself. Since it was impractical to fill the reference sphere with immersion oil for the characterization of the objective, a high-NA dry objective (Olympus MPLAPO 100×, NA = 0.95 in air) was used in this part of the calibration. According to the manufacturer, both objectives have similar polarization properties and were made to the same quality standards, and thus the same order of polarization artifacts measured for the dry objective was assumed for the oil immersion objective. This assumption appears to be appropriate according to our experimental results; this point is further discussed in section 5. Figure 5 is the Mueller matrix of the spherical reference surface as measured with our system using the MPLAPO dry objective. The matrix matches the Mueller matrix of free space within one standard deviation. The reduced transmittance is due to the low reflectance of the uncoated reference sphere. This indicates that, in double-pass, the polarization artifacts introduced by the objective are, at least to a first approximation, negligible. Note that the Mueller matrix of the spherical reference surface corresponds to free space instead of a mirror because we chose the first-pass coordinate system to represent our double-pass measurements [26].
Fig. 5. Mueller matrix of the reference sphere used to characterize the polarization artifacts introduced by the high-NA microscope objective. The spot in the lower-left corner of the pupil is due to a scratch on the surface of the sphere produced accidentally during the alignment. Note that the images correspond to a zoom-in on the region of interest, i.e. the exit pupil.
Numerical analysis
For the sake of comparison with our experimental results, and to assess the reliability of the measurements, we modeled the performance of our system using readily available analytical and numerical methods, and applied them to a point-scatterer as the specimen. This case, of course, has no experimental counterpart. However, if the scatterer is much smaller than the wavelength (λ = 532 nm in this case), it is appropriate to consider it as a point-scatterer [32]. The specimen used in the measurements presented herein is an 80 nm diameter gold nano-sphere. This kind of specimen has been modeled before as a point-scatterer, at visible frequencies, with good results [33].
First, the field distribution in the focal region was calculated by direct integration of the Debye-Wolf integrals [1,2]. Then, the scattered field was calculated using the analytical solution for the field radiated by an electric dipole [34], with dipole moment proportional to the incident EM field. Finally, the effect of the collector lens was modeled with the geometrical transformation given by Sheppard and Wilson [35]. Alternatively, the so-called generalized Jones matrices, introduced by Török et al. [9], may be used to model the effect of the collector lens; both are analytically equivalent. The Ex(2) and Ey(2) field components obtained this way were then used to calculate the Stokes parameter distributions across the exit pupil of the collector lens, which are the results reported in the following section, using the definitions given in Eq. (1), where Ek* is the complex conjugate of Ek for k = x, y.
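Assuming the standard definitions of the Stokes parameters in terms of the two transverse field components (the sign convention for S3 varies between texts), Eq. (1) reads, in LaTeX notation:

S_0 = E_x E_x^{*} + E_y E_y^{*}, \qquad S_1 = E_x E_x^{*} - E_y E_y^{*},
S_2 = E_x E_y^{*} + E_y E_x^{*}, \qquad S_3 = i \left( E_x E_y^{*} - E_y E_x^{*} \right).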
Numerical and experimental results for a point-scatterer
An important practical issue in the measurement of the exit pupil polarization state distributions of the light scattered by the gold nano-sphere is positioning the nano-sphere in the focal region of the high-NA objective. The auxiliary system shown in Fig. 2, formed by laser ALS (Spectra Physics He-Ne 196-1, λ = 633 nm), lenses L2-L3 and L6, mirror M6, and camera CCD5, was implemented for this purpose. Laser ALS impinges upon the specimen from one side of the slide with the nano-spheres, rather than from the top or the bottom. This illumination configuration is equivalent to darkfield illumination in the sense that the light collected by the objective corresponds to purely scattered light, with no light coming from a specular reflection or a transmission, see Fig. 6(a). Mirror M6 is in a flip-in mount such that when the mount is in the upright position the light collected by the high-NA objective is deviated towards L6, which then forms an image of the specimen on camera CCD5. When the flip-in mount is lying down, the light follows its usual path in the vectorial polarimeter to form an image of the exit pupil in cameras CCD1-CCD4. Before presenting the results obtained for the gold nano-sphere, it is convenient to describe briefly the preparation of the specimen. A drop of 80 nm gold nano-sphere solution was put on a clean cover glass and spun to spread the nano-spheres across its surface. The cover glass was then placed over a microscope slide with a drop of immersion oil between them to stick them together with an index-matching medium. Next, an immersion oil drop was put over the cover glass with the gold nano-spheres, and it was covered with a clean cover glass. The prepared specimen was then placed on a piezo nano-stage (PiezoSystems Jena, Tritor100SG) and positioned under the microscope objective for observation. Figures 6(a) and 6(b) are a diagram of the specimen prepared for positioning with the auxiliary darkfield system and the corresponding image obtained with CCD5, respectively. In this configuration, the nano-spheres act as point-like sources, and CCD5 shows the PSF of the auxiliary positioning system.
Although the image of the specimen on CCD5 was useful to find a gold nano-sphere and position it within the focal region of the high-NA objective, a fine adjustment was necessary to guarantee that the nano-sphere was on the optical axis. This adjustment was done by maximizing the irradiance distribution in the image of the exit pupil on one of the cameras of the system.
Fig. 7. Experimental Mueller matrix of the on-axis 80 nm gold nano-sphere measured with our system.
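A minimal sketch of the fine-adjustment step just described: the stage is scanned around the coarse position and the location that maximizes the total pupil irradiance is kept. The functions move_stage and grab_pupil_image are hypothetical placeholders for the nano-stage and camera interfaces, not part of any published API.

import numpy as np

def move_stage(x_um, y_um):
    pass                                 # placeholder for the piezo nano-stage interface

def grab_pupil_image():
    return np.zeros((192, 256))          # placeholder for a (binned) pupil frame

def fine_center(x0, y0, span_um=0.2, steps=9):
    # Grid search that keeps the stage position maximizing the summed pupil irradiance
    best_val, best_xy = -np.inf, (x0, y0)
    for x in np.linspace(x0 - span_um, x0 + span_um, steps):
        for y in np.linspace(y0 - span_um, y0 + span_um, steps):
            move_stage(x, y)
            total = grab_pupil_image().sum()
            if total > best_val:
                best_val, best_xy = total, (x, y)
    move_stage(*best_xy)
    return best_xy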
Our system measures complete scattering-angle-resolved Mueller matrices. Figure 7 shows the Mueller matrix distribution measured with our system for an on-axis 80 nm gold nano-sphere. The result shown here is the average of 5 frames on each CCD, for each incident polarization state, but no other digital smoothing filter or post-processing was applied. Each exposure was approximately 2 s in duration. The current limitations in our system's speed originate in the specifications of the equipment currently used. The measuring method itself is not limited to static specimens. In an application with particles in motion, for instance, much faster CCDs can be employed in combination with high-speed hardware data processing to adapt the system to the requirements of the application. A large amount of polarization information is contained in this experimental result. However, in the following subsections, to simplify the presentation of the results, we limit our discussion to Stokes parameter distributions that can be extracted from these experimental Mueller matrices. Thus, we present the Stokes parameter distributions in the exit pupil of the collector lens for three different positions of the gold nano-sphere in the focal plane along the x-axis (see Fig. 8): on axis, +λ/3, and −λ/3. The length of the off-axis displacements was chosen arbitrarily, with the only condition that it be a sub-resolution distance. The shortest measurable displacement of the gold nano-sphere is currently being explored in our laboratory. We show results for two homogeneous incident polarization states before focusing, namely: linear horizontal and left-circular.
Fig. 8. Diagram of the focal plane with marks at the three positions of the point-scatterer (gold nano-sphere) analyzed. The origin of the xy coordinate system corresponds to the on-axis position. The z-axis (not shown) is the optical axis of our system.
Incident light linearly polarized
Figure 9(a) shows the experimental results obtained for the on-axis gold nano-sphere with incident light horizontally polarized [36]. Figure 9(b) shows the corresponding numerical results for a point-scatterer under the same oil immersion and illumination conditions. These results are normalized with respect to the maximum S0x. The basic structure of the experimental polarization state distribution is clear and in agreement with the results presented in Fig. 9(b), despite the noise in the experimental results and the slight horizontal asymmetry of S0x and S1x, which is not present in the numerical results. The noise in the experimental results is consistent with the noise in the Mueller matrix of the calibration samples. For instance, the distribution of S3x is the same as the spurious interference fringes discussed in section 3; thus, this distribution does not contain information about the specimen under observation but about the current imperfections of our optical system.
Figures 10(a) and 11(a) show the experimentally determined Stokes parameter distributions in the exit pupil for an off-axis point-scatterer located in the focal plane at x = −λ/3 and x = +λ/3, respectively, with incident light linearly polarized in the horizontal direction. The corresponding numerical results are shown in Figs. 10(b) and 11(b). These figures show that, despite noise in the measurements, the general form of the measured distributions is the same as in the numerical calculations. Furthermore, the mirror symmetry, along the horizontal direction, predicted for S3x between the x = −λ/3 and x = +λ/3 positions of the scatterer is clearly present in the experimental results. Our results show that it is possible to determine whether the sub-resolution scatterer has undergone a sub-resolution displacement, as well as the direction of the sub-resolution displacement, by analyzing the polarization state distribution in the exit pupil of the high-NA collector lens.
Incident light circularly polarized
Once more, the basic structure of the Stokes parameter distribution in the exit pupil of the collector lens is apparent and in agreement with the numerical results. The noise in Fig. 12(a) is also consistent with the expected residual polarization artifacts that remain after the calibration.
As shown in the results presented in subsection 5.1, the sensitivity to the direction of the sub-resolution displacement of the point-scatterer is limited to S3x in the case of incident light linearly polarized. The other three Stokes parameters exhibit the same distribution independently of the direction of the displacement. However, as will be explained below, the sensitivity of our system depends on the electric field distribution in the focal region. The use of incident light circularly polarized allows us to achieve high sensitivity in any of the four Stokes parameters. Figures 13(a) and 14(a) show the experimental results obtained for circularly polarized light. The corresponding numerical results are shown in Figs. 13(b) and 14(b). The agreement between numerical and experimental results is clear in most of the Stokes parameters. The element with the worst agreement with its numerical counterpart, especially in the results for the gold nano-sphere at x = −λ/3, is S2r. Thus, due to the relatively high noise level in the experimentally determined S2r, no conclusive remarks based on this element can be made concerning the displacement of the gold nano-sphere. Nevertheless, as can be seen in Figs. 13(a) and 14(a), the direction of the sub-resolution displacement of the gold nano-sphere is encoded in the other three Stokes parameters, as predicted by the numerical calculations.
Discussion
We presented an instrument that measures the scattering-angle-resolved Mueller matrix across the exit pupil of a high-NA collector lens as a means to analyze the three-dimensional field resulting from the interaction of a three-dimensional focused field and a sub-resolution specimen. The instrument is a scattering-angle-resolved Mueller matrix polarimeter that allows for high sensitivity to sub-resolution displacements of a sub-resolution scatterer. First experimental evidence showing the high sensitivity of the system, together with the corresponding numerical calculations, has been provided. Our experimental results agree with the numerical calculations performed to assess the feasibility of our system. The experimental distributions of the Stokes parameters on the back focal plane of a high-NA system correspond to our numerical predictions, which use established methods. As in any experimental system, our measurements contain noise and errors. The accuracy of our measurements was verified by measuring known samples in double-pass, and the results were always within 3% of the largest Mueller matrix coefficient (the transmittance) of the mean value of the pupil distribution. Light scattered by an 80 nm gold nano-sphere was measured successfully with the calibrated system.
Our experimental results show that the residual polarization artifacts are caused by the interference fringes discussed in section 3. We showed that our assumption of no polarization artifacts introduced by the objective is, to a first approximation, appropriate within the limitations of our experimental setup. Modifications to reduce the error in the measurements may be required to improve the reliability of the system when applied to small scatterers. One possibility is to replace the current simultaneous PSA with a single-CCD system with sequential polarization analyzers. The low-contrast interference fringes in the pupil images would then become common to all polarization signals, provided the sample does not move between measurements. In this way, we could apply a post-processing filter to reduce the effect of these fringes. We are aware that a thorough error analysis of the system we present is required, but this falls beyond the scope of this work. We have used robust calibration methods and are confident that our results are reliable, at least to a first approximation. This was verified by our numerical results.
We have analyzed numerical values of the components of the electric field in the focal region of the high-NA focusing lens. Our preliminary study revealed that, for incident light linearly polarized in the horizontal direction, the y component of the electric field is zero along the x-axis, whereas the x component has the same value at both off-axis positions of the scatterer. The value of the z component at x = −λ/3, on the other hand, is the negative of its value at x = +λ/3. Similarly, for the case of incident light circularly polarized, the x and y components of the electric field are equal, and different from zero, for both off-axis positions. For circularly polarized light the z component at x = −λ/3 is also the negative of its value at x = +λ/3. It seems that the asymmetry in the focused field distribution associated with the z component is the key to differentiating between the two off-axis positions.
In its current state, our method does not provide a quantitative measure of the magnitude of the displacement undergone by the sub-resolution scatterer, although it provides information on the direction of the displacement. The definition of a suitable metric, with an appropriate threshold, based on the statistical properties of the polarization distribution across the exit pupil of the collector lens is work in progress, and we intend to report on it at a future stage.
Fig. 1. Schematic diagram of the working principle of our system. The longitudinal component of the scattered field, Ez(s), is combined with the transversal components by the collector lens. The crosses indicate sample points in the exit pupil where the polarization state is analyzed.
Fig. 4. Calibrated Mueller matrix of the horizontal polarizer, within the aperture of the incident beam, as obtained with our system.
Fig. 6. (a) Diagram showing the preparation of the specimen and positioning with the auxiliary system; the specimen is being illuminated by the auxiliary laser, ALS. (b) Corresponding image of the nano-spheres as obtained by CCD5 with the flip-in mount mirror in the upright position.
Fig. 9. (a) Experimentally obtained Stokes parameter distributions in the exit pupil of the collector lens for an on-axis gold nano-sphere with incident light linearly polarized in the horizontal direction. (b) Corresponding numerical results for a point-scatterer. The Stokes parameters are normalized with respect to the maximum S0x.
Fig. 10. (a) Experimentally obtained Stokes parameters in the exit pupil of the collector lens for a gold nano-sphere in the focal plane at x = −λ/3 with incident light linearly polarized in the horizontal direction. (b) Corresponding numerical results for a point-scatterer. The Stokes parameters are normalized with respect to the maximum S0x.
Fig. 11. (a) Experimentally obtained Stokes parameters in the exit pupil of the collector lens for a gold nano-sphere in the focal plane at x = +λ/3 with incident light linearly polarized in the horizontal direction. (b) Corresponding numerical results for a point-scatterer. The Stokes parameters are normalized with respect to the maximum S0x.
Fig. 12. (a) Experimentally obtained Stokes parameter distributions in the exit pupil of the collector lens for an on-axis gold nano-sphere with incident light circularly polarized to the left. (b) Corresponding numerical results for a point-scatterer. The Stokes parameters are normalized with respect to the maximum S0r.
Fig. 13. (a) Experimentally obtained Stokes parameters in the exit pupil of the collector lens for a gold nano-sphere in the focal plane at x = −λ/3 with incident light circularly polarized to the left. (b) Corresponding numerical results for a point-scatterer. The Stokes parameters are normalized with respect to the maximum S0r.
Fig. 14. (a) Experimentally obtained Stokes parameters in the exit pupil of the collector lens for a gold nano-sphere in the focal plane at x = +λ/3 with incident light circularly polarized to the left. (b) Corresponding numerical results for a point-scatterer. The Stokes parameters are normalized with respect to the maximum S0r. | 9,592.8 | 2010-03-15T00:00:00.000 | [
"Physics"
] |
The Mechanism of Signal Processing of Solar Radio Burst Data in E-CALLISTO Network (Malaysia)
Solar space weather events like Coronal Mass Ejections (CMEs) and solar flares are usually accompanied by solar radio bursts, which can be used for low-cost real-time space weather monitoring. To provide a standard system, CALLISTO (Compound Astronomical Low-cost Low-frequency Instrument for Spectroscopy in Transportable Observatory) spectrometers, designed and built by electronics engineer Christian Monstein of the Institute for Astronomy of the Swiss Federal Institute of Technology Zurich (ETH Zurich), have been deployed all over the world since 2005 to monitor solar activities such as solar flares and CMEs. To date, there are 25 sites using the same system in order to monitor the Sun around the clock. This project is also part of the International Heliophysical Year (IHY2007), initiated by the United Nations together with NASA to support developing countries participating in 'Western Science'. Beginning in February 2012, Malaysia has also participated in this project. The goal of this work is to highlight how solar radio burst data are processed and transferred from the site at the National Space Centre, Banting, Selangor, directly to the Institute of Astrophysics, Switzerland. Solar activity in the low-frequency region, from 150 MHz to 400 MHz, is observed daily from 00:30 UT to 12:30 UT. Here, we describe how the signal processing works in order to make sure that the operation is in the best condition. Although solar activity has grown rapidly in recent years, the high-level management of the CALLISTO system has continued to handle data storage successfully. Maintaining future data will also not be easy, since the number of sites keeps growing. In this work, we highlight the potential role of Malaysia as a candidate site capable of providing good data, focusing on a few aspects such as optimization, performance evaluation, and data visualization.
INTRODUCTION
Space weather is defined as the study of the conditions in the solar wind that can affect life on the surface of the Earth, particularly the increasingly sophisticated technological devices that are part of modern life [1]. Solar space weather events like Coronal Mass Ejections and solar flares are usually accompanied by solar radio bursts, which can be used for low-cost real-time space weather monitoring [2]. There are many reviews of the history of these phenomena, specifically by Kundu [3] and McLean and Labrum [4] for observational reviews, Zheleznyakov [5], Melrose [6] and Benz [7] for the theory of radio emission in these bursts, and Pick [8] and Schwenn [9] for the relationship with space weather. Over periods of increased solar activity, several Coronal Mass Ejections (CMEs) can be launched by the same or nearby active regions [10]. It cannot be denied that solar activities play a crucial role in global circulation and climate changes. For this reason, we focus on solar radio monitoring in the low-frequency region (45 MHz to 870 MHz). However, most solar radio flare signals are characterized by complex variability patterns, including both non-stationarities and nonlinearities. In 2007, the United Nations together with NASA initiated the International Heliophysical Year IHY2007 to support developing countries participating in 'Western Science'. All activities of IHY2007 were closed in 2009 by the UN office in Vienna, but immediately continued in the new project ISWI (International Space Weather Initiative), again supported by the UN and NASA. During the past years, many CALLISTO spectrometers have been deployed and set into operation in different countries all over the world [11]. These projects aim to achieve two purposes: firstly, the scientific objective of 24h/7d coverage of solar observations; and, secondly, the political objective of supporting developing countries. Currently, more than 20 instruments are installed, of which about 11 regularly supply data to the central archive in Switzerland. However, there are still some gaps in the Pacific region during the winter season.
In order to provide a standard system, CALLISTO (Compound Astronomical Low-cost Low-frequency Instrument for Spectroscopy in Transportable Observatory) spectrometers, designed and built by electronics engineer Christian Monstein of the Institute for Astronomy of the Swiss Federal Institute of Technology Zurich (ETH Zurich), have been deployed all over the world since 2005 to monitor solar activities such as solar flares and Coronal Mass Ejections (CMEs). Initiated by the Institute of Astrophysics, Switzerland, 25 sites use the same system in order to monitor the Sun around the clock. This system is particularly useful for studying the large explosions in the Sun's atmosphere known as solar flares. The radio emissions from these events are important for understanding the dynamics of the solar corona. This project is also part of the International Heliophysical Year IHY2007, initiated by the United Nations together with NASA to support developing countries participating in 'Western Science'. Beginning in February 2012, Malaysia has also participated in this project [12]. The goal of this work is to highlight how solar radio burst data are processed and transferred from the site at the National Space Centre, Banting, Selangor, directly to the Institute of Astrophysics, Switzerland. Within this project, how to transfer and manage the data becomes an important consideration. In this case, signal processing, which deals with operations on and analysis of signals, i.e. measurements of time-varying or spatially varying physical quantities, is required. We are interested in signals that can include sound and images, depending on the specific frequency range. We are also concerned with noise from RFI sources, and continuous analysis has been carried out [13]. In general, the frequency allocations for radio astronomy are 13.36-13.41 MHz (solar), 25.55-25.67 MHz (Jupiter), and 37.50-38.25 MHz [14].
THE RADIO FREQUENCY INTERFERENCE MONITORING AND THE CALLISTO SYSTEM
Malaysia has been involved in this research since February 2012 and routinely monitors the Sun 12 hours per day at the National Space Centre, Selangor [15]. We started by proposing this research in early 2011, through the National Space Agency of Malaysia (ANGKASA), University of Malaya (UM), MARA University of Technology (UiTM) and National University of Malaysia (UKM) [16]. Since we are in an equatorial country, the temperature ranges from 26 to 31 °C throughout the year. Generally, the signal from the log periodic dipole antenna is directed to the CALLISTO (Compound Astronomical Low-cost Low-frequency Instrument for Spectroscopy in Transportable Observatory) spectrometer, which is housed in a steel case, via a low-loss coaxial cable, LMR-240 [17]. As at other sites, the instrument holders have set up an FTP server at their host site with a local archive. All data are stored with a scale factor and an offset applied so that the measured ADC digits fit into the byte data range (0-255) [18]. Figure 1 shows the location of our site and the instrumentation setup of the CALLISTO system. One observed burst was associated with a solar flare of type M, level 2.0, that occurred from 04:12 UT; strong bursts of this kind, caused by exceptional solar flares driven by magnetic reconnection, can potentially be induced in the near-Earth magnetotail. Once per day a cron job (Perl) contacts the instrument holder, reads the directory, selects files of interest and uploads them to the main server in Switzerland. Data are saved over the period of daylight in Universal Time (UT) and compiled into files every 15 minutes. All data can be accessed from the e-CALLISTO website initiated by the Institute of Astrophysics, Switzerland. These observational data are limited to frequency ranges with minimal interference. In order to keep only data with a high probability of containing solar radio flares, a filtering method is also applied from time to time. These data can also be compared with the regularly updated National Oceanic and Atmospheric Administration (NOAA) event list. The data archive allows up to 10 terabytes of FIT files to be stored. The archive is physically located at FHNW (Fachhochschule Nordwestschweiz) and managed from ETH (Eidgenössische Technische Hochschule) in Zurich. A few software tools help to transfer the data; for example, FTP-Watchdog can replace the function of an FTP server so that the data are transferred automatically and organized according to the timing of the solar activities. For basic data reduction, a Java tool can be used to subtract the background noise from the data. All data can be analyzed in detail with the IDL (Interactive Data Language) program.
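A minimal sketch of the kind of basic data reduction described above, written in Python with astropy rather than the Java tool mentioned in the text. The file name is hypothetical, and the sketch assumes that the dynamic spectrum (frequency by time, in ADC digits 0-255) sits in the primary HDU of the FIT file, as is typical for e-CALLISTO data.

import numpy as np
from astropy.io import fits

with fits.open("BANTING_20120301_043000_59.fit") as hdul:   # hypothetical file name
    digits = hdul[0].data.astype(float)                     # shape: (channels, time bins)

# Estimate the quiet background per frequency channel and subtract it to expose bursts
background = np.median(digits, axis=1, keepdims=True)
cleaned = digits - background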
Solar and RFI monitoring should be carried out at the same time [19]. So far, we have also investigated the RFI at different altitudes [20]. Our site became the 19th site to participate in solar burst monitoring, and an observation of Radio Frequency Interference (RFI) has been carried out [21]. It should be noted that the Earth environment has a close connection with solar activity [22]. Therefore, it is very important to study the solar radio burst data associated with solar flares and Coronal Mass Ejections (CMEs) [23-25]. These phenomena are due to the behaviour of sunspots or active regions [26]. Solar activity indirectly affects the conditions of the Earth's climate and space weather [27,28].
RESULTS AND ANALYSIS
Solar activity in the low-frequency region, from 150 MHz to 400 MHz, is observed daily from 00:30 UT to 12:30 UT. Figure 3 shows the coverage of solar monitoring in Malaysia and the total coverage of the e-CALLISTO network.
To date, this project has achieved more than 90 percent of its target of monitoring the Sun around the clock. Malaysia also contributes almost 50 percent of the daily data. In fact, we are the second earliest site, after Alaska, to monitor solar activity every day. Moreover, we are also one of the equatorial sites that can possibly give an early indication of solar variability. So far, we have been able to contribute good data; we found that we can detect type III solar bursts almost every day when there is an active region. We hope to improve the system in a few aspects, such as amplification of the signal, the storage of the data and also the technical part of the antenna. Figure 4 illustrates the schematic diagram of the mechanism of signal processing of solar radio burst data in the e-CALLISTO network.
CONCLUDING REMARKS
Although solar activity has grown rapidly in recent years, the high-level management of the CALLISTO system has continued to handle data storage successfully. Further actions need to be taken to show that solar monitoring studies are very important and should be extended. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in this system in order to improve its management. It will also not be easy to maintain future data, since the number of sites keeps growing. In this work, we highlighted the potential role of Malaysia as one of the candidate sites capable of providing good data, focusing on a few aspects such as optimization, performance evaluation, and data visualization. Our next plan is to explore the potential for dynamic optimization of an antenna array, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as the type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presented here will be sufficient to offset the small additional cost of a sensing system, and that such systems will become common in the near future.
Figure 4 (schematic labels): CALLISTO Spectrometer; Observation PC / Data Acquisition Server; Data server. | 2,730 | 2014-05-30T00:00:00.000 | [
"Engineering",
"Physics",
"Environmental Science",
"Computer Science"
] |
Transposition-Based Method for the Rapid Generation of Gene-Targeting Vectors to Produce Cre/Flp-Modifiable Conditional Knock-Out Mice
Conditional gene targeting strategies are increasingly used to study gene function tissue-specifically and/or at a defined time period. Instrumental to all of these strategies is the generation of targeting vectors, and any methodology that would streamline the procedure would be highly beneficial. We describe a comprehensive transposition-based strategy to produce gene-targeting vectors for the generation of mouse conditional alleles. The system employs a universal cloning vector and two custom-designed mini-Mu transposons. It produces targeting constructions directly from BAC clones, and the alleles generated are modifiable by Cre and Flp recombinases. We demonstrate the applicability of the methodology by modifying two mouse genes, Cdh22 and Drapc1. This straightforward strategy should be readily suitable for high-throughput targeting vector production.
Introduction
In mouse, conditional gene knockout (cko) strategies have provided means to study the effects of gene inactivation in a tissue-specific manner and/or at a defined time period [1]. The most widely applied strategies for conditional mutagenesis take advantage of site-specific recombinases; particularly the Cre/loxP system of bacteriophage P1 [2] has been used extensively for this purpose [3].
Engineering the targeting vector construction is arguably one of the most time-consuming steps in cko strategies. Conventional methods for the construction of cko vectors include finding appropriate restriction enzyme cleavage sites in the genome, and several cloning steps to insert loxP sites as well as positive and negative selection markers into the construction. To complement the conventional methods, alternative strategies have recently been developed to construct cko targeting vectors. These strategies employ PCR in combination with homologous recombination (recombineering) [4-6], or they are based on the utilization of in vitro reactions of several DNA transposition systems, such as Ty1 [7], phage Mu [8-10], or Tn5/Tn7 [11]. Although the currently available cko vector construction strategies are adequate for many transgenic projects, a general strategy that would further streamline the vector engineering procedure would be highly beneficial. In addition, the strategy should preferentially incorporate two commonly used site-specific recombination systems, Cre/loxP [2] and Flp/FRT [12], to allow versatile possibilities for the removal of selection cassettes in mouse ES cells or animals [3].
The Mu DNA transposition reaction is one of the best-characterized transposition reactions [13], and a minimal version of it can be reproduced under in vitro conditions using MuA transposase, transposon DNA, and target DNA as the only macromolecular components [14]. Importantly, the relatively random transposon insertion spectrum allows near-saturating mutagenesis, whereby insertions can be targeted to almost every residue in the target sequence [15]. The minimal in vitro reaction has recently been used in a variety of molecular biology, protein engineering, and genomics applications [16-24]. We have shown earlier that Mu in vitro transposition can be used to produce several types of mouse gene targeting constructions, including those generating null, hypomorphic, or conditional alleles [8]. This proof-of-principle study, employing mouse DNA subcloned in a plasmid vector, established the basic methodology and prompted us to examine possibilities for ever more advanced utilization of the technology.
Here, we describe a highly efficient Mu in vitro transposition-based approach to generate cko targeting vectors directly from BAC clones. By targeting the mouse Cdh22 [25] and Drapc1 [26] loci, we show that the strategy provides a straightforward means to produce conditional alleles.
Results
MuA transposase catalyzes transposon integration into any DNA in an in vitro reaction and produces an essentially random distribution of transposon insertions along the target sequence [14]. We describe a strategy for the generation of cko targeting plasmids by employing two successive Mu transposition reactions to introduce a loxP site on one side of the exon of interest and a marker gene flanked by loxP sites and FRT sites on the other side of that exon. Initially, we constructed a universal cloning vector (pHTH22) suitable for essentially any cko project in mouse (Figure 1A). For negative selection in mouse ES cells [27], it contains the HSV TK gene flanked by the mouse PGK promoter and terminator. For cloning and linearization, it contains an array of unique restriction sites (Figure 1B), including several 8-cutter sites, and a homing endonuclease site (PI-SceI). We also constructed two selectable marker-containing mini-Mu transposons (Figure 1C). The first transposon, Kan/Neo-loxP-Mu, contains a bacterial promoter, the SV40 early promoter, the Kan/Neo resistance-encoding gene from Tn5, and the polyadenylation signal from the HSV TK gene. This resistance cassette is flanked by two loxP direct repeats and embedded between two 50 bp inverted-repeat Mu R-end sequences. The second transposon, Kan/Neo-loxP-FRT-Mu, contains two additional FRT sites as direct repeats. The functionality of the transposons, particularly the Mu R-ends, was confirmed by in vitro transposition into an external target plasmid (data not shown). The functionality of the Cre/loxP and Flp/FRT recombination systems was tested by introducing appropriate transposon-containing plasmids into E. coli strains expressing Cre or Flp recombinase. As a result, all expected deletion derivatives of the plasmids were formed, verifying the functionality of the two recombination systems (data not shown).
Targeting vector for mouse Cdh22 gene
We applied the devised strategy (Figure 2) to generate a conditional Cdh22 gene (MGI:1341843) knockout allele. As the initial transposition reaction target DNA, we used a KpnI digest of the identified BAC clone (110 kb), which contained exon 3 of the mouse Cdh22 gene in a 9 kb fragment. As the donor DNA, we used the Kan/Neo-loxP-Mu transposon, containing the selectable Kan/Neo gene cassette between two loxP sites. Using selection for ampicillin and kanamycin resistance, transposition reaction products were cloned into the targeting vector as a pool, generating a plasmid library of ~3,100 AmpR/KanR clones. We pooled 300 transformant clones in six pools, each containing 50 colonies, isolated plasmid DNA from these pooled samples, and screened the DNA by PCR for suitably located transposon insertions (Figure 3A). Three of these pools produced a prominent PCR product, the length of which fell within the desired size range (100-1,000 bp). Original clones from these three pools were then analyzed individually by colony PCR. Three clones, one from each pool, generated a PCR product that matched in size those products observed with pooled samples, indicating the presence of the transposon in the vicinity of exon 3 in these three clones.
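A rough bookkeeping of the pooled PCR screen described above, as a simple Python illustration; the numbers are those quoted in the text, and the worst case assumes that every clone in a positive pool is screened individually.

clones_screened = 300
pool_size = 50
pools = clones_screened // pool_size              # 6 pooled plasmid preparations
positive_pools = 3                                # pools giving a product of the desired size

pooled_pcr_worst_case = pools + positive_pools * pool_size   # pool PCRs + colony PCRs = 156
individual_pcr = clones_screened                              # naive clone-by-clone screen = 300
print(pooled_pcr_worst_case, individual_pcr)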
Most of the transposon DNA was then eliminated from these three integrant clones by introducing the plasmids into the E. coli strain expressing Cre recombinase. Site-specific recombination in vivo removed the selection cassette, leaving behind the Mu R-ends and one loxP site; this was confirmed by sequencing. Sequencing also identified the exact location of the insertion in the three selected plasmid clones: 211, 368, and 462 bp upstream of exon 3.
Next, the second transposon, Kan/Neo-loxP-FRT-Mu, with two additional FRT sites (a "floxed" and "flrted" Kan/Neo gene cassette), was integrated in vitro into the construct that contained remnants of the first transposon 368 bp from exon 3. In this case, we first assembled DNA transposition complexes by incubating the transposon with MuA transposase, then included the target DNA, followed by the addition of Mg2+ ions.
The assembly of Mu transposition complexes is a slow process when compared to target capture and strand transfer steps of transposition. Thus, the use of pre-assembled complexes with a short incubation time thereafter prevents the target DNA from acting as a transposon, a formal possibility, as the target plasmid contains two Mu R-ends left over from the first integrated transposon. Transposition reaction products were then electroporated into E. coli DH10B cells, and proper clones were selected on plates containing ampicillin and kanamycin. We screened 80 transformants by colony PCR (Figure 3A), and six of the clones apparently contained a suitably located transposon, as indicated by a PCR product of appropriate size. One clone was selected, and subsequent sequence analysis confirmed a correct orientation and location of the second transposon 1,618 bp downstream from the exon 3.
Targeting vector for mouse Drapc1 gene

As another example, we generated a conditional knockout allele for the Drapc1 gene (MGI:3513977). In the initial transposition reaction, we used an NsiI restriction digest of the BAC clone containing exons 2 and 3 as the target DNA, and the Kan/Neo-loxP-Mu transposon as the donor. The desired NsiI fragment was 15.1 kb in length, containing exons 2 and 3 with an intervening 2.8 kb intron, a 5.2 kb upstream region, and a 6.6 kb downstream region. The reaction products were cloned into the targeting vector as a pool, generating a plasmid library of ~6,700 Amp^R/Kan^R transformants. The library was divided into forty pools (~170 colonies per pool), and these pools were screened by PCR (Figure 3B). In this case, two of the pools generated a PCR product of appropriate size. Next, individual clones from one of the identified pools were analyzed by colony PCR for the respective transposon integration. A proper candidate clone was found; sequencing verified the transposon orientation and confirmed its location 1,220 bp upstream of exon 2. Most of the transposon was again removed in the Cre-expressing E. coli strain. Next, the other transposon, Kan/Neo-loxP-FRT-Mu, was introduced by in vitro transposition as described for Cdh22. A total of 150 transformants were analyzed for PCR products as pools (Figure 3B), each pool containing ten transformants, and one of the pools generated a PCR product of appropriate size. The colonies from this pool were then analyzed individually by colony PCR, and a colony was identified that evidently was responsible for the abovementioned PCR product. Sequencing of this clone confirmed the correct orientation of the transposon and identified its exact location 964 bp downstream of exon 3.
Functionality of the loxP and FRT sites
To verify experimentally the orientation and functionality of the loxP and FRT sites, the targeting constructs for Cdh22 and Drapc1 were introduced into E. coli strains 294-Cre and 294-Flp. These strains thermoinducibly express Cre and Flp recombinases, respectively, and can be used to monitor the recombination proficiency of plasmids [28]. In all four cases, site-specific recombination proceeded as expected (Figure 4), verifying the authenticity of the constructions.
Gene targeting in mouse ES cells
The final constructions for Cdh22 and Drapc1 were linearized with NotI and SacII, respectively, and subsequently electrotransfected into mouse ES cells, selecting for the marker residing within the transposon (G418) and for the loss of the TK marker (ganciclovir). The selected ES clones were screened by Southern analysis with appropriate 5′ and 3′ probes to verify correct targeting. Six correctly targeted clones were identified for the Cdh22 gene and four for the Drapc1 gene, representing targeting efficiencies of 5.4 and 5.7%, respectively. Figures 5 and 6 show the restriction maps and Southern analyses of the wild-type as well as the mutated Cdh22^cond and Drapc1^cond alleles. The correctly targeted ES cell clones were used to generate mouse chimeras, which transmitted the targeted Cdh22^cond and Drapc1^cond alleles through the germ line (Figures 7 and 8). To demonstrate recombination between the loxP sites and generate a null allele of Cdh22, mice carrying the Cdh22^cond allele were crossed with mice carrying the PGK-Cre transgene. In the double heterozygous (Cdh22^cond/+; PGK-Cre/+) offspring, the Cdh22^cond allele was efficiently converted by Cre-mediated recombination to the Cdh22^del allele, in which the sequence between the loxP sites was deleted (Figure 7). Similarly, the Drapc1^del allele was produced (Figure 8).

Figure 2. Flowchart for the construction of conditional knockout vectors. Part A. (i) An entire BAC clone is initially digested with an appropriate restriction endonuclease, and (ii) the ensuing fragment pool is then used as a target for the first transposition reaction. (iii) The fragment pool is subsequently ligated into a suitable vector plasmid, and those clones that include transposon-containing BAC fragments are selected using both the transposon marker and the vector marker. Next, (iv) a plasmid clone that contains a transposon insertion in a desired location is identified by a PCR screen with appropriate primers, one transposon-specific and the other target-specific. (v) The chosen plasmid is then introduced into an E. coli strain that expresses Cre recombinase. The selectable marker is eliminated by Cre recombination in vivo, leaving a single loxP site in the construction. Part B. The second transposition reaction (vi), using pre-assembled transposition complexes (vii), introduces into the construction a marker gene that is flanked by loxP and FRT sites. (viii) As above, a suitable clone is screened by PCR to identify a plasmid that contains the second transposon inserted in a suitable location and orientation on the opposite side (relative to the first transposon) of the exon of interest. Genomic DNA is shown with a black line, and the orange rectangle denotes an exon. The transposons are shown with black bars featuring the Kan/Neo cassette (green arrow), loxP sites (white triangles), FRT sites (white pentagons), and transposon ends (black rectangles). doi:10.1371/journal.pone.0004341.g002
Discussion
We present a new strategy to produce gene targeting vectors for the generation of conditional knockout mutations in the mouse. To validate the methodology, we used the strategy to construct targeting vectors for the Cdh22 and Drapc1 genes. These vectors were used for gene targeting in mouse ES cells, and the resulting conditional alleles were transmitted through the germ line. The strategy includes a cloning vector with several desirable characteristics. For selection against random integration in mouse ES cells, it contains the HSV TK gene. The array of multiple restriction endonuclease sites provides versatile possibilities for the choice of the genomic restriction fragments utilized and for the final linearization procedure prior to delivery into ES cells. In addition, a homing endonuclease site is included for linearization and can be used if none of the restriction enzyme sites is suitable for that purpose. The multicopy nature of the vector ensures convenient plasmid production.
The strategy also includes two transposons, both of which contain an antibiotic-resistance cassette (Kan/Neo) that is selectable both in bacteria and in mammalian cells and removable by Cre recombination. One of the transposons also contains a pair of FRT sites, enabling excision of the resistance cassette by Flp-mediated recombination. Since the first report of the phage Cre/loxP system functioning in mammalian cells [29], both the Cre/loxP system and, later, the yeast Flp/FRT system have been shown to be effective in removing selectable markers in several types of mouse cells, including ES cells [30][31][32][33]. As the presence of a selectable marker within the modified locus can potentially influence the expression of the targeted gene [34,35], a marker removal system is essential in any gene targeting strategy. In the described system, the most straightforward option to remove the positive selection cassette is the utilization of Flp recombination, as only two FRT sites are present in the modified locus. Although Cre recombination may also be used for marker removal, the presence of three loxP sites necessitates a somewhat more meticulous screening phase. Marker removal in this system leaves behind a DNA segment that contains a pair of 50 bp Mu R-end sequences as an inverted repeat and an intervening sequence of 67 bp (Kan/Neo-loxP-Mu) or ~120 bp (Kan/Neo-loxP-FRT-Mu) including the recombination signal(s). As these segments are relatively short, and they do not contain detectable splicing signals or polyadenylation sites, their presence in an intron is expected to have no or negligible effects on the level of gene expression from the modified loci. However, a degree of caution in this respect is warranted, as unpredictable locus-specific differences may exist. We have shown that the marker-removed configurations are stable in plasmids, as no secondary rearrangements were observed in plasmids isolated from recombinase-expressing E. coli strains (Figure 4). We have also shown that Cre-recombined alleles are stable in the mouse, as clear signals of stable loci were seen in the Southern analyses of the deletion allele mice (Figures 7 and 8). Given that Flp-recombined configurations are stable in plasmids (Figure 4), and because the respective alleles in mice would be very similar to those rearranged by Cre, we believe that Flp-recombined alleles are stable in mice, although this has not been verified experimentally.
In general, the use of DNA transposition strategies alleviates the requirement of finding appropriate restriction sites close to an exon of interest. In addition, with transposon strategies several constructions aimed at targeting different exons can be generated simultaneously. In Mu transposition, the target DNA sites are selected with a low stringency of sequence specificity [15], yielding essentially random distribution of integrations along longer regions of DNA [14,16]. Accordingly, the target site selection of Mu transposition is optimally suited for the purpose established in this study.
In contrast to several published transposon protocols [6,10,11], the manipulation in our strategy is initiated directly from a BAC clone, and no prior subcloning of a particular restriction fragment is needed. During the transposon-assisted cloning step, a convenient double selection for the transposon (Kan^R) and vector (Amp^R) markers ensures that all cloned fragments contain the transposon. The probability of obtaining plasmid clones with two or more transposon insertions is very low because, in the reaction conditions used, most of the target DNA fragments do not experience even a single transposon integration [14,16]. Nevertheless, because double integrations would be detrimental to the strategy, appropriate restriction analysis, or preferably sequencing with transposon-specific outward-reading primers, should be used to ensure single-copy transposon integration. As illustrated in our study, sequencing also indicates the exact site of the transposon integration.
Following the initial transposon-assisted cloning step, the most convenient means to identify transposon-containing recombinant clones for further manipulation is arguably the use of PCR-based screening with sample pooling [36,37]. Even in cases where the exon of interest resides in a sizeable restriction fragment, a suitable clone can be identified with a reasonable effort. In this study the targeted exon of the Cdh22 gene was in a 9 kb KpnI fragment, a medium size fragment among those produced by the respective BAC clone. In this case, the screening of 300 colonies identified three suitable clones for further manipulation. In the Drapc1 BAC clone, the two exons of interest were resident in a 15.1 kb NsiI fragment, which is one of the largest NsiI fragments of this BAC, generating a more challenging test case for the utility of the strategy. Even with this large fragment, two potentially suitable integrant clones were identified among 6,700 colonies with relative ease by the use of 40 pooled samples in the initial screening phase. Thus, it is straightforward to obtain desired transposon-containing recombinant plasmids with genomic inserts of 10-15 kb, a size range typically used for standard gene targeting projects. As a last step during the screening phase, we successfully used colony PCR, and this method proved to be highly effective and fast in identifying the final targeting constructions. The size range of PCR products that would identify a suitably located transposon is critically dependent on the resolution of the gel used and the performance of the DNA polymerase applied. In our study, PCR products ranging in size from 285 bp to 1,934 bp identified suitably located transposons (Figure 3).
Our strategy complements other targeting vector construction methods and offers certain key advantages: the cloning vector and the two transposons are general-purpose and can be applied to modify any mouse gene. Recently, two groups have applied transposon techniques for gene-targeting vector construction. Zhang et al. [10] used mini-Mu transposons, and Aoyama et al. [11] used a combination of two transposons, Tn5 and Tn7. Many features of these two strategies share similarities with ours. However, important differences also exist. For example, in both of the abovementioned strategies, the modified gene region is initially subcloned into a plasmid vector. In addition, negative selection was not employed in the Mu methodology, and two different transposon systems were used in the Tn5/Tn7 methodology.
One of the advantages of our strategy, the avoidance of laborious initial cloning steps in gene-targeting vector construction, has also been achieved with recombineering, i.e. the use of homologous recombination in E. coli [5]. This method has been used to introduce loxP sites, FRT sites, and selection markers into genomic DNA [6]. Compared to transposon strategies, recombineering requires relatively long, construct-specific primers, because homology regions are introduced via a PCR amplification step. On the other hand, it has the advantage that each particular locus can be modified very accurately, exactly as desired.
We have developed a fast and efficient Mu in vitro transposition-based procedure to construct targeting vectors directly from BAC clones without the need for prior subcloning. A general-purpose cloning vector and two custom-designed transposons provide a tool set for any conditional knockout project in mouse. The data indicate that the strategy described here is an easy, efficient, and versatile method for generating conditional knockout alleles. Furthermore, the procedure should be readily suitable for high-throughput targeting vector production.
Ethics statement
All the experiments involving animals were approved by the committee of experimental animal research of the University of Helsinki.
DNA techniques. Standard enzymes for DNA work [38] were from New England Biolabs. DyNAzyme II DNA polymerase (for cloning), Phusion DNA polymerase (for colony PCR), and MuA transposase were from Finnzymes. Enzymes were used as recommended by the suppliers. The BACs used were screened from the 129S6/SvEvTac mouse BAC library RPCI-22 (BACPAC Resources). Qiagen kits were used for DNA isolation. Standard DNA techniques were performed as described [38]. The colony PCR method was modified from a previously published protocol [39]. Briefly, a single colony was picked from a selection plate and suspended in 50 µl of water. One microliter of this suspension was then used as a template in PCR amplification, using Phusion DNA polymerase according to the recommendations of the supplier. Each PCR amplification included one transposon-specific and one locus-specific primer (Figure 3).
Cloning vector. The herpes simplex virus (HSV) thymidine kinase (TK) gene cassette, including the mouse phosphoglycerate kinase (PGK) promoter and terminator, was cloned from plasmid pPNTloxP [40], as an EcoRI-HindIII fragment, into plasmid pUC19 (New England Biolabs) that had been cleaved with the same two enzymes, yielding pHTH21. A polylinker was then generated by annealing and ligating oligonucleotides HSP478 through HSP484 (Table 1). The polylinker was PCR-amplified using primers HSP485 and HSP486 and cloned, as an EcoRI fragment, into the EcoRI site of pHTH21. Finally, one of the EcoRI sites was eliminated by partial digestion, end-filling with Klenow enzyme, and ligation to generate plasmid pHTH22 (Figure 1A and 1B).
Transposons. Two related mini-Mu transposons (Figure 1C) were constructed using standard PCR and cloning procedures. Similar to the transposons in earlier studies [14,20], these transposons were produced as a segment of their pUC19-derived carrier plasmids. Plasmids pHTH19 and pHTH24 carry the Kan/Neo-loxP-Mu and Kan/Neo-loxP-FRT-Mu transposons, respectively. Both of these transposons contain the selection cassette from plasmid pIRES2-EGFP (Clontech), including two promoters, prokaryotic (p_bact) and eukaryotic (p_SV40), in addition to the Kan/Neo resistance-encoding gene from Tn5 and polyadenylation signals from the HSV TK gene. The Kan/Neo marker gene can be used both with E. coli (kanamycin selection) and mammalian cells (G418 selection). Flanking the selection cassette, the transposons contain a transposon-specific set of Cre and Flp recombination sites, and the extreme transposon termini each include 50 bp from the Mu R-end as an inverted repeat to provide critical MuA transposase binding sites. The transposons were released from their carrier plasmids by BglII digestion and purified using anion exchange chromatography as described [14].
Introduction of Kan/Neo-loxP-Mu. The initial in vitro transposition reaction mixture (20 µl) contained 0.185 pmol (250 ng) transposon DNA (Kan/Neo-loxP-Mu), 500 ng target BAC DNA digested with an appropriate restriction enzyme, 2.46 pmol (176 ng) MuA, 25 mM Tris-HCl, pH 8.0, 0.05% (w/v) Triton X-100, 10% (v/v) glycerol, 120 mM NaCl, and 10 mM MgCl2. The reaction was carried out for 4 h at 30 °C. The resulting transposon-containing fragment pool, purified by phenol and chloroform extractions and ethanol precipitation, was ligated to 4 µg of the plasmid pHTH22 linearized with the same enzyme as was used for the initial BAC digestion. The ligation mixture was purified by phenol and chloroform extractions and ethanol precipitation, and dissolved in 20 µl of water. Several aliquots (4 µl) were electroporated into electrocompetent DH10B cells (50 µl), prepared as described [14], using a Genepulser II (Bio-Rad) and 0.2 cm electrode-spacing cuvettes (Bio-Rad). Insert-containing plasmid clones were selected on LB/ampicillin/kanamycin plates and screened by colony PCR for the presence of the transposon in the desired DNA region using a pair of appropriate primers, one transposon-specific and one locus-specific. The exact location of each identified transposon was confirmed by initial restriction analysis and subsequent sequencing.
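As a rough cross-check of the reaction stoichiometry quoted above, the molar amounts can be back-calculated from the stated masses. The sketch below assumes an average of ~650 g/mol per base pair of double-stranded DNA and the ~75 kDa mass of MuA transposase; neither value is given in the text, so this is an order-of-magnitude check rather than part of the published protocol.

```python
# Back-of-the-envelope check of the quoted molar amounts in the first
# transposition reaction. Assumes ~650 g/mol per bp of dsDNA (a standard
# approximation); these constants are our assumptions, not from the paper.

transposon_ng, transposon_pmol = 250, 0.185
mua_ng, mua_pmol = 176, 2.46

# 1 pmol of a 1 bp duplex weighs ~650e-3 ng, hence:
transposon_bp = transposon_ng / (transposon_pmol * 650e-3)   # implied transposon length
mua_kda = mua_ng / mua_pmol                                   # implied protein mass in kDa
molar_ratio = mua_pmol / transposon_pmol

print(f"implied transposon length : ~{transposon_bp:.0f} bp")
print(f"implied MuA mass          : ~{mua_kda:.0f} kDa")
print(f"MuA : transposon ratio    : ~{molar_ratio:.1f} : 1")
# ~2.1 kb transposon, ~72 kDa MuA, and a >13-fold molar excess of MuA,
# comfortably above the four MuA monomers needed per transpososome.
```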
Removal of Kan/Neo selection marker. The selected plasmid was electroporated into strain 294-Cre [28] for in vivo recombination, after which the plasmid was reisolated; recombination between the two loxP sites was confirmed by restriction analysis and sequencing. To prevent any further potential Cre-mediated rearrangements and to produce good quality plasmid DNA for further manipulation, the accurately Cre-recombined plasmid was introduced into E. coli DH10B for isolation. Note: PCR-based DNA sequencing strategies may not yield sequencing reads across two Mu ends in inverse relative orientation, possibly due to intramolecular hybridization problems. Good quality reads can be obtained by linearizing the DNA between the Mu ends prior to the sequencing reactions (e.g. using EcoRI or XbaI).
Introduction of Kan/Neo-loxP-FRT-Mu. Mu transposition complexes were initially assembled at 30 °C in a 20 µl reaction volume for 2 h. The assembly mixture contained 4.7 µl (1.1 pmol; 1,590 ng) transposon DNA (Kan/Neo-loxP-FRT-Mu), 4 µl of 5× assembly buffer (750 mM Tris-HCl, pH 6.0, 0.125% (w/v) Triton X-100, 750 mM NaCl, and 0.5 mM EDTA), 50% (v/v) glycerol, and 1 µl (4.9 pmol; 350 ng) MuA. The resulting complexes (1 µl) were added to the mixture (18 µl) containing 0.55 pmol plasmid DNA as a target, 25 mM Tris-HCl, pH 8.0, 110 mM NaCl, 0.05% (w/v) Triton X-100, and 10% glycerol, and incubated for 5 min at 30 °C to allow target capture. MgCl2 (1 µl) was then added to a final concentration of 10 mM, and incubation was continued for 2 min in a total volume of 20 µl. The transposition reaction mixture was electroporated into E. coli DH10B electrocompetent cells, and transposon-containing clones were selected on LB/ampicillin/kanamycin plates and screened by colony PCR. For each candidate clone, the location and orientation of the transposon were determined by sequencing.

Gene targeting. Targeting constructions were linearized and electroporated into mouse embryonic stem cells [27]. Desired clones were cultured using a medium containing G418 (136 µg/ml) and ganciclovir (2.64 µM) for positive and negative selection, respectively. The selected ES clones were screened by Southern blotting with appropriate 5′ and 3′ probes to verify correct targeting. Recombinant ES cell clones were used for morula aggregations to produce chimeras, which transmitted the mutant alleles through the germ line.
Cre recombination in mice. The mice carrying the targeted alleles were crossed with PGK-Cre mice [41]. The offspring were screened by Southern blotting with appropriate 3′ probes using genomic DNA isolated from tails.
"Biology"
] |
Implementing the analogous neural network using chaotic strange attractors
Machine learning studies need colossal power to process massive datasets and train neural networks to reach high accuracies, which have become gradually unsustainable. Limited by the von Neumann bottleneck, current computing architectures and methods fuel this high power consumption. Here, we present an analog computing method that harnesses chaotic nonlinear attractors to perform machine learning tasks with low power consumption. Inspired by neuromorphic computing, our model is a programmable, versatile, and generalized platform for machine learning tasks. Our model provides exceptional performance in clustering by utilizing chaotic attractors’ nonlinear mapping and sensitivity to initial conditions. When deployed as a simple analog device, it only requires milliwatt-scale power levels while being on par with current machine learning techniques. We demonstrate low errors and high accuracies with our model for regression and classification-based learning tasks.
Bahadır Utku Kesgin & Uğur Teğin
Current computing methods and hardware limit machine learning studies and applications regarding speed, data resolution, and deployment platforms. In particular, the power consumption of artificial neural networks has started to raise questions regarding its impact on the environment. Recent studies indicate that the carbon emissions of training a complex transformer learning model are roughly equivalent to the lifetime carbon emissions of five cars 1, and that training a famous language model consumed the energy required to fully charge 13,000 electric cars 2. Several computing paradigms have been proposed for machine learning studies to decrease training times and, therefore, the energy consumption. Among them, reservoir computing 3,4 offers a promising path by using nonlinear systems with fixed weights to process information in high-dimensional space. Various neuromorphic devices 5 were proposed to surpass chronic performance issues of conventional computing and high power consumption. Optical computing methods [6][7][8] and electronic memristive devices [9][10][11] were introduced as powerful reservoir computing platforms. The concept of fixed nonlinear high-dimensional mapping is common practice in several areas of machine learning, such as extreme learning machines 12 and support vector machines 13,14.
In machine learning studies, chaotic systems were mainly employed as targets for learning dynamical systems [15][16][17][18][19][20][21]. Chaos theory examines deterministic but unpredictable dynamical systems that are extremely sensitive to initial conditions. These systems commonly occur in nature, inspiring art, science, and engineering 22. Also, the chaotic spiking dynamics of neurons have inspired several neuromorphic machine learning applications 23,24. In the past, chaotic systems were proposed for Boolean computation and data processing, forming the concept of chaos computing. Early chaos computing devices operated one-dimensional chaotic maps to perform logic operations 25,26. These dynamical systems were also suggested for reservoir computing, but were used in a stable state just below the bifurcation point, where order transitions to chaos 27. Operating in a stable state, such systems could not benefit from chaos in learning and information processing for machine learning purposes. Following these attempts, systems with weakly chaotic architecture were proposed 28,29. However, these models and other similar approaches could not demonstrate competitive performance 30.
Here, we propose an analog computing method based on controllable chaotic learning operators to perform high-dimensional nonlinear transformations on input data for machine learning purposes. Our method builds on circuits designed to compute chaotic strange attractors for reservoir computing purposes, as demonstrated in Fig. 1. The Methods section outlines in detail the low-power computation of chaotic attractors using simple analog circuits. At present, only chaotic attractors enable high-performance, controllable analog machine learning in conjunction with milliwatt-scale power consumption. This advance originates from our method harnessing notable properties of chaos, thus eliminating the need for high power to process information. Since minor differences amplify and evolve in chaotic attractors, the chaotic processors in our method evidently improve machine learning.
While previously reported physical reservoir computing hardware lacks flexibility, we introduce a controllable model with increased overall versatility. Achieving this versatile platform allows us to enhance overall learning accuracy for various learning tasks through optimization. Our computing method intrinsically offers smaller footprints, with power consumption levels as low as the milliwatt scale, while preserving high accuracies. By providing complex and chaotic dynamics for the nonlinear transformation of data, our model performs on par with neural networks operating on conventional computing platforms. We present the generalizability of our approach by testing a wide variety of machine learning tasks, including image classification, and achieve high accuracies, reaching up to 99% for several tasks. To further show the efficacy of our approach, we also compare our findings with a widely acknowledged reservoir computing technique and with memristive machine learning operators. Later, we explore how sensitivity to initial conditions in chaotic attractors improves learning accuracy and determines the power consumption required for training. Our method is a controllable, chaotic analog learning device that offers versatile and sustainable machine learning without compromising learning performance.
Results
Input/Output encoding and selection of the optimal attractor

As illustrated in Fig. 1, our computing approach consists of input data, chaotic reservoir, and readout layers. We first apply a chaotic transformation to input data via a chaotic circuit and then utilize a simple digital readout layer to complete the learning process. Chaotic systems are extremely sensitive to initial conditions, such that their response changes drastically even under nanoscale perturbations. Due to this sensitivity, it is crucial to set an encoding method that makes the attractor transform input samples in a way that makes it easier for the classification algorithm to make distinctions between classes. It is also important to preserve the integrity of the physical model to perform stable computation. We decide to input our data as initial conditions of the attractor. As we scale our data using z-score normalization, the initial conditions we use as inputs land on a scale that does not vitiate the physical model (see Supplementary Note 4 and Supplementary Fig. 4). After the chaotic transformation is applied to our samples, we feed the transformed matrix to the regression or classification algorithm (see Methods, Numerical simulations, for the specific algorithms).
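As a concrete illustration of the encoding just described, the following sketch applies z-score normalization and packs each scalar feature into a three-dimensional initial-condition vector; the (value, 1.05, −value) format is the one given later in the Methods, while the function name and array layout are ours, not taken from the authors' code.

```python
import numpy as np

def encode_as_initial_conditions(X):
    """Z-score normalize each feature, then map every scalar value v to the
    Lorenz initial-condition vector (v, 1.05, -v) used as reservoir input."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)          # z-score normalization
    # shape: (n_samples, n_features, 3) -- one 3D initial condition per feature
    return np.stack([Xz, np.full_like(Xz, 1.05), -Xz], axis=-1)

# toy usage with random data standing in for a real dataset
X = np.random.rand(5, 4)
ics = encode_as_initial_conditions(X)
print(ics.shape)   # (5, 4, 3)
```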
The pattern and the average separation pace between each chaotic attractor's close points are distinctive properties. We select six different chaotic attractors to evaluate how these unique properties translate into machine learning. We employ a nonlinear regression task on a randomly generated Sinus Cardinal dataset (see Methods, Data preprocessing). We select the well-known Lorenz attractor 31, Rössler attractor 32, Burke-Shaw system 33, Sprott attractor 34, Chen's system 35, and Chua's circuit 36 for this test. Our test attractors transformed randomly generated points for one hundred iterations and tried to predict the Sinus Cardinal function values corresponding to each transformed sample. After recording the lowest root mean squared error (RMSE) among the iterations, we sort the results from the smallest to the largest RMSE value. The Lorenz attractor was the most successful attractor, with an RMSE of 0.143. We decide to proceed with further tests using only the Lorenz attractor and using the iteration with the lowest error after 100 iterations (see Supplementary Table 1 and Supplementary Note 4 for details).
Sinus cardinal regression
To assess the potential performance of machine learning with chaotic attractors, we run a simple regression task on a dataset of randomly generated samples and their values under the Sinus Cardinal function. Sinus Cardinal (Sinc), as a nonlinear function, is a commonly preferred initial test for extreme learning machines and reservoir computing. By evaluating the linear regression performance on the processed Sinc dataset, the presented models demonstrate whether the model can perform the nonlinear transformation of data. In the aforementioned benchmarking tests, we measure the vanilla RMSE of the Lorenz attractor as 0.143. We apply the Bayesian Optimization algorithm (see Methods, Numerical simulations, for details) to determine the best values of the Lorenz system parameters to minimize error and improve model performance. After completing three separate optimizations, we select the values that lead to the minimum error (σ = 10, β = 8/3, and ρ = 97). We use these coefficients in all further tasks except the Abalone dataset, where we applied a separate optimization. After the optimization, an RMSE of 0.105 is achieved.
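The paper uses MATLAB's Bayesian Optimization for this parameter search; the short Python stand-in below, using scikit-optimize, only illustrates the structure of the optimization loop. The objective `sinc_rmse(rho)` is an assumed helper that runs the Lorenz transformation with the given ρ and returns the regression RMSE (a compatible reservoir sketch appears in the Methods section below).

```python
# Hypothetical Python stand-in for the chaos-parameter optimization step.
# scikit-optimize's gp_minimize plays the role of MATLAB's Bayesian Optimization;
# `sinc_rmse` is an assumed helper, not part of the authors' code.
from skopt import gp_minimize

result = gp_minimize(
    lambda params: sinc_rmse(params[0]),  # objective: RMSE as a function of rho
    dimensions=[(1.0, 100.0)],            # search range for the chaos parameter rho
    n_calls=20,                           # 20 evaluated candidates, as in the text
    random_state=0,
)
print("best rho:", result.x[0], "RMSE:", result.fun)
```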
To further decrease learning error and test different configurations of our model, we add another layer that applies the chaotic transformation to the input variable. First, two parallel Lorenz attractor layers with different ρ values transform the same input simultaneously. These two distinct outputs are concatenated into a single matrix, and this matrix undergoes the learning process. Keeping σ and β as fixed variables, we apply Bayesian Optimization to determine optimal ρ values for our transformers. After the optimization process, we decrease the model RMSE down to 0.03 (ρ1 = 94.087, ρ2 = 36.867; see Supplementary Fig. 1). By using nested loops, the error of the model can be decreased notably due to the dimensionality expansion and the impact of chaotic transformers with different chaos parameters (see Largest Lyapunov exponent and accuracy). In the remainder of our study, we test our model by deploying only a single transformer per variable, to accurately characterize the properties of our analog learner without adding another chaotically complex variable.
Abalone dataset
Moving on to a relatively more complex and multivariable regression task, we test our chaotic model on the Abalone dataset. This dataset, taken from ref. 37, is composed of eight physical measurements of sea snails and their ages. We normalize the ages on a scale between 0 and 1. We apply z-score normalization and deploy chaotic transformation with a single transformer for every single variable. We use Bayesian optimization to find the optimal parameters of the Lorenz transformer. After optimization, we achieve remarkable accuracy (RMSE 0.072884) with parameters σ = 10, β = 2.667, ρ = 64.917 (see Supplementary Fig. 1 for the result and Methods, Numerical simulations, for optimization details).
Iris dataset
We move on to classification tasks to challenge our model. The Iris dataset is one of the classical datasets for assessing linear and nonlinear classification abilities. The dataset from ref. 38 consists of four physical measurements of iris flowers from three distinct species. While one class, iris-setosa, is linearly separable from the other two classes, iris-versicolor and iris-virginica require nonlinear methods to be separated. Thus, with a relatively small dataset we evaluate our model on both linear and nonlinear classification. We employ Ridge classification as the last layer because it is a simple and linear method that is fast to execute. Ridge regularization is important to prevent overfitting, especially when mapping data into high dimensions. Departing from the usual method for visualizing classifier decision boundaries, we apply Linear Discriminant Analysis (LDA) to the raw and transformed data (see Fig. 2). Using LDA, we retrieve 2D matrices for the raw and transformed data and perform Ridge classification on these 2D matrices. A high accuracy of 97.78% is achieved, a gain of roughly 18 percentage points over the model accuracy before chaotic transformation (80.00%). After chaotic transformation, samples that belong to the linearly non-separable classes (iris-versicolor and iris-virginica) all cluster almost perfectly (see Fig. 2). As a result, the linear classifier we utilize can make an almost perfect classification. We also test other classifiers for benchmarking (see Methods, Numerical simulations, and Supplementary Table 2 for details). The drastic increase in test accuracy for the linearly inseparable classes is demonstrated in the confusion matrices (see Fig. 3).
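A minimal end-to-end sketch of this Iris pipeline is given below, using scikit-learn for the LDA and ridge layers (the package the authors use for the final layers). The Lorenz step here is a deliberately simplified forward-Euler integrator for brevity; the paper uses a Runge-Kutta integrator (see the Methods sketch further below), so the numbers it prints are only indicative.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def lorenz_features(X, rho=97.0, sigma=10.0, beta=8/3, steps=100, dt=1e-3):
    """Map each z-scored feature v through the Lorenz system started at
    (v, 1.05, -v) and return the state after `steps` forward-Euler steps.
    Simplified stand-in for the paper's Runge-Kutta reservoir step."""
    Xz = (X - X.mean(0)) / X.std(0)
    xs, ys, zs = Xz.copy(), np.full_like(Xz, 1.05), -Xz.copy()
    for _ in range(steps):
        dx, dy, dz = sigma*(ys - xs), xs*(rho - zs) - ys, xs*ys - beta*zs
        xs, ys, zs = xs + dt*dx, ys + dt*dy, zs + dt*dz
    return np.hstack([xs, ys, zs])

X, y = load_iris(return_X_y=True)
X2d = LinearDiscriminantAnalysis(n_components=2).fit_transform(lorenz_features(X), y)
X_tr, X_te, y_tr, y_te = train_test_split(X2d, y, test_size=0.3, random_state=0)
clf = RidgeClassifier().fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```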
Liver disorders dataset
For this classification task, we test our method on the Liver Disorders dataset. This dataset, taken from ref. 39, comprises 12 features of blood samples taken from healthy people, hepatitis patients, fibrosis patients, or cirrhosis patients. After obtaining an even dataset (see Methods, Data preprocessing, for details), we employ the same chaotic transformation method on our features. With chaotic transformation, we report an increase in the ridge classifier accuracy by about 11%, from 81.71% to 92.82%, and achieve an accuracy of 98.84% with a Linear SVM (see Supplementary Table 2). After the chaotic transformation, decision boundary lines are easier to draw (see Fig. 2). Also, a substantial improvement in the accuracies of every single class is displayed in the confusion matrices (see Fig. 3). We also employ an extreme learning machine for benchmarking on this task. Extreme learning machines are the closest alternative to our model regarding the reservoir model and reduced trainable parameters. Our model surpasses extreme learning machines with the same number of learning parameters by 10% in test set accuracy (see Supplementary Fig. 6 and Supplementary Note 6 for confusion matrices and details).
MNIST dataset
We test our model for image classification after proving strong performance on numerical datasets. The MNIST dataset 40 contains 70,000 samples (60,000 training, 10,000 testing) of 10 handwritten digit classes. For this task, the 28 × 28 images are flattened without any normalization, and a fast algorithm for dimensionality reduction (see Methods, Data preprocessing, for details) is employed as a form of preprocessing. After reducing the dimensions of each flattened image from 1 × 784 to 1 × 7, we perform classification and set a baseline accuracy. After chaotic transformation, the accuracy of this Ridge classifier increases from 81.42% to 95.42%. We again utilize an extreme learning machine for benchmarking on this task. Our model surpasses extreme learning machines with the same number of learning parameters by 6.91% in test set accuracy. Our model also surpasses a multi-memristive synapse neural network architecture by 6.32% in test set accuracy (see Supplementary Table 2 and Supplementary Fig. 7 for details). Such a drastic increase in accuracy highlights the effect of the chaotic nonlinear transformation once more.
Largest Lyapunov exponent and learning accuracy

Next, we investigate the impact of sensitivity to initial conditions on our model's performance in machine learning tasks. We use the Largest Lyapunov Exponent (LLE, λ) 41 to measure the pace of separation in a chaotic system. An LLE larger than 0 indicates a chaotic system, and a larger LLE corresponds to faster-separating points. In this test, using the Liver Disorders dataset, we study a chaotic transformation with ρ values ranging from 1 to 100. Then, we record the best accuracy of the Linear SVM. We evaluate the LLE of the Lorenz attractor for ρ values in the range from 1 to 100. When compared with a non-chaotic model (ρ = 2) and a chaotic but less sensitive model (ρ = 28), the optimized model (ρ = 97) demonstrates higher accuracies in every single class (see Fig. 4).
We also demonstrate a positive statistical relationship between the Largest Lyapunov Exponent and model accuracy after running Welch's t-test and Pearson's R-value test (see Methods, Statistical tests, and Supplementary Note 3 for details). We report this statistical correlation with an r-value of 0.84. Such a positive correlation explains our findings in Fig. 2: the spatial clustering of samples in a dataset is enhanced by the separation pace of close points and the chaotic nature of the transformer. As points separate relatively faster, it is also more probable to find a set of clusters that we can classify linearly, compared to points that do not separate. This finding is also interesting because a similar study demonstrated the necessity of chaos in classification tasks performed with neural networks 42. In our study, we validated that chaos should be present in learning tasks whether the learning is based on conventional neural networks or reservoir computers. It should be noted that, although we benefit from the separation at early iterations as mentioned earlier, as these dynamical systems evolve the attractor transforms the data into a form that becomes unlearnable after a particular stage (see Supplementary Figs. 8, 9 and Supplementary Note 5).
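A minimal sketch of the statistical checks described here follows, using SciPy (which the authors cite for these tests). The arrays below are synthetic placeholders standing in for the measured LLE and accuracy values from the ρ sweep; only the structure of the test is illustrated.

```python
import numpy as np
from scipy.stats import pearsonr, ttest_ind

rng = np.random.default_rng(0)
# Placeholder data standing in for the measured values: one largest Lyapunov
# exponent (LLE) and one best Linear-SVM accuracy per rho value in the sweep.
lle = rng.uniform(-0.5, 2.5, 100)
accuracy = 0.80 + 0.05 * lle + rng.normal(0, 0.02, 100)   # synthetic positive trend

r, p_r = pearsonr(lle, accuracy)               # Pearson correlation (paper reports r ~ 0.84)
chaotic = accuracy[lle > 0]                    # chaotic transformers (LLE > 0)
non_chaotic = accuracy[lle <= 0]               # stable / non-chaotic transformers
t, p_t = ttest_ind(chaotic, non_chaotic, equal_var=False)  # Welch's t-test

print(f"Pearson r = {r:.2f} (p = {p_r:.1e}), Welch t = {t:.2f} (p = {p_t:.1e})")
```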
Circuit simulations
Encouraged by our model's impressive performance, we study its analog implementation with circuit simulations, using a specific circuit designed for the analog computation of the Lorenz attractor. After running the circuit and performing the chaotic transformation on the data, we use a decision layer as in our previous tests. We tune the analog computation achieved via the circuit by changing the resistance of a resistor, thereby adjusting the ρ value. Alternatively, a digital potentiometer can be utilized to actively set the effect of the chaotic data transformation in the circuit.
Our circuit simulations delivered the same performance as the numerical test results, thus proving the feasibility of our proposed analog learning model (see Supplementary Table 1). In the circuit simulations, we calculate the total power consumption of our analog chaotic systems.
A single analog unit consumes about 350 milliwatts to perform the chaotic transformation of the data. Specifically, for the MNIST dataset, approximately 3.5 watts of power is needed for the chaotic transformation. The aforementioned power consumption is two orders of magnitude smaller than that of conventional devices that perform machine learning (see Methods, Physical model of chaotic reservoir, and Supplementary Fig. 5 for details).
Discussion
The findings of this study present a promising computing platform for the field of machine learning. The study introduces an innovative method that has demonstrated effectiveness in various machine learning applications. It notably improves power consumption for image and numerical classification tasks, using a straightforward linear last layer following a chaotic nonlinear transformation. This methodology, showcased in the context of the MNIST, Liver Disorders, Iris, Abalone, and Sinus Cardinal datasets, not only enhances accuracy but also maintains input data integrity and permits flexible adjustment of model parameters and architecture. Reservoir computing studies in the literature are mostly limited to temporal predictions. Our collection of tests and step-by-step approach to machine learning tasks enables us to gradually challenge our method with tasks ranging from temporal prediction to linear and nonlinear classification. Such tasks form a foundation and a benchmark for future machine learning models that benefit from chaos. Similar to other reservoir computing research in the literature, further specialized research may be conducted on temporal prediction in light of the results presented in this study. As presented in the Sinus Cardinal regression, configurations of chaotic transformers other than a single chaotic transformer, such as nested chaotic transformers, may be employed to enhance learning accuracy on different datasets. One intriguing aspect of this study is the integration of circuit simulations to validate the practicality of the analog chaotic reservoir computing paradigm. This approach also enables an in-depth examination of the relationship between the Largest Lyapunov exponent of the chaotic transformer and overall model accuracy. Moreover, the circuit architecture's speed and power efficiency on a milliwatt scale hold promise, particularly in light of contemporary concerns regarding energy consumption in machine learning applications.
Another interesting point of this study is the flexibility of our methodology. The performance of the model can increase by up to 6% by altering a single parameter in the chaotic reservoir within the boundaries of the physical model. It is also an important fact that the power consumption and integrity of the physical model are independent of parameter optimization if the parameter is not set to an extreme value (ρ = 200). Although we utilize a probabilistic method of optimization to adjust our parameters in some tasks, further research may focus on different optimization techniques to enhance learning accuracy. The Lorenz attractor, serving as the primary chaotic transformer in this study, emerges as a noteworthy element, showcasing remarkable performance in clustering and pattern recognition. The potential for further research in related areas, particularly in image segmentation using chaotic pattern recognition, is a direction that warrants exploration. The study also highlights how optimizing the chaos parameter, ρ, can lead to modest yet appreciable increases in model test accuracy. The positive correlation between model accuracy and the Largest Lyapunov Exponent raises intriguing possibilities for future research. Our method opens the door to various opportunities for further investigation, particularly in the realm of neuromorphic architectures that can harness chaos as a computational element. Similar chaotic computing techniques can be realized with silicon-on-insulator technology for chip-size footprints. Such architectures may offer innovative solutions and insights for advancing the field of machine learning.
Physical model of chaotic reservoir
In our method, we compute the following set of ordinary differential equations for the Lorenz attractor to transform our data 31: dx/dt = σ(y − x), dy/dt = x(ρ − z) − y, dz/dt = xy − βz, where we use the coordinates x, y, and z for both input and output recording, and use the parameter ρ to adjust chaos. We choose ρ because the Lorenz attractor is more sensitive to variations in ρ, and the literature has extensively researched the attractor's state at various ρ values. This allows us to establish our parameter search range without jeopardizing the integrity of our physical model. Due to the high dimensionality, each variable is given to the circuit as an (x, y, z) vector in (variable, 1.05, −variable) format. Unless stated otherwise in the respective section, we utilize the following parameters to compute the Lorenz attractor: σ = 10, β = 8/3, and ρ = 97. We initially establish nodes for x, y, and z to compute the Lorenz attractor using circuits.
We conduct the multiplication of nodes using analog multipliers. We initially scale the system to a large resistance to multiply nodes by their coefficients. When we intend to multiply a node by a coefficient, we add another resistor. In this manner, the value of the coefficient that multiplies the node equals the ratio between the scaling and added resistance. To iterate the system over discrete time, we utilize capacitors. We are able to iterate our function across time since the voltages of the capacitors are represented by a differential equation that depends on the capacitance of the capacitor. We complete the computation in a specific timestep by incorporating the resultant components via an operational amplifier within the capacitor loop.
To compute the Lorenz attractor, we utilize two units of the analog multiplier AD633 and three units of the operational amplifier TL074. When calculated at the maximum supply currents and voltages, a single AD633 consumes 108 mW (±18 V, 6 mA) and a single TL074 consumes 45 mW (±18 V, 2.5 mA). Considering the number of units we use, a single chaotic transformer consumes 351 mW. During all circuit simulations, we verified that the supply currents, input voltages, and output voltages lie within the range of the physical model and the electronic components.
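The total figure quoted here follows directly from the per-component numbers; a two-line check:

```python
# Power budget of one chaotic transformer: two AD633 multipliers plus three
# TL074 op-amps at the maximum supply ratings quoted above.
ad633_mw, tl074_mw = 108, 45
total_mw = 2 * ad633_mw + 3 * tl074_mw
print(total_mw)   # 351 mW per analog Lorenz unit
```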
Numerical simulations
For the circuit simulations, we modified the schematic of the circuit that performs the analog computation of the Lorenz system to be able to input initial conditions. We then converted this schematic into a netlist file to feed to LTSpice (see Supplementary Figs. 2 and 3). This netlist file consists of the circuit structure and the commands that regulate the circuit simulations. Identically to the numerical simulations, we set the timestep of the circuit simulation to 10 µs and iterated the circuit one thousand times. Afterward, we created Python code to work simultaneously with the LTSpice simulation engine and perform parallel circuit simulations. For every variable of a sample in the dataset, this code initiates a circuit simulation after setting the initial conditions to the variable's value. The results of the simulations are stored in a file, from which another Python script extracts the output values. This script retrieves one thousand iterations of every sample from the result files and creates a matrix of output values.
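A hypothetical orchestration of such parallel LTspice runs is sketched below. The executable path, the netlist template name, the node names in the ".ic" directive, and the `parse_raw` helper are all assumptions for illustration; only the "-b" batch-mode switch and the ".ic" initial-condition directive are standard LTspice/SPICE features, and the actual scripts used by the authors are not published here.

```python
# Hypothetical batch driver for LTspice circuit simulations (illustrative only).
import subprocess
from concurrent.futures import ThreadPoolExecutor

LTSPICE = r"C:\Program Files\LTC\LTspiceXVII\XVIIx64.exe"   # assumed install path

def parse_raw(raw_path):
    """Placeholder: extracting the V(x), V(y), V(z) traces from the .raw file
    is left to a separate .raw reader of your choice."""
    return raw_path

def run_sample(value, template="lorenz.net"):
    netlist = f"lorenz_{value:.6f}.net"
    with open(template) as src, open(netlist, "w") as dst:
        for line in src:
            # set the initial conditions (value, 1.05, -value) for this sample
            if line.startswith(".ic"):
                line = f".ic V(x)={value} V(y)=1.05 V(z)={-value}\n"
            dst.write(line)
    subprocess.run([LTSPICE, "-b", netlist], check=True)     # batch simulation
    return parse_raw(netlist.replace(".net", ".raw"))

samples = [0.12, -0.87, 1.30]                                 # example input values
with ThreadPoolExecutor(max_workers=4) as pool:
    outputs = list(pool.map(run_sample, samples))
```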
To complete the learning process, values are sliced iteration-by-iteration from the matrix, and the same final layers as in the numerical simulations are applied to the sliced values. We retrieved power consumption data by slicing the power dissipation data from the same result files.
For the numerical simulations, we created Python code using the NumPy 43 library that iterates the ordinary differential equations of chaotic strange attractors in time using the Runge-Kutta method 44,45. Identically to our circuit methodology, each variable is given to the simulation code as an (x, y, z) vector in (variable, 1.05, −variable) format. This code is then used to perform the reservoir computation on the given input. Due to the high dimensionality of chaotic strange attractors, every one-dimensional predictor is transformed into a three-dimensional vector. With the exception of the Iris dataset, all of the output vectors are used for the learning process. In the Iris dataset, after the transformation of all samples is complete, the Linear Discriminant Analysis method is applied to the data before the final layer, to demonstrate that the decision layers and the learning process are consistent. A timestep of 10^-2 is used to simulate the chaotic strange attractors. Unless stated otherwise, the coefficients of the attractors were set to their author-suggested values for the attractor benchmark test. We utilized the MATLAB Bayesian Optimization package to optimize our chaos parameters. All optimization settings were left at their default values. The Bayesian optimizer minimized the prediction error. Chaos parameters were evaluated by the prediction accuracies over 100 iterations. The Bayesian optimizer scanned 20 chaos parameters to find the optimal chaos parameter.
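A compact NumPy sketch of the numerical reservoir step described above is given below, using a standard fourth-order Runge-Kutta update with the stated timestep; the function and variable names are ours, not the authors', and the example call uses random placeholder inputs.

```python
import numpy as np

def lorenz_deriv(state, sigma=10.0, beta=8/3, rho=97.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt=1e-2, **params):
    k1 = lorenz_deriv(state, **params)
    k2 = lorenz_deriv(state + 0.5 * dt * k1, **params)
    k3 = lorenz_deriv(state + 0.5 * dt * k2, **params)
    k4 = lorenz_deriv(state + dt * k3, **params)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def reservoir_transform(values, n_iter=100, **params):
    """Transform each scalar predictor v via the Lorenz attractor started at
    (v, 1.05, -v); returns the (x, y, z) state at every iteration."""
    out = np.empty((len(values), n_iter, 3))
    for i, v in enumerate(values):
        state = np.array([v, 1.05, -v])
        for t in range(n_iter):
            state = rk4_step(state, **params)
            out[i, t] = state
    return out

# example: transform z-scored inputs; the iteration passed to the final layer
# is selected separately (e.g. the one with the lowest downstream error)
features = reservoir_transform(np.random.randn(8))
print(features.shape)   # (8, 100, 3)
```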
For the classification tasks (MNIST, Liver Disorders, and Iris), following the same transformation process, the Ridge Classification, Linear Kernel Support Vector Machine (SVM), Polynomial Kernel SVM, Gaussian Kernel SVM, K-Nearest Neighbors, and Multilayer Perceptron Classifier algorithms are used as the last layer. All of the last layers are implemented using the Scikit-learn 46 package. Unless stated otherwise in the Results or Methods sections, all classifiers are used with their default settings in the scikit-learn package. The multilayer perceptron classifier utilized in the study uses a learning rate of 10^-3, a tangent hyperbolic (tanh) activation function, and three hundred hidden neurons.
Data preprocessing
Excluding the Sinus Cardinal data, every other dataset used was normalized using z-score normalization, with the standard deviation of samples equal to one, before being transformed with the chaotic attractors. The Sinus Cardinal dataset is synthetically created and not normalized; the predictors are 2048 randomly generated samples in the range [−π, +π], and the target values are the Sinus Cardinal function of the generated samples. The Liver Disorders dataset is an uneven dataset, which may result in imbalanced learning; to prevent this, we used the Python implementation of the Synthetic Minority Oversampling Technique (SMOTE) to upsample the Liver dataset evenly. For the MNIST dataset, we flattened the images and applied dimensionality reduction using Uniform Manifold Approximation and Projection for Dimension Reduction 47 (UMAP). UMAP reduced the predictor size to 1/112 of the original data (784 to 7). Dimensionality reduction lasted about two minutes. A ratio of 80% training set to 20% test set was used to divide the datasets into training and test sets. Only for the Iris dataset, a ratio of 70% training set to 30% test set was used. In all displayed results, the datasets are split into training and test sets using random state zero.
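Typical Python implementations of the preprocessing steps named above are sketched below. SMOTE (imbalanced-learn) and UMAP (umap-learn) are common packages for these techniques; the paper names the methods but not necessarily these exact libraries, so treat the calls as an assumed, representative implementation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
import umap                              # umap-learn
from imblearn.over_sampling import SMOTE # imbalanced-learn

def preprocess(X, y, reduce_to=None, balance=False, test_size=0.2):
    Xz = (X - X.mean(0)) / X.std(0)                    # z-score normalization
    if balance:                                        # Liver Disorders: even out classes
        Xz, y = SMOTE(random_state=0).fit_resample(Xz, y)
    if reduce_to:                                      # MNIST: 784 -> 7 dimensions
        Xz = umap.UMAP(n_components=reduce_to, random_state=0).fit_transform(Xz)
    return train_test_split(Xz, y, test_size=test_size, random_state=0)

# e.g. MNIST-style call: preprocess(X, y, reduce_to=7)
# e.g. Liver-style call: preprocess(X, y, balance=True)
# e.g. Iris-style call : preprocess(X, y, test_size=0.3)
```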
For the regression tasks (Abalone and Sinus Cardinal), the predictors of every sample are transformed with our code, and a simple Linear Regression algorithm is implemented as the final layer that completes the learning process. Confusion matrices are normalized over true predictions (row-wise), and decimal numbers are rounded to the nearest whole number. The table results show the standard deviation of accuracies over 20 separate dataset splits.
Statistical tests
For the chaos and learning test, we estimated LLEs using a well-known method from the literature 48. We utilized the MATLAB built-in function for estimating Lyapunov exponents (see Supplementary Note 2 for details). We measured the local LLE between iterations 1 and 100, as these iterations were our range. We decided to keep the parameters σ and β as fixed variables (σ = 10, β = 8/3). We kept these parameters unchanged due to the high sensitivity of the Lorenz attractor to initial conditions, which would complicate testing. We employed the MATLAB implementation of the Linear SVM for the chaos and learning tests. We utilized the SciPy 49 library functions for the given statistical significance tests.
Fig. 1 | Schematic displaying the architecture of our model. Input data is preprocessed and encoded as initial voltages for the circuit performing the analog computation of the Lorenz attractor. a Displays the chaotic reservoir. b Displays the chaotic transformation applied by the circuit. After the chaotic transformation of the data, output voltages are transferred to a processing device as the reservoir output. Via the device, the last layer is performed, mainly ridge regression and classification, completing the learning process.
Fig. 2 | Impact of the chaotic nonlinear transformation on the decision boundaries and the data points. Decision boundary of the ridge classifier on the Iris dataset before (a) and after (b) the chaotic transformation. Distribution of data points of the Liver dataset before (c) and after (d) the chaotic transformation.
Fig. 3 | Confusion matrices of Ridge classifier accuracy in three classification tasks before and after chaotic transformation. Subfigures a-c represent accuracies before the chaotic transformation was applied, and subfigures d-f represent accuracies after the chaotic transformation. The confusion matrices of each dataset are presented as follows: Iris dataset (a, d), Liver Disorders dataset (b, e), and MNIST dataset (c, f). Confusion matrices are normalized row-wise (see Methods, Data preprocessing, for details).
Fig. 4 | Relation between sensitivity to initial conditions and model accuracy. a Confusion matrices of the Ridge classifier on the Liver Disorders dataset for three states of the Lorenz transformer: non-chaotic (stable), less sensitive to initial conditions (ρ = 28), and more sensitive to initial conditions (ρ = 97). b Color map visualization of the ρ value (x-axis), the LLE (y-axis), and the accuracy of the Linear Kernel Support Vector Machine on the Liver Disorders dataset (color values).
"Computer Science",
"Engineering",
"Physics"
] |
Combined search for supersymmetry with photons in proton-proton collisions at $\sqrt{s} =$ 13 TeV
A combination of four searches for new physics involving signatures with at least one photon and large missing transverse momentum, motivated by generalized models of gauge-mediated supersymmetry (SUSY) breaking, is presented. All searches make use of proton-proton collision data at $\sqrt{s} =$ 13 TeV, which were recorded with the CMS detector at the LHC in 2016, and correspond to an integrated luminosity of 35.9 fb$^{-1}$. Signatures with at least one photon and large missing transverse momentum are categorized into events with two isolated photons, events with a lepton and a photon, events with additional jets, and events with at least one high-energy photon. No excess of events is observed beyond expectations from standard model processes, and limits are set in the context of gauge-mediated SUSY. Compared to the individual searches, the combination extends the sensitivity to gauge-mediated SUSY in both electroweak and strong production scenarios by up to 100 GeV in neutralino and chargino masses, and yields the first CMS result combining various SUSY searches in events with photons at $\sqrt{s} =$ 13 TeV.
In this Letter, a combination of four different searches focusing on GGM SUSY scenarios is presented. In GGM models, the gravitino ($\tilde{G}$) is the lightest SUSY particle (LSP) and escapes undetected, leading to missing transverse momentum ($p_\mathrm{T}^\text{miss}$). For these scenarios, the experimental signature depends on the nature of the next-to-LSP (NLSP), which is an admixture of the SUSY partners of EW gauge bosons. The interpretation of the combination focuses only on the bino and wino, which are the superpartners of the SM U(1) and SU(2) gauge eigenstates, respectively. In most GGM models, the NLSP is assumed to be a bino- or wino-like neutralino, or a wino-like chargino. In the models used in this analysis the lightest neutralino ($\tilde{\chi}_1^0$) corresponds to the NLSP, which decays to a $\tilde{G}$ accompanied by a photon (γ) or a Z boson depending on its composition. The lightest chargino ($\tilde{\chi}_1^\pm$) is assumed to decay to a W boson along with a $\tilde{\chi}_1^0$ or a $\tilde{G}$. The results are interpreted in a GGM signal scenario with photons in the final state, varying the bino and wino mass parameters.
To provide results for a broader set of signal topologies, the results are also interpreted in the context of simplified model scenarios (SMS) [22]. In the case of strongly produced SUSY particles, gluino and squark decays result in additional jets in the final state along with the NLSP decay products. For both EW and strong SUSY production, the gaugino branching fractions are varied to probe a range of possible scenarios resulting in final states with photons, Z or W bosons.
All searches used in the combination are performed with pp collision data at $\sqrt{s} =$ 13 TeV, corresponding to an integrated luminosity of 35.9 fb$^{-1}$, collected with the CMS detector in 2016. In the combination, each search corresponds to a category of events. The first category requires the presence of two isolated photons (Diphoton category). This category is based on the search presented in Ref.
[18] and targets bino-like neutralino decays. Events with electrons (e$^\pm$) or muons (µ$^\pm$) are vetoed in this category. The Photon+Lepton category requires one isolated photon, as well as one isolated e$^\pm$ or µ$^\pm$. This category is based on the search presented in Ref. [19] and targets wino-like chargino decays along with bino-like neutralino decays. The Photon+$S_\mathrm{T}^\gamma$ category requires the presence of at least one isolated photon and large $p_\mathrm{T}^\text{miss}$. To ensure exclusive search regions for the combination, any overlapping kinematic regions in the four categories are combined such that a single event is only present in one category. For SUSY scenarios that are based on EW production, all four categories are used. For strong SUSY production, the Diphoton category is removed.
Signal scenarios
The SUSY scenarios considered in this Letter are sketched in Fig. 1; they include one GGM scenario (upper left), two EW SMS (upper right and lower left), and one strong production SMS (lower right).
Figure 1: Diagrams of the SUSY processes considered in this Letter: one process within the GGM scenario (upper left), two EW SMS processes, with possible neutralino and chargino decays (upper right and lower left), and a strong SMS process based on gluino pair production (lower right).
For the GGM scenario, the squark and gluino masses are set to be large, rendering them irrelevant to the studied LHC collisions and ensuring that strong production is negligible and EW production of gauginos, namely $\tilde{\chi}_1^\pm\tilde{\chi}_1^\mp$, $\tilde{\chi}_1^\pm\tilde{\chi}_1^0$, and $\tilde{\chi}_1^\pm\tilde{\chi}_2^0$ production, is dominant. The GGM framework used to derive the GGM scenario is suitable for unifying models of gauge mediation in a more general way with only a few free parameters [23][24][25]. For the GGM scenario considered in this Letter, the techniques of Ref.
[24] are used to reduce the 8-dimensional GGM parameter space to two gaugino mass parameters. The GGM scenario is defined by setting the other GGM parameters to fixed values. All parameters are defined at the messenger scale, which is set to $M_\text{mess} = 10^{15}$ GeV. The parameters $M_3$ and µ are the gluino and higgsino mass parameters, respectively, and the parameters $m_Q$, $m_U$, and $m_D$ are the sfermion soft masses. In this GGM scenario, the remaining bino ($M_1$) and wino ($M_2$) mass parameters are varied, and the Higgs boson mass receives large radiative corrections from the heavy stops to yield the observed mass at the EW scale.
In GGM, the lifetime of the NLSP is a function of the NLSP and gravitino masses. In order to ensure prompt decays of the NLSP in the detector, the gravitino mass is fixed to 10 eV. As was shown in Ref. [25], this implies heavy squarks ($m_{\tilde{q}} \gtrsim 3$ TeV), which is consistent with the model used in this Letter.
One possible diagram for the GGM scenario is shown in Fig. 1 (upper left). The chargino always decays to a W boson along with the lightest neutralino, and the $\tilde{\chi}_2^0$ can decay to a Z boson or an H boson along with the lightest neutralino. The branching fraction of the NLSP decaying into a photon and a gravitino is determined by the composition of the gauge eigenstates of the NLSP. As shown in Fig. 2 (left), the branching fraction of the NLSP changes across the parameter space. For large $M_1$ and medium $M_2$, the NLSP is wino-like. This increases the branching fraction for $\tilde{\chi}_1^0 \to Z\tilde{G}$ decays in the phase space of $M_2 \gtrsim 300$ GeV, where the NLSP mass exceeds the Z boson mass. In the remaining phase space, the NLSP is bino-like, which increases the $\tilde{\chi}_1^0 \to \gamma\tilde{G}$ branching fraction. The different compositions of the NLSP can also be extracted from the dependence of the physical NLSP mass on the model parameters $M_1$ and $M_2$, as shown in Fig. 2 (right). With a wino-like NLSP, the physical mass scales with $M_2$, whereas, for the remaining phase space with bino-like NLSPs, the physical mass depends on $M_1$. Based on EW production SMSs, two different branching fraction scenarios are constructed. For these scenarios, the chargino and neutralino are almost degenerate in mass, such that the W boson from the chargino decay is produced off-shell, resulting in low-momentum (soft) particles that are outside the detector acceptance. In the case of the neutralino branching fraction scenario, $\tilde{\chi}_1^\pm\tilde{\chi}_1^0$ and $\tilde{\chi}_1^\pm\tilde{\chi}_1^\mp$ production are probed, as shown in Fig. 1 (upper right). In this scenario, the chargino always decays to the NLSP, whereas the branching fractions for the decay modes $\tilde{\chi}_1^0 \to \gamma\tilde{G}$ and $\tilde{\chi}_1^0 \to Z\tilde{G}$ are varied. In the chargino branching fraction scenario, shown in Fig. 1 (lower left), the chargino can decay to the LSP or the NLSP, and the branching fractions for the decay modes $\tilde{\chi}_1^\pm \to W^\pm\tilde{G}$ and $\tilde{\chi}_1^\pm \to \tilde{\chi}_1^0$ (+soft) are varied. The decay mode $\tilde{\chi}_1^0 \to \gamma\tilde{G}$ is fixed. In this scenario only $\tilde{\chi}_1^\pm\tilde{\chi}_1^\mp$ is produced. These scenarios probe a range of NLSP compositions with bino- and wino-like neutralinos, and wino-like charginos.
The strong production SMS, shown in Fig. 1 (lower right), is used for the nominal gluino scenario and the gluino branching fraction scenario. In both scenarios, gluino pair production is probed, assuming the gluino decays to a chargino or neutralino. The decay modes for the neutralino and chargino, which are assumed to be mass degenerate, are fixed to $\tilde{\chi}_1^0 \to \gamma\tilde{G}$ and $\tilde{\chi}_1^\pm \to W^\pm\tilde{G}$, respectively. In the nominal gluino scenario, the gluino branching fractions to either charginos or neutralinos are both set to 50%, and the gluino and NLSP masses are varied. Only light-flavor quarks, udsc, are included in the gluino decay. This probes a range of scenarios where the gluino mass is small. In the gluino branching fraction scenario, the gluino branching fractions are varied along with the NLSP mass, and the gluino mass is set to 1950 GeV, which corresponds to the gluino mass where the largest gain from the combination is expected.
The production cross sections for all points in the GGM scenario are computed at next-to-leading order (NLO) using the PROSPINO2 framework [26]. The uncertainties in the cross section calculation are derived with PROSPINO2 following the PDF4LHC recommendations [27] and using the parton distribution functions (PDFs) in the LHAPDF data format [28]. The simulation incorporates the NNPDF 3.0 [29] PDFs and uses PYTHIA8 [30] with the CUETP8M1 generator tune to describe parton showering and hadronization [31]. The simplified model signals are generated with MADGRAPH5 aMC@NLO 2.2.2, including up to two additional partons, at leading order [32], and scaled to NLO and NLO + next-to-leading logarithmic accuracy [33][34][35][36][37][38][39][40][41]. All generated signal events are processed with a fast simulation of the CMS detector response [42]. Scale factors are applied to compensate for any differences with respect to the full simulation, which is based on GEANT4 [43].
The CMS detector
The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T. Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity (η) coverage provided by the barrel and endcap detectors. Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid.
The analysis only utilizes photons measured in the barrel section of the ECAL (|η| < 1.44). In this section, an energy resolution of about 1% is achieved for unconverted or late-converting photons in the tens of GeV energy range. The remaining barrel photons have a resolution of about 1.3% up to a pseudorapidity of |η| = 1.0, rising to about 2.5% at |η| = 1.4 [44].
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [45].
Object reconstruction and identification
Photons, electrons, muons, and jets are reconstructed with the particle-flow (PF) algorithm [46], which identifies particles produced in a collision by combining information from all detector subsystems. The energy of photons is directly obtained from the ECAL measurement. Likewise, the energy of electrons is derived from a combination of the momentum measured in the tracker and the energy measured from spatially compatible clusters of energy deposits in the ECAL. The energy of muons is obtained from the curvature of the corresponding track. ECAL and HCAL energy deposits associated with tracks are reconstructed as charged hadrons; remaining energy deposits are reconstructed as neutral hadrons. Jets are reconstructed from PF candidates using the anti-$k_\mathrm{T}$ clustering algorithm [47] with a distance parameter of 0.4.
The missing transverse momentum vector $\vec{p}_\mathrm{T}^\text{miss}$ is computed as the negative vector sum of the transverse momenta of all the PF candidates in an event, and its magnitude is denoted as $p_\mathrm{T}^\text{miss}$. The $\vec{p}_\mathrm{T}^\text{miss}$ is modified to account for corrections to the energy scale of the reconstructed jets in the event [48].
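As a minimal illustration of this definition (not the CMS reconstruction code), the missing transverse momentum of a toy list of particle-flow candidates can be computed as follows; the candidate values are made up.

import numpy as np

# Hypothetical particle-flow candidates: (pt [GeV], phi [rad]).
pf_candidates = [(45.2, 0.31), (27.8, 2.10), (12.4, -1.57), (8.9, 2.95)]

px = sum(pt * np.cos(phi) for pt, phi in pf_candidates)
py = sum(pt * np.sin(phi) for pt, phi in pf_candidates)

# The pTmiss vector is the negative of the vector sum; its magnitude is pTmiss.
ptmiss_vec = np.array([-px, -py])
ptmiss = np.hypot(*ptmiss_vec)
print(f"pTmiss = {ptmiss:.1f} GeV, phi = {np.arctan2(ptmiss_vec[1], ptmiss_vec[0]):.2f}")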
Photons considered in this Letter are required to be isolated and have an ECAL shower shape consistent with a single photon shower. The photon isolation is determined by computing the transverse energy of all PF charged hadrons, neutral hadrons, and other photons in a cone centered around the photon momentum vector. The cone has an outer radius of 0.3 in $\Delta R = \sqrt{(\Delta\phi)^2 + (\Delta\eta)^2}$ (where φ is the azimuthal angle in radians). The contribution of the photon to this cone is removed. Corrections for the effects of multiple interactions in the same or adjacent bunch crossing (pileup) are applied to all isolation energies, depending on the η of the photon. The Diphoton category [18] uses photon identification criteria to preserve an average photon selection efficiency of 80% while suppressing backgrounds from quantum chromodynamics (QCD) multijet events. The other three categories [19-21] use looser identification criteria to preserve a high photon selection efficiency of 90%. Only photons reconstructed in the barrel region (|η| < 1.44) are used, because the SUSY signal models considered in this combination produce photons primarily in the central region of the detector.
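A small sketch of how such a ΔR-based isolation sum can be evaluated, with hypothetical candidates and without the pileup and photon footprint-removal corrections described above:

import numpy as np

def delta_r(eta1, phi1, eta2, phi2):
    dphi = np.arctan2(np.sin(phi1 - phi2), np.cos(phi1 - phi2))  # wrap into [-pi, pi]
    return np.sqrt((eta1 - eta2) ** 2 + dphi ** 2)

def photon_isolation(photon, candidates, cone=0.3):
    # Scalar sum of candidate ET inside the cone, excluding the photon itself.
    eta0, phi0 = photon
    return sum(et for et, eta, phi in candidates
               if 0.0 < delta_r(eta0, phi0, eta, phi) < cone)

# Hypothetical photon (eta, phi) and nearby PF candidates (ET [GeV], eta, phi).
photon = (0.85, 1.20)
candidates = [(2.1, 0.90, 1.15), (0.7, 0.60, 1.35), (5.4, 2.00, -0.40)]
print(photon_isolation(photon, candidates))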
Reconstructed jets are used to compute the $H_\mathrm{T}$ variable, as well as the $H_\mathrm{T}^\gamma$ variable along with the selected photons. Jets reconstructed within a cone of ∆R < 0.4 around the leading photon are not considered in either variable. Jets with $p_\mathrm{T}$ > 30 GeV and |η| < 3.0 are used. In the case of the Photon+Lepton category, only jets with |η| < 2.5 are taken into account. The Diphoton category does not use any jet variables.
Identification of electrons is based on the shower shape of the ECAL cluster, the HCAL-to-ECAL energy ratio, the geometric matching between the cluster and the track, the quality of the track reconstruction, and the isolation variable. The isolation variable is calculated from the transverse momenta of photons, charged hadrons, and neutral hadrons within a cone whose radius is variable depending on the electron p T [49], and which is corrected for the effects of pileup [50]. Hits in the pixel detector are used to distinguish electrons from converted photons.
A set of muon identification criteria, based on the goodness of the global muon track fit and the quality of the muon reconstruction, is applied to select the muon candidates. Muons are also required to be isolated from other objects in the event using a similar isolation variable as in the electron identification.
Event selection
Events are divided into the four categories shown in Table 1. To enable a statistical combination of the four categories, the overlap between the categories is removed by applying additional vetoes. Since the Diphoton and Photon+Lepton categories show the highest sensitivities for the GGM scenario, these categories remain unchanged with respect to the initial searches. Events with leptons or two photons that are selected in the other two categories, but also match the requirements of the Diphoton or Photon+Lepton categories, are vetoed in the Photon+$S_\mathrm{T}^\gamma$ and Photon+$H_\mathrm{T}^\gamma$ categories. The SM background in the Photon+$S_\mathrm{T}^\gamma$ and Photon+Lepton categories is dominated by vector boson production with initial-state photon radiation, denoted as "Vector-boson + γ", which is in each case estimated from simulation scaled in a particular control region [19,20]. For the Photon+$H_\mathrm{T}^\gamma$ category, on the other hand, the $H_\mathrm{T}^\gamma$ requirement implies hadronic activity, leading to a dominant background from QCD multijet and γ + jet processes, which also holds for the Diphoton category. In both categories data-driven methods are used to estimate this background contribution [18,21]. Additional contributions arise from electrons that are misidentified as photons and jets that are misidentified as leptons. For both of these processes, data-driven methods are utilized to estimate the contribution to the search regions. Furthermore, $\mathrm{t\bar{t}}\gamma$ and diboson processes, summarized as "Rare Backgrounds", can contribute to all four categories and are estimated using simulation. Figure 3 and Table 2 summarize the observed event yields and the background predictions in the search regions. The results of the combination are interpreted in terms of the GGM scenario and the simplified models introduced in Section 2. The 95% confidence level (CL) upper limits on the SUSY cross sections are calculated with the CL$_\text{s}$ criterion [51,52] using the LHC-style profile likelihood ratio as a test statistic [53], evaluated in the asymptotic approximation [54]. Log-normal nuisance parameters are used to describe the systematic uncertainties, which follow the treatment used in the initial searches. The systematic uncertainties in the cross section for rare background processes, as well as the uncertainties assigned to the electron-to-photon misidentification, are treated as fully correlated between all four categories. While the first of these uncertainties is estimated to be 50% in all four categories, the latter uncertainty ranges from 8 to 50% depending on the category and $p_\mathrm{T}^\gamma$. The uncertainties in the prediction of vector boson production in association with photons in the Photon+$S_\mathrm{T}^\gamma$ and Photon+Lepton categories, which can be as large as 20%, are treated as fully correlated, since similar prediction methods are used. In addition, the following sources of uncertainty in the simulation affect the background estimations and signal acceptance: photon identification and isolation efficiency, simulation of pileup, modeling of initial state radiation, determination of the integrated luminosity, and jet energy scale. These uncertainties are also treated as fully correlated across search bins. Furthermore, all systematic uncertainties in the signal acceptance, which are mainly dominated by the fast simulation uncertainty (up to 36%) in $p_\mathrm{T}^\text{miss}$, are assumed to be fully correlated among the four categories.
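The Letter uses the full profile-likelihood CLs machinery with log-normal nuisance parameters and the asymptotic approximation. Purely as an illustration of the CLs criterion itself, the sketch below evaluates CLs for a single-bin Poisson counting experiment with no nuisance parameters and made-up yields; it is not the procedure used for the results in this Letter.

from scipy.stats import poisson

def cls_single_bin(n_obs, b, s):
    # CLs for a one-bin counting experiment without nuisance parameters:
    # CLs = P(N <= n_obs | s+b) / P(N <= n_obs | b).
    cl_sb = poisson.cdf(n_obs, s + b)
    cl_b = poisson.cdf(n_obs, b)
    return cl_sb / cl_b

# Hypothetical yields: 4 observed events on an expected background of 3.2.
n_obs, b = 4, 3.2
for s in (2.0, 5.0, 8.0, 12.0):
    cls = cls_single_bin(n_obs, b, s)
    print(f"s = {s:4.1f}  CLs = {cls:.3f}  excluded at 95% CL: {cls < 0.05}")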
Results
Results for the GGM scenario are presented both in terms of the scanned parameters ($M_1$ and $M_2$) and in terms of the physical mass parameters of the chargino and neutralino. Figure 4 (upper left) shows the combined expected exclusion limits at 95% CL for the GGM scenario, where the combination excludes almost all signal points up to $M_2$ = 1300 GeV across the full range of $M_1$. The figure indicates which category is able to exclude a particular signal point. The grey areas labeled as "combination" show the phase space where only the combination of the categories is expected to exclude the signal points at 95% CL. The area at large $M_1$ values, which is only covered by the Photon+Lepton category, corresponds to signal points with a wino-like NLSP, reducing the probability of a second high-energy photon in the event. Figure 4 (upper right) shows both the observed and expected exclusion for the combination in the GGM model parameters. Figure 4 (lower) shows the observed and expected exclusion limits as a function of the physical masses of the lightest chargino and the lightest neutralino. The exclusion limits of the Diphoton and Photon+Lepton categories are nearly independent of the neutralino mass since these categories have lower $p_\mathrm{T}^\text{miss}$ requirements. The higher $p_\mathrm{T}^\text{miss}$ regions used in the Photon+$S_\mathrm{T}^\gamma$ and Photon+$H_\mathrm{T}^\gamma$ categories mainly contribute closer to the mass diagonal at higher neutralino masses. The combination exceeds the sensitivity of the individual searches by around 100 GeV with respect to the wino mass parameter $M_2$, which translates to an expected gain of up to 100 GeV for the lightest chargino mass limit. For low neutralino masses, the combination is able to improve the observed limit on the chargino mass by up to 30 GeV. For higher chargino masses, the combination does not improve the current best observed limit, mainly because the Diphoton category, which shows an observed excess of about two sigma above the expectation, has large sensitivity in this phase space along with the Photon+Lepton category. Figure 4 also shows that at higher neutralino masses the expected exclusion limits from the Diphoton and Photon+Lepton categories cross, as the branching fraction to photons decreases and the branching fraction to Z bosons increases, as shown in Fig. 2. In the physical mass plane only signal points with a mass difference above 120 GeV are shown, to enable a precise projection of the physical masses from the GGM model parameters. The band around the expected limit of the combination indicates the region containing 68% of the distribution of limits expected under the background-only hypothesis. The band around the observed limit of the combination shows the spread in the observed limit from variation of the signal cross sections within their theoretical uncertainties.
The sensitivity of the Photon+Lepton category to scenarios with large branching fractions of the decay $\tilde{\chi}_1^0 \to \gamma\tilde{G}$ especially arises from events where one photon is misidentified as a lepton. For the neutralino branching fraction scenario, which probes $\tilde{\chi}_1^\pm\tilde{\chi}_1^0$ and $\tilde{\chi}_1^\pm\tilde{\chi}_1^\mp$ production, the combined expected exclusion limit for NLSP masses ranges from 1200 GeV for a branching fraction of 100% for the decay $\tilde{\chi}_1^0 \to \gamma\tilde{G}$ to 1000 GeV for 50%. For smaller branching fractions, the sensitivity of all categories drops, since the probability of a final state with at least one photon decreases. This combined exclusion limit almost coincides with the exclusion limit based on the Photon+$S_\mathrm{T}^\gamma$ category. In the case of the chargino branching fraction scenario, only $\tilde{\chi}_1^\pm\tilde{\chi}_1^\mp$ is produced, leading to a smaller signal cross section. Here, an expected limit on the NLSP mass of up to 1000 GeV can be achieved for high branching fractions for the decay $\tilde{\chi}_1^\pm \to \tilde{\chi}_1^0(\gamma\tilde{G})$ + soft. The largest gain in sensitivity from the combination is found at a branching fraction of 40%, where the sensitivities of the Photon+$S_\mathrm{T}^\gamma$, Photon+Lepton, and Diphoton categories are of the same order. The Photon+$H_\mathrm{T}^\gamma$ category shows no exclusion power for this scenario. Observed gaugino mass limits are set up to 1050 and 825 GeV in the neutralino and the chargino branching fraction scenarios, respectively.
Figure 5: The combined 95% CL NLSP mass exclusion limits for EW SMS production above 300 GeV. For the neutralino branching fraction scenario (left), the limit is shown as a function of the branching fraction of $\tilde{\chi}_1^0 \to \gamma\tilde{G}$, the other decay channel being $\tilde{\chi}_1^0 \to Z\tilde{G}$. For the chargino branching fraction scenario (right), the limit is shown as a function of the chargino branching fraction. The full lines represent the observed and the dashed lines the expected exclusion limits, where the phase space below the lines is excluded. The band around the expected limit of the combination indicates the region containing 68% of the distribution of limits expected under the background-only hypothesis. The band around the observed limit of the combination shows the spread in the observed limit from variation of the signal cross sections within their theoretical uncertainties.
The results from simplified topologies in strong production of gluinos are shown in Fig. 6. For these topologies the sensitivity of the Diphoton category is reduced and it is therefore not included in the combination, which allows for a removal of the diphoton veto discussed in Section 5 and mainly increases the sensitivity of the Photon+$H_\mathrm{T}^\gamma$ category. Table 3 shows the data and the background prediction yields without the diphoton veto. In the case of the nominal gluino scenario, introduced in Section 2, the combination shows an optimal expected exclusion compared to the different individual categories across a broad region of the mass parameter space. For NLSP masses below 1000 GeV, the sensitivity of the combination is dominated by the Photon+$H_\mathrm{T}^\gamma$ category, which mainly targets signal events with large hadronic activity. However, at NLSP masses above 1700 GeV, the Photon+$S_\mathrm{T}^\gamma$ category, which benefits from the smaller hadronic activity close to the mass diagonal, provides the highest sensitivity. The Photon+Lepton category selects events where the W boson decays leptonically, leading to a reduced sensitivity compared to the inclusive categories. The largest improvement of the combination is achieved in the phase space where the sensitivity of both inclusive categories is of the same order. Here, the expected limit on the gluino mass is improved by 50 GeV. The right plot of Fig. 6 shows the limits for the same SMS topology with a fixed gluino mass of 1950 GeV but with the gluino branching fraction varied between its decays to $\mathrm{qq}\tilde{\chi}_1^\pm$ and $\mathrm{qq}\tilde{\chi}_1^0$. Compared to the nominal gluino scenario, similar behavior in the two inclusive categories is found.
Figure 6: The 95% CL exclusion limits for the nominal gluino scenario (left), assuming equal probabilities of 50% for the gluino decay to $\mathrm{qq}\tilde{\chi}_1^\pm$ and $\mathrm{qq}\tilde{\chi}_1^0$. For the gluino branching fraction scenario (right), the ratio of the probabilities for both decays is scanned and the gluino mass is fixed to 1950 GeV. The Photon+Lepton category shows no exclusion power for the latter scenario. The full lines represent the observed and the dashed lines the expected exclusion limits, where the phase space below the lines is excluded. The band around the expected limit of the combination indicates the region containing 68% of the distribution of limits expected under the background-only hypothesis. The band around the observed limit of the combination shows the spread in the observed limit from variation of the signal cross sections within their theoretical uncertainties.
In most of the simplified topologies, the combination of the different categories outperforms the individual searches with respect to the expected limit. The right plot of Fig. 6 shows a slight degradation of the expected limit at medium branching fractions for the combination compared to the Photon+$H_\mathrm{T}^\gamma$ category. This is caused by the removal of events with moderate $H_\mathrm{T}^\gamma$ and of lepton events from the Photon+$H_\mathrm{T}^\gamma$ category, as explained in Section 5. This strategy is motivated by optimizing the sensitivity to the GGM scenario shown in Fig. 4. Small excesses in data with respect to the background prediction are found in each of the four categories, which give rise to differences between the observed and expected limits. As a result, only small improvements are made in the observed limits compared to the individual searches in all interpretations.
Summary
A combination of four different searches for general gauge-mediated (GGM) supersymmetry (SUSY) in final states with photons and a large transverse momentum imbalance was performed. Based on the event selection of the individual searches, four event categories were defined. Overlaps between the categories were removed by additional vetoes designed to maximize the sensitivity of the combination. Using data recorded with the CMS detector at the LHC at a center-of-mass energy of 13 TeV, and corresponding to an integrated luminosity of 35.9 fb$^{-1}$, the combination improves the expected sensitivity of the individual searches.
Table 3: Predicted pre-fit background yields, where the values are not constrained by the likelihood fit, the observed number of events in data, and the post-fit background yields after the constraint from the likelihood fit for all search bins used in the combination. In addition, the range covered by each individual bin is shown. For these yields, the Diphoton category is not included and the diphoton veto is removed to increase the sensitivity of the Photon+$H_\mathrm{T}^\gamma$ category to strong production of gluinos.
The results are interpreted in the context of GGM SUSY and in simplified models. The sensitivity of the combination is also interpreted across a range of branching fractions, allowing for generalization to a wide range of SUSY scenarios. The results of the GGM scenario are expressed as limits on the physical mass parameters. Here, chargino masses up to 890 (1080) GeV are excluded by the observed (expected) limit across the tested neutralino mass spectrum, which ranges from 120 to 720 GeV. In electroweak production models, limits on neutralino masses are set up to 1050 (1200) GeV for combined $\tilde{\chi}_1^\pm\tilde{\chi}_1^0$ and $\tilde{\chi}_1^\pm\tilde{\chi}_1^\mp$ production, while for pure $\tilde{\chi}_1^\pm\tilde{\chi}_1^\mp$ production these limits are reduced to 825 (1000) GeV. For a strong production scenario based on gluino pair production, the highest excluded gluino mass is 1975 (2050) GeV. The combination improves the expected limits on neutralino and chargino masses by up to 100 GeV, while the expected limit on the gluino mass is increased by 50 GeV compared to the individual searches. | 6,746.6 | 2019-07-01T00:00:00.000 | [
"Physics"
] |
Self-Calibration of an Industrial Robot Using a Novel Affordable 3D Measuring Device
This work shows the feasibility of calibrating an industrial robot arm through an automated procedure using a new, low-cost, wireless measuring device mounted on the robot’s flange. The device consists of three digital indicators that are fixed orthogonally to each other on an aluminum support. Each indicator has a measuring accuracy of 3 µm. The measuring instrument uses a kinematic coupling platform which allows for the definition of an accurate and repeatable tool center point (TCP). The idea behind the calibration method is for the robot to automatically bring this TCP to three precisely known positions (the centers of three precision balls fixed with respect to the robot’s base) and with different orientations of the robot’s end-effector. The self-calibration method was tested on a small six-axis industrial robot, the ABB IRB 120 (Västerås, Sweden). The robot was modeled by including all its geometrical parameters and the compliance of its joints. The parameters of the model were identified using linear regression with the least-squares method. Finally, the performance of the calibration was validated with a laser tracker. This validation showed that the mean and the maximum absolute position errors were reduced from 2.628 mm and 6.282 mm to 0.208 mm and 0.482 mm, respectively.
Introduction
In the past two decades, metrology equipment and methods for industrial robot arm calibration [1] have progressed tremendously, fueled by an ever-increasing demand for higher accuracy. Most manufacturers no longer want to teach robot poses manually and rely solely on the high repeatability of industrial robots. This approach is inflexible and time-consuming.
One alternative to reduce the costs associated with this method is to use offline programming software to plan the robot movements. However, because industrial robots are precise but not accurate, this method often results in poor accuracy when the program is transferred to the real robot and requires numerous touchups. Therefore, a calibration procedure is required to increase the robot's accuracy. Unfortunately, most calibration methods involve expensive measuring devices such as laser trackers. This problem can be addressed by designing new calibration instruments and methods that provide similar post-calibration results at a low cost.
The ideal robot calibration method should be fully automated, executable on-site, quick to set up and perform and, of course, highly effective. At the same time, the measuring instruments used for robot calibration should not only be accurate (volumetric accuracy better than 0.1 mm), but also easy to use and affordable. Therefore, the main novelty of our device and approach is the design and use of a special calibrator plate for defining the robot's TCP.
The proposed measuring device and its accessories (ball plate, kinematic coupling platform) are affordable (costing about $5000) and inexpensive to repair (in the event of a collision), but can be used to position the robot's TCP onto the center of a datum sphere with an offset smaller than the repeatability of the robot (in our study, 0.010 mm). This means that if the coordinates of sufficiently many datum spheres are measured on a CMM, our measurement scheme can be as accurate as one using a laser tracker. Of course, in practice, dozens of such spheres cannot be used, so an important objective is to find a reasonably low number of datum spheres that keeps the performance of this approach comparable to that of robot calibration using a laser tracker.
This device was first described in [24], where it was used to calibrate a small industrial robot. However, the results for the robot's position accuracy after calibration were rather poor when validated in the robot's whole workspace. This paper presents significant improvements over our previous work. Firstly, the post-calibration results are greatly improved thanks to the use of a comprehensive mathematical model for the robot. This mathematical model is more complete with the addition of a parameter for consecutive parallel axes. Using this parameter in combination with the level 3 non-kinematic parameters on joints 2-6 yields much better calibration results in the entire workspace of the robot. In other words, it is demonstrated that the measuring device, when used in combination with an appropriate mathematical model and an optimal set of robot configurations, can provide calibration results in the complete workspace of the robot that are more than five times better than the results presented in our previous paper. Secondly, a novel methodology was used to validate the performance of each set of identified parameters, namely performing multiple identifications (more than 2000 in total) of the robot's parameters. Usually, authors present the results of their calibration with only one identification of the parameters, which might be the best result achieved after multiple optimizations, or just plain luck. Besides, most authors present validation results for only a few poses (typically fewer than 100). This new method of characterizing the calibration performance of a measuring instrument gives much more credibility to the results, as it shows the impact of the number of configurations selected and the range of post-calibration accuracy that can realistically be achieved with such a device. When a device like TriCal is used in industry, it is not possible to validate the performance of the calibration with a laser tracker due to budget constraints. Therefore, the engineer must rely on the probability that the calibration will be successful. Our new method provides insight into how to evaluate the probability of a successful calibration based on the number of measurements. This paper is structured as follows. First, the proposed device and reference artifacts, the measuring procedure and uncertainty estimation, and the method and theory used for identifying the robot's parameters are described in Section 2. Section 3 presents the experimental results. Finally, the discussion and conclusion are presented in Section 4.
TriCal and Its Accessories
TriCal, the novel device used in our work (Figure 1), is mounted on the flange of a robot and measures the relative position of stationary 12.7 mm (0.5 in) precision balls ( Figure 2) with the help of three digital indicators. It is, however, important to understand that the accuracy of TriCal is not uniform. The device is highly accurate only when the center of the ball is in the vicinity of the TCP, where all digital indicators show no more than a few micrometers. In other words, TriCal is a device used only to bring the robot's TCP to a known position with respect to the robot's base.
TriCal (Figure 1) consists of three Mitutoyo ID-C112XB indicators (Kanagawa Prefecture, Japan). Each indicator has an accuracy of 0.003 mm and a measuring range of 12.7 mm. The indicators are supported by an aluminum conical bracket and are orthonormal to each other. Finally, three magnetic nests are mounted on the bracket to receive the 12.7 mm precision balls used when defining the TCP. Each digital indicator is connected through a statistical process control (SPC) cable to a Mitutoyo U-WAVE-T wireless transmitter, which transmits the measurements to a Mitutoyo U-WAVE-R wireless receiver. The receiver, which is connected through a universal serial bus (USB) cable to a personal computer (PC), stores the data from the transmitters as soon as a measurement changes. This outcome is achieved by setting the transmission parameters to "Event Driven Mode." In order to retrieve the measurements stored in the receiver's memory, an American Standard Code for Information Interchange (ASCII) string is sent from MATLAB 2014a. The information acquired in MATLAB is then sent to the robot's controller via a local area network. The communication setup between the robot, the PC, and TriCal is presented in Figure 3.
As already mentioned, TriCal is used to bring a virtual TCP to a specific position. Therefore, a crucial step is the ability to precisely define this TCP with respect to TriCal's body. This is achieved through the use of a special kinematic coupling platform (Figure 4), which is essentially a star-shaped aluminum fixture holding a magnetic nest in its center (from Hubbs Machine and Manufacturing, Cedar Hill, AL, USA) and three vee-blocks (from Bal-tec, Los Angeles, CA, USA) at its extremities. The purpose of the kinematic platform is to locate a 12.7 mm precision ball at the TCP of the device described in the previous section, in a highly repeatable manner. Once the kinematic platform is positioned over the measuring device, as shown in Figure 4, all three digital indicators are zeroed (with their "set" buttons). It has been demonstrated that this TCP position is highly repeatable by coupling and decoupling the kinematic platform from TriCal's body multiple times and reading the values on the three digital indicators: those values were still 0.000 mm every time the kinematic coupling platform was constrained between the vee-grooves. The same mating configuration, out of the three possible ones, must be used for the repeatability to remain 0.000 mm.
Therefore, the location of the TCP can be measured precisely by a CMM by making sure that the same configuration is used. Finally, an arbitrary number of datum spheres is required for gathering measurements with the TriCal. In this paper, for simplicity, only three datum spheres are used, because their relative positions can be measured promptly using a ballbar from Renishaw (Gloucestershire, UK). Furthermore, the placement of those three datum spheres is not optimized with respect to the robot's base. The problem of choosing the optimal number of datum spheres and their optimal placement will be studied in the future.
The ball plate (Figure 2) used in this study is composed of an aluminum triangular platform with three magnetic nests for 12.7 mm balls, mounted on risers from Renishaw. The plate is mounted on an articulating platform from Thorlabs (Newton, NJ, USA), which allows the operator to vary the orientation of the platform. The nests are placed approximately 300 mm apart so that the exact distance between them can be measured with a telescoping ballbar from Renishaw. Furthermore, nests and balls are utilized instead of tooling balls so that our work can easily be validated with an ION laser tracker from FARO (Lake Mary, FL, USA) by replacing the balls with 12.7 mm spherically mounted reflectors.
Measurement Procedure
The measurement procedure can be divided into two main operations: semi-automated steps and a fully-automated centering procedure. Note that several semi-automated steps are needed only the first time the device is used on a particular robot cell. The whole calibration process was executed on an ABB IRB 120 robot equipped with the new measuring device, as depicted in Figure 2. This particular setup will be referred to as setup 1 (measurements for identification).
The measuring device is used to gather measurements, but before it can be used safely and in an automated fashion, three semi-automated steps should be executed. The first semi-automated step is performed to define a reference position on each of the digital indicators by using the kinematic platform. To do so, three 12.7 mm precision balls are positioned on the magnetic nests of the measuring device. Then, the vee-grooves of the kinematic platform are mated to the precision balls. The operator resets each indicator to zero (0.000) when the platform is fully-constrained onto the measuring device. This step takes approximately one minute to complete.
The second semi-automated step is performed to identify the position of the TCP with respect to the wrist of the robot. This procedure requires four robot configurations. To register each of the required joint targets, the operator must jog the robot until the three stems of TriCal are in contact with any one of the three precision balls of the ball plate. Then, the operator can start an automated centering procedure. This procedure is programmed both in MATLAB and in RAPID and is explained later. Its purpose is to move the measuring device until all three digital indicators display "0.000". Once reached, the current joint target (robot configuration) is saved in the robot's controller. This process is repeated three times on the same precision ball that was selected for the first measurement. The position of the TCP with respect to the wrist can be found by minimizing the Cartesian errors at the end-effector. This can be accomplished by using the forward kinematic equations and an approximate TCP position, as is usually done by industrial robot manufacturers to identify the TCP location. The second semi-automated step takes approximately 15 min to complete.
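The TCP identification described above reduces to a small linear least-squares problem: for each of the four configurations, the known flange pose maps the unknown tool offset onto the same unknown ball center. The sketch below illustrates this with synthetic flange poses standing in for the robot's forward kinematics; all numerical values are hypothetical.

import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def identify_tcp(flange_rotations, flange_positions):
    # Least-squares solution of R_i @ t + o_i = p, where t is the unknown tool
    # offset (TCP in the flange frame) and p the unknown, common ball center.
    A = np.vstack([np.hstack([R, -np.eye(3)]) for R in flange_rotations])
    b = np.concatenate([-o for o in flange_positions])
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]

# Synthetic check with four flange orientations (the orientations must differ
# about more than one axis, otherwise the system is rank deficient).
t_true = np.array([10.0, -5.0, 120.0])      # hypothetical tool offset, mm
p_true = np.array([400.0, 150.0, 300.0])    # hypothetical ball center, mm
Rs = [np.eye(3), rot_z(0.8), rot_x(0.6), rot_z(0.5) @ rot_x(-0.7)]
os_ = [p_true - R @ t_true for R in Rs]     # flange origins that put the TCP on the ball
t_est, p_est = identify_tcp(Rs, os_)
print(np.round(t_est, 3), np.round(p_est, 3))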
Once the TCP has been found, it is used to identify the positions of each of the three precision balls on the ball plate in the robot's internal coordinate system. The measuring device must be brought to each of the three balls by jogging. Then, on each ball, the automated centering procedure is executed. When all three indicators show "0.000", the robot pose is saved. The step takes approximately 20 min.
As soon as the semi-automated steps are completed, the measurements can be collected automatically on each of the three 12.7 mm precision balls through an automated centering procedure (i.e., once the robot's TCP coincides with the center of one of the three balls, we take the angle readings of the six joints). The algorithm is shown in Algorithm 1. For the experiment, the maximum error, ε, was set at 0.010 mm. Ideally, this value should be 0. However, if the value is below the robot's position repeatability, the time to measure one configuration will double in some cases. At ε = 0.010 mm, each robot configuration takes approximately 20 s to be measured. The only measurements that can be collected with setup 1 are the positions of the centers of the three balls with respect to {W}. Let p_T^W be the position vector of {T} with respect to {W}, and let d_ij be the distance measured between the centers of balls i and j (i = 1, 2, 3; j = 1, 2, 3; i ≠ j). The three position vectors that can be measured represent the centers of balls 1, 2, and 3. The distances between each pair of magnetic nests were measured with a Renishaw QC-20W ballbar. The rationale for using a ballbar instead of a traditional CMM is that it is more accurate but also much cheaper.
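Algorithm 1 itself is not reproduced here; the sketch below shows one plausible form of the automated centering loop, assuming that the three orthogonal indicator readings can be treated directly as the TCP offset expressed in the tool frame. The functions read_indicators and move_tcp_tool_frame are hypothetical stand-ins for the U-WAVE receiver query and the robot-controller motion command.

import numpy as np

EPS = 0.010  # mm, centering tolerance used in the experiment

def centering_procedure(read_indicators, move_tcp_tool_frame, max_iter=20):
    # Iteratively drive the TCP onto the ball center: read the three indicator
    # values, command a correction that cancels them, and stop once every
    # reading is within EPS (at which point the joint angles would be saved).
    for _ in range(max_iter):
        readings = np.asarray(read_indicators())      # [d1, d2, d3] in mm
        if np.all(np.abs(readings) <= EPS):
            return True
        move_tcp_tool_frame(-readings)                # correct along the stems
    return False

# Toy stand-in for the hardware: a fixed true offset that the loop removes.
state = {"offset": np.array([0.180, -0.065, 0.240])}
ok = centering_procedure(lambda: state["offset"].copy(),
                         lambda d: state.update(offset=state["offset"] + d))
print("centered:", ok, "residual:", state["offset"])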
A total of 360 randomly-generated robot configurations (120 robot configurations per precision ball) were subsequently measured with this experimental setup.
The measurement uncertainties of this new calibration method are caused by the inaccuracy of the digital indicators, the mechanical tolerances on each component, the experimental conditions, and the measuring procedure. Specifically, Tables 1 and 2 show the uncertainty estimation associated with each source of errors for the measuring device and the measuring procedure, respectively. TriCal is used as a constraining device, thus some of the sources of errors are negligible or can be reduced significantly by adjusting the parameters and conditions of the calibration process. For instance, the maximum angular deviation (0.122°) of the digital indicators might seem important. However, considering that the robot constrains the measuring device within 0.010 mm with respect to the center of the 12.7 mm precision ball, the projected source of error is orders of magnitude smaller than the other sources of errors. Also, if the calibration can be performed in a temperature-controlled environment, the sources of error related to thermal expansion can be neglected. Furthermore, if more time can be spent on the calibration, the automated centering tolerance ε can be lowered to 0, thus reducing the sources of errors. Under these conditions, and assuming that those errors represent a worst-case scenario (i.e., all those sources of errors add up), the measuring device's absolute accuracy is approximately 9 µm, which is less than the robot's repeatability (10 µm). It is important to note that the uncertainty associated with the hysteresis and the friction of very small end-effector displacements was not quantified within the automated centering error. The negative effects of this type of uncertainty can be diminished by moving the robot with larger movements. More precisely, the robot end-effector can be moved significantly away from the target precision ball, with the condition that it should maintain contact with it. Then, using the measurements on the three indicators, another single movement attempt can be made to move the end-effector directly onto the target within the desired tolerance. This procedure can be repeated until the robot is finally at the desired location. However, the calibration would then take more time to perform. The combined maximum error would be less than 27 µm if the sources of errors of the measuring procedure are added to those of the measuring device.
Table 1. Sources of errors related to the measuring device.

Sources of Errors | Uncertainty
Accuracy of each digital indicator ID-C112XB | 3.00 µm
Tolerance on diameter of measuring balls | 2.50 µm
Tolerance on diameter of contact point spheres of indicators | NA
Maximum angular deviation of digital indicators (machining) | 0.122°
Projected angular deviation considering ε = 0.010 mm | 0.02 µm
Combined maximum error | ~9 µm

Table 2. Sources of errors related to the measuring procedure.
Calibration Model and Identification Method
An accurate identification of the robot calibration model's parameters is crucial to improve the absolute position accuracy. The measurement set used for identification should also be optimal for the identification method that is employed. To that end, an observability optimization was performed on a large pool of robot configurations to select the optimal configurations for the identification process. Then, the method of least squares was used to identify the robot parameters using the optimized robot configurations.
The robot was modeled using the Denavit-Hartenberg (D-H) parameters as per Craig's convention [25]. Furthermore, an additional parameter was added to consider consecutive parallel axes [26]. Figure 5 shows all the link frames. The base frame is denoted by {0}. The robot's nominal parameters are shown in Table 3.
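For reference, under Craig's modified D-H convention with a Hayati-style rotation β for consecutive parallel axes (the conventions cited above), the link transformation typically takes a form such as

T_i^{i-1} = R_x(\alpha_{i-1})\, D_x(a_{i-1})\, R_z(\theta_i)\, D_z(d_i)\, R_y(\beta_{i-1}),

where R_Q and D_Q denote a homogeneous rotation about and translation along axis Q, and β_{i-1} is nonzero only when consecutive joint axes are parallel (here, joints 2 and 3). This is a sketch of the convention under those assumptions, not necessarily the exact parameterization used by the authors.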
The homogeneous matrix linking each successive pair of frames of the robot is expressed in terms of the D-H parameters α_{i−1}, a_{i−1}, θ_i, and d_i (with sθ_i = sin θ_i and cθ_i = cos θ_i), where R_Q is the homogeneous rotation matrix around axis Q, D_Q is the homogeneous translation matrix along Q, and T_i^j is the homogeneous matrix representing the pose of frame {i} with respect to frame {j}. The rotation parameter β_{i−1} addresses the problem of the proportionality of the model [27]. To obtain the homogeneous matrices of the base frame {0} with respect to the world frame {W}, and of the tool frame {T} with respect to the flange frame {6}, an additional homogeneous transformation was used; its nominal parameters are presented in Table 4. The calibration model includes a total of 31 parameter errors: 26 kinematic parameters and 5 non-kinematic parameters, as seen in Tables 5 and 6. The parameter errors associated with link 1 are not considered, because they are dependent on the base parameters. Also, axes 2 and 3 are parallel, so only one of either δd_2 or δd_3 should be included in the calibration model; therefore δd_2 was arbitrarily chosen for removal. The tool parameters are also not included for identification because the position of the tool is measured with a 3-axis CMM with respect to the robot's last axis frame. Note that these parameters do not need to be measured frequently, as long as TriCal is manipulated with care. The tool orientation parameters cannot be incorporated into the model because the measuring instrument provides only three-dimensional position measurements.

Table 4. Tool and base nominal parameters.
Frame | x (mm) | y (mm) | z (mm)
The compliance in each gearbox is modeled as a linear torsional spring, as presented in [6]. The compliance in the gearbox of the first joint is not included because no torque is applied to this joint when the robot is not moving, as the joint axis is vertical. The torque on each of the other five joints is calculated with the iterative Newton-Euler algorithm [25].
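In such a model, the deflection of each joint is proportional to the gravity-induced torque acting on it, e.g.

\delta\theta_i = c_i\,\tau_i(\mathbf{q}), \qquad i = 2, \dots, 6,

where c_i is the identified joint compliance and τ_i the torque obtained from the Newton-Euler computation; the corrected joint angle θ_i + δθ_i is then used in the forward kinematics. This is a generic statement of the linear torsional-spring assumption, not a verbatim equation from the paper.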
First, the vector ρ of all constant parameters of the robot model is defined; among these, χ_B and χ_T denote the vectors of the parameters of the base and the tool, respectively. Then, by using forward kinematics, the pose of {T} with respect to {W} can be expressed as a function of the constant parameters (ρ) and the variable parameters (q, τ), and the position vector x = [x, y, z]^T is extracted from the corresponding homogeneous matrix. By assuming that the parameter errors are small, the difference between the position measurements of one configuration (i.e., x_mes, y_mes, z_mes) and the position obtained by calculating the forward kinematics (i.e., x, y, z) of the calibrated model can be linearized as ∆x = J ∆ρ, where J is the Jacobian matrix of x with respect to ρ. The concatenation of all the n measurements that are used for identification stacks these residual vectors and Jacobian matrices into a single linear system of the same form. To find the variation of the parameters ∆ρ, the identification Jacobian must be inverted. However, since the identification Jacobian is not square, the least-squares solution ∆ρ = (J^T J)^{-1} J^T ∆x is used, which is equivalent to ∆ρ = J⁺ ∆x, where J⁺ is the Moore-Penrose inverse of J. Next, an observability assessment was performed in order to obtain the best sets for identifying the robot's parameters. The observability index O_1 [28] was selected for optimization as it had been previously demonstrated to give better results when the robot model incorporates non-kinematic parameters [29]. This observability index is O_1 = (σ_1 σ_2 ··· σ_m)^{1/m} / √n, where n is the number of configurations in the set, m is the number of parameters of the model, and σ_i are the singular values of the identification Jacobian matrix. To optimize the observability index of a set of n robot configurations, the DETMAX algorithm was used [30] on a large initial pool of N robot configurations. When the DETMAX algorithm finishes, it outputs a set of n robot configurations with an optimal observability index. The robot's parameters are then identified using this optimal set of robot configurations with the least-squares optimization method described above.
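A compact numerical sketch of the core of this identification step is given below; it assumes the standard definition of the O_1 index (geometric mean of the singular values of the Jacobian over √n) and uses a random matrix in place of the real identification Jacobian, so it only illustrates the linear algebra, not the full calibration.

import numpy as np

def identify_parameters(J, delta_x):
    # One least-squares update of the model parameters:
    # delta_rho = J^+ @ delta_x, with J the stacked identification Jacobian
    # (3n x m) and delta_x the stacked position residuals (length 3n).
    return np.linalg.pinv(J) @ delta_x

def observability_o1(J, n_configs):
    # O1 index: geometric mean of the singular values of J divided by sqrt(n),
    # computed in log space for numerical safety.
    s = np.linalg.svd(J, compute_uv=False)
    return np.exp(np.mean(np.log(s))) / np.sqrt(n_configs)

# Toy usage with a random Jacobian standing in for the real one.
rng = np.random.default_rng(0)
n, m = 75, 31                        # 75 configurations, 31 parameter errors
J = rng.normal(size=(3 * n, m))
delta_x = rng.normal(scale=0.1, size=3 * n)
print(identify_parameters(J, delta_x)[:5])
print(observability_o1(J, n))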
Results
In this section, a dispersion analysis of the identification procedure using multiple optimized sets of robot configurations of various sizes is introduced. Then, the performance of the calibration using a set of 75 optimized robot configurations for identification is presented. Next, the performance of this new method is compared to an earlier calibration method using the same calibration device. Finally, the results of this new calibration method are compared with the results of other works performed on the same robot.
Calibration Dispersion Analysis
Several parameter identifications were performed to assess the dispersion of the calibration results when using different set sizes. More precisely, a total of 360 robot configurations were initially measured using setup 1 (i.e., using TriCal as a constraining device on the three precision balls of the ball plate). Then, multiple sets containing between 20 and 100 robot configurations were formed using those 360 configurations. Thirty identifications were performed on each set size (i.e., in each identification, the number of robot configurations is the same, but not the configurations themselves). Furthermore, for each set, the observability index was optimized using the initial pool size of 360 randomly generated robot configurations.
The experimental setup on which the robot configurations were measured for validation purposes is shown in Figure 6, and it is referred to as setup 2. This setup still makes use of TriCal, but with a few modifications in order to be able to measure the position of the center ball with a laser tracker. First, the digital indicators were removed from the conical bracket. Then, the 12.7 mm precision ball of the calibrator was replaced by a 12.7 mm spherically mounted retroreflector (SMR), and the calibrator was locked onto the measuring device using a kinematic coupling and rubber bands (not shown). Using balls and an SMR of identical diameters makes it possible to measure exactly the same TCP position with the laser tracker. Finally, the 12.7 mm precision balls of the ball plate were replaced by 12.7 mm SMRs to measure the TCP with respect to exactly the same {W}. The laser tracker was placed approximately two meters in front of SMR 3. According to the manufacturer's specifications, this laser tracker has a typical point-to-point accuracy of 32 µm when using a 2.3 m horizontal scale bar measurement at a measurement distance of five meters from the laser tracker. It is also important to note that the laser tracker was used for validation purposes only, and not to acquire measurements to identify the robot parameters.
The experimental setup on which the robot configurations were measured for validation purposes is shown in Figure 6, and it is referred to as setup 2. This setup still makes use of TriCal, but with a few modifications in order to be able to measure the position of the center ball with a laser tracker. First, the digital indicators were removed from the conical bracket. Then, the 12.7 mm precision ball of the calibrator was replaced by a 12.7 mm spherically mounted retroreflector (SMR), and the calibrator was locked onto the measuring device using a kinematic coupling and rubber bands (not shown). Using balls and an SMR of identical diameters makes it possible to measure exactly the same TCP position with the laser tracker. Finally, the 12.7 mm precision balls of the ball plate were replaced by 12.7 mm SMRs to measure the TCP with respect to exactly the same {W}. The laser tracker was placed at approximately two meters in front of the SMR 3. According to the manufacturer's specifications, this laser tracker has a point-to-point typical accuracy of 32 µm when using a 2.3 m horizontal scale bar measurement at a measurement distance of five meters from the laser tracker. It is also important to note that the laser tracker was used for validation purposes only, and not to acquire measurements to identify the robot parameters. The absolute position errors after calibration were measured with a laser tracker at 506 randomly-generated robot configurations in the complete robot workspace (Figure 7). The only constraint on these configurations is that the SMR is visible to the laser tracker.
The multiple means and standard deviations of the absolute position errors after calibration are displayed as quartiles on a box and whisker plot as shown in Figures 8 and 9, respectively. On those plots, the red crosses are outliers. It can be noted that the performance of the calibration varies significantly for different sets of robot configurations of the same size even when the observability index is optimized. This is especially true for smaller sets (i.e., sets composed of fewer robot configurations used in the identification). Also, the data shows a clear trend. The sets that contain more robot configuration for identification will generally give better calibration results. Consequently, at 75 configurations, the calibration results are similar to those of larger sets. Table 7 shows the descriptive statistics of the identified parameters for 30 sets of 75 robot configurations. According to these statistics, the identified parameters are stable with low standard deviations. This outcome correlates with the low dispersion in the calibration performances. The absolute position errors after calibration were measured with a laser tracker at 506 randomly-generated robot configurations in the complete robot workspace (Figure 7). The only constraint on these configurations is that the SMR is visible to the laser tracker.
The multiple means and standard deviations of the absolute position errors after calibration are displayed as quartiles on a box and whisker plot as shown in Figures 8 and 9, respectively. On those plots, the red crosses are outliers. It can be noted that the performance of the calibration varies significantly for different sets of robot configurations of the same size even when the observability index is optimized. This is especially true for smaller sets (i.e., sets composed of fewer robot configurations used in the identification). Also, the data shows a clear trend. The sets that contain more robot configuration for identification will generally give better calibration results. Consequently, at 75 configurations, the calibration results are similar to those of larger sets. Table 7 shows the descriptive statistics of the identified parameters for 30 sets of 75 robot configurations. According to these statistics, the identified parameters are stable with low standard deviations. This outcome correlates with the low dispersion in the calibration performances.
Absolute Position Errors
The absolute position errors were plotted on the 2D scatter plot shown in Figure 10. The locations of the 10 biggest errors are encircled. Furthermore, two linear regression analyses were performed, one for the absolute position errors with respect to {0}, and the other with respect to the center of the calibration zone (CCZ). The CCZ is located at the center of gravity of the triangle formed by the three measurement positions. The regression analyses are shown in Figures 11 and 12, respectively. Finally, Figure 13 shows the distribution of absolute position errors after calibration as well as the descriptive statistics of the errors before and after calibration. For all these analyses, the parameters of the model were identified with one optimized set of 75 robot configurations using setup 1 (Figure 2) and validated with setup 2 (Figure 6) on the same 506 configurations as previously described (Figure 7). Figure 10 shows that the largest errors are primarily located at the extremities of the robot's workspace. The regression analysis presented in Figure 11 confirms the weak linear relationship between the distance from the TCP to the base of the robot and the absolute position errors.
Naturally, errors are expected to be higher when the arm is fully extended. However, Figure 12 shows no linear relationship between the distance from the TCP to the CCZ and the absolute position errors. In other words, this result shows that this calibration method provides good performance throughout the whole workspace, even though the measurements are collected in a restricted volume in front of the robot.
Comparison of TriCal with Other Calibration Methods
Table 8 shows a comparison between different measurement methods that were performed on the same IRB120 robot. Our method identifies the parameters of the robot model through 30 optimized sets of 75 robot configurations measured with setup 1. With the TriCal, the performance was validated with the same 506 robot configurations in setup 2. In the case of the C-Track, an optical CMM from Creaform (Lévis, QC, Canada), the TCP was a half-sphere with a retroreflective target, while in the case of the laser tracker, an SMR was used (as in the validation of setup 2) [9]. For the identification of the robot's parameters with the laser tracker and the optical CMM, the same least squares method and observability index optimization algorithm were used. Once again, in all three cases, the results were validated with a laser tracker. The data show that the performance of TriCal with three base-mounted spheres is slightly lower than that of the laser tracker or the optical CMM. However, the cost of the TriCal is also significantly lower. This comparison might seem unfair, since the cost of building the prototype is compared against the acquisition cost of those measuring devices. However, the TriCal can easily be custom built.
In reality, all those calibration methods require expert knowledge and a significant amount of time. The time spent on each calibration method depends on many variables. Those variables are the number of configurations required for the identification of the robot's parameters, the number of configurations required for the validation of its accuracy after calibration, the setup time, which can vary depending on the work cell constraints, the experience of the user with the measuring technology, the speed of the robot, etc. Thus, a process time comparison between the different measuring technologies is subjective and inappropriate.
Discussion
In this paper, a novel low-cost, three-dimensional automated measuring device (TriCal) and a robot calibration procedure were presented. TriCal was used to calibrate a six-axis serial industrial robot. It was shown that the TriCal device is nearly as good as a laser tracker for calibrating a small industrial robot. Namely, it was possible to reduce the absolute position errors to a maximum of 0.482 mm, as verified at more than 500 random robot configurations.
The cost of the new 3D measuring device is significantly lower than any other used for robot calibration in industry. For example, a laser tracker typically costs more than $100,000, while TriCal's prototype costs approximately $5000. Moreover, an annual calibration of a laser tracker, let alone a repair, costs thousands of dollars whereas an accident involving TriCal would incur repair costs of no more than several hundred dollars.
In addition to its low acquisition cost, TriCal is less sensitive to variations in atmospheric conditions than a laser tracker. In an industrial environment where temperature, humidity and vibrations cannot be controlled, the TriCal is a safe alternative for field calibration. The TriCal device can even be repurposed for measuring the position repeatability of bigger robots.
The TriCal is arguably among the best measuring tools for performance evaluation and calibration of industrial robots, especially for small and medium enterprises that cannot afford an expensive measuring device for the sole purpose of robot calibration. We are in the process of commercializing this tool.
One very important study that remains to be done, however, is on the optimal number and placement of the datum spheres. Ideally, these datum spheres must be fixed on the same plate on which the robot is fixed, and their positions should be measured on a CMM, with respect to the actual base of the robot. These datum spheres should remain part of the robot cell, even after calibration.
They can be used for automated periodical validation of both the accuracy and the repeatability of the industrial robot.
Supplementary Materials: All procedures described in this paper, including the validation, are detailed in a six-minute video available at https://youtu.be/Tvj-IwQmVBw.
Author Contributions: M.G. designed the experiments, performed the experiments, analyzed the data and wrote the paper, while at the ÉTS. A.J. and I.B. helped with the writing, structure and organization of the paper, and supervised the work.
| 9,704.8 | 2018-10-01T00:00:00.000 | ["Materials Science"] |
Writing in a Polyaniline Film with Laser Beam and Stability of the Record: A Raman Spectroscopy Study
Lines were drawn on polyaniline (PANI) salt films with laser beam, and then the samples were left to age in air at room temperature. Both the irradiated and intact parts of the sample and their ageing were studied with Raman spectroscopy. It was found that the laser-written record is reasonably stable. The degradation of polyaniline by laser irradiation and ageing was compared to the changes in PANI during heating. In all cases, deprotonation and crosslinking of PANI chains proceed but the relative rates of the processes vary with degradation conditions.
When PANI salt is exposed to elevated temperature, the changes at the molecular level manifest themselves by the loss of doping acid molecules and the formation of phenazine-like or quinonoid segments [9]. Besides the chemical changes, the conformation is changed by thermal treatment [14]. This feature is an inherent property of PANI and is not influenced by the nature of the protonating acid. Similar processes take place in PANI salt during ageing at room temperature [15,16].
The changes in the molecular structure of PANI which occur during heating can be conveniently observed by Raman spectroscopy [17–19]. When the excitation laser line falls within the region of a permitted electronic transition of the sample, the Raman intensities associated with vibrational modes coupled with the excited electronic states can be markedly increased due to a resonance effect. The Raman features of semiquinone radical structures typical for protonated emeraldine salt are enhanced by near-infrared excitation; in contrast, the Raman features of quinonoid units typical for the PANI base are enhanced with a red excitation line [20].
PANI strongly absorbs in the Vis-NIR region; thus, the samples can be locally heated by the irradiation. Deprotonation, degradation, and possible carbonization may occur during Raman spectrum recording [21,22]. These degradation routes (ageing, heating, and laser irradiation) have similar effects on PANI [9,21,22], but they have not been directly compared.
The possibility to locally deprotonate a PANI film with a laser beam opens an interesting opportunity for organic electronics. Conducting paths of PANI salt films can be separated by lines of the deprotonated PANI base. In this work, the writing on a PANI film with a laser beam and the ageing of the record are studied. Raman spectra of chemically and thermally deprotonated PANI films are compared with the PANI film after laser irradiation and ageing in air.
Materials and Methods
2.1. Preparation of PANI Films. Polyaniline was prepared by the oxidation of 0.2 M aniline hydrochloride (Fluka, Switzerland) with 0.25 M ammonium peroxydisulfate (APS) (Lach-Ner, Czech Republic) in water [23] at room temperature. The films were deposited in situ on glass and gold-coated glass windows, 13 mm in diameter. A couple of films were deprotonated with an excess of 1 M ammonium hydroxide to the PANI base. The films were then rinsed with acetone and dried in air.
2.2. Heating. PANI films were heated in a ceramic oven in a nitrogen atmosphere (thermal degradation). The heating was switched on, and the temperature was increased at 22 °C·min−1. After 100 °C was reached, the heating was switched off and the sample was left to cool, still in the flowing nitrogen stream.
2.3. Ageing. PANI films were left in air at room temperature and analysed at selected times up to two months.
2.4. Laser Irradiation. Lines were drawn on PANI-S films with the 1064 nm excitation laser of a Thermo Nicolet 6700 FTIR spectrometer (with a silicon-coated calcium fluoride beam splitter and an NXR FT-Raman module with a microscope accessory) by moving the sample stage of the microscope continuously at two different speeds while simultaneously observing the sample in white light and irradiating it with the laser. Laser degradation was also induced by 514 nm and 633 nm excitation lasers of various powers by irradiation of separate spots.
2.5. Spectroscopic Characterization. Raman spectra excited in the visible range with HeNe 633 nm and Ar-ion 514 nm lasers were collected on a Renishaw inVia Reflex Raman spectrometer. A research-grade Leica DM LM microscope with an objective magnification of ×50 was used to focus the laser beam on the sample placed on an X-Y motorized sample stage. The scattered light was analyzed by a spectrograph with a holographic grating with 1800 or 2400 lines mm−1. A Peltier-cooled CCD detector (576 × 384 pixels) registered the dispersed light.
Results and Discussion
3.1. Raman Spectra of Intact PANI-S Films. The Raman spectrum of PANI hydrochloride films was measured with 514, 633, and 785 nm excitation lines (Figure 1). It is expected that the vibrations originating from quinonoid units should be resonance enhanced with a laser excitation wavelength at 633 nm [24,25]; however, these structures are present in the PANI-S only in a very low content and their features are not observed in the spectrum. On the other hand, the typical bands of PANI-S, mainly at 1595, 1504, 1335, and 1182 cm−1, are present. Positions and assignments of all the present bands are listed in Table 1.
In the Raman spectrum of PANI hydrochloride films excited with the 514 nm laser line (Figure 1, green line), in contrast with the spectrum obtained with the 633 nm excitation line, the most intense peak in the ring stretching region is observed at 1620 cm−1 and in the C-H deformation region at 1194 cm−1; these bands are connected with benzenoid units [26]. This excitation line is not in resonance with any form of PANI; thus, the band intensity follows the concentration of benzenoid/quinonoid rings more reliably.
Using the 785 nm excitation line (Figure 1, black line), different semiquinone radical structures attributed to the C~N+• vibration of variously localized polaronic sites are resonantly enhanced. The spectrum is close to the spectrum excited with the 633 nm line; significant changes can be observed only in the group of bands connected with C~N+• stretching vibrations, where a band at 1377 cm−1 connected with semiquinonoid rings with lower polaron delocalization can be observed in addition to the delocalized polaron band at 1335 cm−1.
Degradation by Laser Beam
PANI is rather sensitive to irradiation and can be damaged by the laser beam during measurement [16]. The excitation wavelength at 514 nm is out of resonance with both PANI emeraldine salt and base, but the photon energy is high. With this excitation line, long accumulation at low laser power is necessary to obtain acceptable quality of the spectra without any damage to the sample. The spectra change quickly with increasing laser power (Figure 2(a)).
At laser power up to 10%, the band at 1335 cm−1 decreased and the band at 575 cm−1 increased. These changes can be attributed to partial deprotonation and formation of phenazine-like structures [15,22]. At higher intensities, the bands broadened, bands at 514, 1402, and 1620 cm−1 disappeared, the band at 575 cm−1 shifted to 560 cm−1, and the band at 1335 cm−1 decreased significantly. Further deprotonation, random crosslinking, and bond breakage take place [15,44].
The excitation at 633 nm is in resonance with quinonoid structures, so partial deprotonation of PANI-S is detected easily (Figure 2(b)). Raman spectra can be safely obtained without damage to the sample at a higher power than with the green line; the Raman spectra of PANI-S measured with the 633 nm line of the HeNe laser were obtained in acceptable quality without any damage to the sample. At higher laser power, the band at 430 cm−1 shifted to 419 cm−1; peaks at 577 cm−1, 607 cm−1, 730 cm−1, 1230 cm−1, 1400 cm−1, and 1600 cm−1 increased; and bands at 1566 cm−1 and 1638 cm−1 appeared. The band at 1170 cm−1 and the shoulder at 1470 cm−1 decreased. These changes are connected with deprotonation of the sample [45]. The 1064 nm excitation is in resonance with the π-polaron transition of PANI-S, and it is highly absorbed. In addition, the Raman cross-section is significantly lower at longer excitation wavelengths, so higher power had to be used. As a result, it is practically impossible to obtain a Raman spectrum of non-damaged PANI-S with this excitation line. For this reason, the lines were first drawn on the PANI-S sample with the 1064 nm excitation line and then analyzed with the low-power 633 nm excitation line, which is, on the other hand, the gentlest way of measuring a PANI film (Figures 2(c) and 2(d)). The fact that our FT-Raman setup allows easy and controlled movement of the sample against the laser beam is an advantage for deliberate damage, i.e., for drawing on the PANI-S film. The treated area changes its color and is visible even by the naked eye (Figure 3).
There are significant changes in the spectra when irradiated with the 1064 nm laser line. The irradiation with power up to 1 W with high-speed movement of the laser beam on the sample causes partial deprotonation and formation of phenazine-like crosslinks, as manifested by the appearance of the bands at 1563, 1463, 1390, and 606 cm−1. The irradiation with 0.2 W at low speed deprotonates the sample completely. Higher power causes transformation to an amorphous carbon analogue [9,46].
Degradation by Ageing in Air
The samples with the burned record were left to age in air. The lines on the films faded, as can be seen in Figure 3. The changes of the PANI film at the line drawn with 0.2 W laser power at low speed (Figure 4) and out of the line (Figure 5) are well illustrated by the Raman spectra. The PANI-S film partially deprotonates by ageing, as followed by the increase of intensity of the Raman bands at 1470 cm−1 and 1223 cm−1 and the decrease of intensity of the bands at 1338 and 1257 cm−1. In addition, bands at 1376, 780, 750, and 730 cm−1 appear and the band at 587 cm−1 shifts to 577 cm−1. This is connected with the formation of phenazine-like crosslinks [15]. However, the level of protonation is still high after two months.
The Raman spectra of the line burnt with the 1064 nm excitation wavelength do not change much with time. The shoulder at 1610 cm−1 increased, the band at 1470 cm−1 shifted to 1480 cm−1, and a small band appeared at 810 cm−1. The band at 1335 cm−1 increased slightly. All changes were already observed after two days and can be understood as stabilization of the thermally deprotonated sample. First, the volatile chloride anions are removed but the conformation and electron distribution are not yet changed accordingly, as suggested by the position of the C=N stretching band at 1478 cm−1, typical for short pernigraniline-like or phenazine-like oligomers [27,32,40] (C-C stretching in emeraldine salt is also present near this Raman shift [29,47]), and by the strong dominance of the C=C stretching band of quinonoid rings at 1560 cm−1 in the ring stretching region. Later, the band of benzenoid ring stretching at 1605 cm−1 appears and the C=N stretching band shifts to the standard position for the PANI emeraldine base at 1468 cm−1 [25,32]. These changes can be connected with conformational relaxation in the new deprotonated state. In fact, the already stabilized spectrum was observed before as the spectrum of thermally deprotonated PANI-S [46]. In this case, the spectra of the fresh record were measured within an hour after recording.
Degradation by Heating in Inert Atmosphere
Another method of PANI degradation which has been studied is heating in an inert atmosphere (Figure 6). A PANI-S film on a silicon support was heated to 100 °C and analyzed with Raman spectroscopy [46]. The Raman spectrum obtained with the 633 nm laser displays mainly the increase of the bands connected with quinonoid structures (1595, 1558, 1468, 1415, and 1222 cm−1), which are resonantly enhanced, and the decrease of the intensity of benzenoid (1620, 1257 cm−1) and semiquinonoid structures (1337 cm−1) after heating the sample at 100 °C [29]. The broad band of C=N stretching vibrations in quinonoid units at 1480 cm−1 becomes the strongest band of the spectrum. The band connected with localized polarons at 1380 cm−1 appeared.
The Raman spectrum obtained with the 514 nm excitation line is very similar to the spectrum excited with the 633 nm line. This suggests that the quinonoid units are really dominant in the film and not just brought up by the resonance enhancement with the 633 nm excitation line.
With the 785 nm excitation, smaller changes of the spectrum of the original PANI-S films are detected. The spectrum differs significantly from the spectra excited with visible laser lines. The semiquinonoid structures are resonantly enhanced with this excitation, so the main emeraldine salt character is preserved in the spectrum. A small band at 1465 cm−1, corresponding to the C=N vibration in quinonoid units, appeared; the band at 1220 cm−1 increased relative to the band at 1257 cm−1; and the intensity of the peak at 1330 cm−1 decreased, while the band at 1378 cm−1 did not change and a small band at 1415 cm−1 appeared.
The changes in the Raman spectra correspond to partial deprotonation, which is sensitively detected with the 633 nm excitation, and to the formation of quinonoid and phenazine-like defects. At the defective sites, localization of polarons takes place, resulting in the Raman bands of localized (1380 cm−1) and highly localized (1400-1415 cm−1) polarons.
Comparison of the Different Types of Degradation.
During all kinds of degradation of PANI films, similar processes like deprotonation, crosslinking, or even carbonization took place; however, their rates differ.
Deprotonation is the dominant process when a moderate energy flow is delivered to the sample, either by heating or by laser irradiation. The Raman spectra of heated and irradiated PANI-S films are virtually identical to that of the chemically deprotonated PANI film (Figure 7); only minor differences in the intensities of the bands of both quinonoid units and residual protonated units imply varying levels of deprotonation.
By simple ageing in air or low-power irradiation, the deprotonation is accompanied by the formation of phenazine-like crosslinks. These changes are gradual. On the other hand, the deprotonated sample does not undergo crosslinking to a detectable extent during ageing. Crosslinking also takes place during heating and irradiation, but only at a higher temperature [46].
Conclusions
All ways of PANI degradation, i.e., ageing in air, heating, and laser irradiation, led mainly to two effects, deprotonation and crosslinking. Strong laser irradiation or heating to temperatures higher than 500 °C [46] led even to carbonization of the film. The rates of these transformations varied with the degradation conditions, but qualitatively, the processes were identical. Chemical deprotonation, however, proceeds in a different manner and has a different impact on the material stability [46–48].
Due to the different rates of the degradation processes at different temperatures, accelerated ageing at increased temperature cannot be reliably used as a model of ageing at room temperature. At elevated temperatures, deprotonation is favored against crosslinking.
In the present paper, it is demonstrated that it is possible to write into a PANI film with a laser beam. The line thickness is large, on the order of tens of micrometers. Such low spatial resolution should be sufficient for certain applications in inexpensive electronics, such as RFID tags. The recorded pattern shows reasonable stability in air at room temperature.
Figure 1: Raman spectra of intact PANI hydrochloride films on silicon support obtained with 514, 633, and 785 nm laser lines.
Figure 2: Raman spectra of a PANI film on gold support burned with various powers of 514, 633, and 1064 nm radiation (for burning with the 1064 nm line, the spectra were measured subsequently with the low-power 633 nm line).
Figure 3: PANI-S on glass and gold with structures drawn with the 1064 nm excitation laser. Line thickness varies from 40 to 100 μm depending on the laser power (from 0.1 to 2 W).
Figure 4: Raman spectra of a line burnt into the PANI-S film on gold with a 0.2 W 1064 nm laser line at low speed, obtained with the 633 nm excitation line at the marked times after burning. The samples were stored at room temperature in air.
Figure 5: Raman spectra of the PANI-S film on gold obtained with the 633 nm excitation line at the marked times after preparation. The samples were stored at room temperature in air.
Figure 6: Raman spectra of the PANI film heated at 100 °C in nitrogen atmosphere, excited with the 514, 633, and 785 nm laser lines.
Figure 7: Comparison of degradation methods on a PANI-S film on gold. Raman spectra are obtained with the 633 nm excitation line.
Table 1: Assignment of the Raman bands of PANI.
| 3,872.4 | 2018-10-04T00:00:00.000 | ["Chemistry"] |
Evolution of rotating massive stars with new hydrodynamic wind models
Context. Mass loss due to radiatively line-driven winds is central to our understanding of the evolution of massive stars in both single and multiple systems. This mass loss plays a key role in modulating the stellar evolution at different metallicities, particularly in the case of massive stars with M* ≥ 25 M⊙. Aims. We extend the evolution models introduced in Paper I, where the mass-loss recipe is based on the simultaneous calculation of the wind hydrodynamics and the line acceleration, by incorporating the effects of stellar rotation. Methods. As in Paper I, we introduce a grid of self-consistent line-force parameters (k, α, δ) for a set of standard evolutionary tracks using Genec. Based on this grid, we analysed the effects of stellar rotation, CNO abundances, and the He/H ratio on the wind solutions to derive additional terms for the recipe with which we predict the self-consistent mass-loss rate, Ṁsc. With this, we generated a new set of evolutionary tracks with rotation for MZAMS = 25, 40, 70, and 120 M⊙, and for metallicities Z = 0.014 (Galactic) and 0.006 (Large Magellanic Cloud). Results. In addition to the expected correction factor due to rotation, the mass-loss rate decreases when the surface becomes more helium rich, especially in the later moments of the main-sequence phase. The self-consistent approach gives lower mass-loss rates than the standard values adopted in previous Genec evolution models. This decrease strongly affects the tracks of the most massive models. Weaker winds allow the star to retain more mass, but also more angular momentum. As a consequence, weaker wind models rotate faster and show a less efficient mixing in their inner stellar structure at a given age. Conclusions. The self-consistent tracks predict an evolution of the rotational velocities through the main sequence that closely agrees with the range of v sin i values found by recent surveys of Galactic O-type stars. As subsequent implications, the weaker winds from self-consistent models also suggest a reduction of the contribution of the isotope 26Al to the interstellar medium due to stellar winds of massive stars during the MS phase. Moreover, the higher luminosities found for the self-consistent evolutionary models suggest that some populations of massive stars might be less massive than previously thought, as in the case of Ofpe stars at the Galactic centre. Therefore, this study opens a wide range of consequences for further research based on the evolution of massive stars.
Introduction
Evolution of massive stars is an important topic in stellar astrophysics. Indeed, some of them are the progenitors of core-collapse supernova events, and give birth to neutron stars and black holes (Heger et al. 2003). Massive stars are also important for the study of nucleosynthesis, production of ionising flux, feedback due to wind momentum, studies of star formation history, and galaxy evolution.
Massive stars with M* ≥ 25 M⊙ are characterised by strong stellar winds and large mass loss rates (Ṁ), which determine the evolution of the star through phases such as those involving Luminous Blue Variables (LBV) and Wolf-Rayet (WR) stars (Groh et al. 2014). The wind properties through all these stages constrain which kind of core-collapse process the star will experience at the end of its life, and what will be the final mass of the remnant (Belczynski et al. 2010). This last item is relevant, because theoretical masses of stellar black holes need to be in agreement with those measured by gravitational waves, in merger events detected by the Advanced-LIGO interferometers (Abbott et al. 2016). The study of the stellar winds has further implications that reach far beyond stellar astrophysics. For instance, the stellar winds from numerous evolved massive stars collide to produce the plasma that fills up the interstellar medium in the Galactic Centre (e.g., Cuadra et al. 2008, 2015; Ressler et al. 2018, 2020; Calderón et al. 2020b). The properties of the stellar winds determine whether the plasma is able to cool and form clumps, which affects the accretion process on to the Galactic supermassive black hole, Sgr A* (Cuadra et al. 2005; Calderón et al. 2016, 2020a). Therefore, a proper determination of the stellar winds is needed to correctly interpret the images of the black hole silhouette obtained by the Event Horizon Telescope Collaboration et al. (2022). These examples highlight that theoretical calculations for the stellar winds can even influence the field of relativistic astrophysics.
In the last decade, computational codes such as Mesa (Paxton et al. 2019) and the Geneva evolution code (Eggenberger et al. 2008, hereafter Genec) have been used to study stellar evolution. The recipe for the theoretical mass loss rate varies, depending on the stellar evolution stage and the region of the HR diagram (HRD). For O-type main sequence (MS) stars, the recipe most commonly adopted has been the so-called Vink's formula (Vink et al. 2001). However, diagnostics of mass loss rates performed in recent years consider that values from Vink's formula are overestimated by a factor of ∼3 (Bouret et al. 2012; Šurlan et al. 2013; Vink 2021), and therefore the quest has been the development of evolution models adopting more updated and accurate recipes for the mass loss rate. In that direction, we mention the studies from Björklund et al. (2022) using Mesa, and Gormaz-Matamala et al. (2022b, hereafter Paper I) using Genec.
In Paper I, we developed new evolution models for stars born with masses Mzams ≥ 25 M⊙, introducing a new recipe for the theoretical mass loss rate based on the self-consistent m-CAK wind solutions from Gormaz-Matamala et al. (2019, 2022a). These new self-consistent tracks show that stars retain more mass through their evolution, therefore remaining larger and more luminous compared with models for massive stars using Vink's formula for Ṁ (Ekström et al. 2012; Georgy et al. 2013; Eggenberger et al. 2021). A similar result is obtained by Björklund et al. (2022), where evolution models adopting new mass loss rates are shifted to higher luminosities in the HRD. It is important to remark that, to our knowledge, Björklund et al. (2022) and Gormaz-Matamala et al. (2022b) are the first studies developing tracks for the stellar evolution of massive stars that adopt a self-consistent treatment for the stellar wind.
In this work, we extend the self-consistent evolutionary tracks introduced in Paper I by including rotation in our models. Rotation is important because it modifies the mass loss rate and the surface abundances due to rotational mixing (Maeder & Meynet 2010), and because it makes the star lose not only mass but also angular momentum through the stellar wind (Georgy et al. 2011; Keszthelyi et al. 2017). Because of the extra details to be considered in the analysis of our results, this work includes only the Galactic (Z = 0.014) and LMC (Z = 0.006) metallicities, leaving the supra-solar (Z > 0.014) and the SMC and lower-metallicity (Z ≤ 0.002) cases to forthcoming studies.
This paper is organised as follows. Section 2 summarises the most relevant results and conclusions obtained from our self-consistent evolutionary tracks in Paper I. Section 3 introduces the updates on Ṁsc to account for the effects of rotation. Section 4 presents the new evolution models adopting self-consistent winds, whereas their comparison with observational diagnostics is analysed in Section 5. In Section 6, we deliberate about the implications of these new evolution models on a Galactic scale. Finally, the conclusions of this work are summarised in Section 7.
Self-consistent evolution models
We call self-consistent evolutionary tracks the evolution models that adopt a self-consistent recipe for the mass loss rate. Self-consistency implies that the wind hydrodynamics (i.e., velocity and density profiles for the wind) are calculated in agreement with the radiative acceleration (i.e., the acceleration over the wind particles due to the stellar radiation). As a consequence, the mass loss rate is a direct solution for the wind under a specific set of stellar parameters (temperature, radius, mass, abundances), without assuming any extra condition a priori for the wind (such as a velocity law or a v∞/vesc ratio). For that purpose, we simultaneously calculate the line-acceleration gline by means of the so-called line-force parameters (k, α, δ) from m-CAK theory (Castor et al. 1975; Abbott 1982; Pauldrach et al. 1986), whereas we calculate the hydrodynamics by solving the stationary equation of motion for the wind using the code HydWind (Curé 2004), for different sets of stellar parameters of massive stars.
Fig. 1. Tracks computed using ṀVink and Z/Z⊙ = 1.0, covering the main-sequence stage (hereafter original tracks); circles represent the location of the selected stellar models to be tabulated in Table 1 of Section 3.
This iterative procedure is hereafter named the m-CAK prescription; it was introduced in Gormaz-Matamala et al. (2019) and it has been demonstrated to provide values for Ṁ of the same order of magnitude as those obtained with self-consistent NLTE studies in the comoving frame, such as Krtička & Kubát (2017, 2018) or Gormaz-Matamala et al. (2021). Moreover, the wind hydrodynamics used as input in the NLTE radiative transport code is able to reproduce reliable synthetic spectra, in agreement with the observations (Gormaz-Matamala et al. 2022a).
However, although the calculation of the line-force parameters has been extended to even cooler temperatures (Lattimer & Cranmer 2021; Poniatowski et al. 2022), in Gormaz-Matamala et al. (2019) our m-CAK prescription is restricted to the range of stellar parameters where (k, α, δ) can be assumed to be constant. This is an important consideration because, even though line-force parameters have been commonly treated as constants (Shimada et al. 1994; Noebauer & Sim 2015), there are studies which suggest that the values of k, α and δ may change radially through the wind (Schaerer & Schmutz 1994; Puls et al. 2000). On this point, Gormaz-Matamala et al. (2022a, Sec. 2) performed a quantitative analysis of the standard deviation and numerical errors in the calculation of our line-force parameters, and by extension of the error margins for the self-consistent wind parameters Ṁ and v∞, finding that the uncertainties in (k, α, δ) can be neglected for stars with Teff ≥ 30 kK and log g ≥ 3.2. Therefore, we set these thresholds as the range of validity for our m-CAK prescription.
Thus, the self-consistent mass loss rate (Ṁsc) derived from the m-CAK prescription is parametrised as a function of the stellar parameters as shown in Eq. 7 from Paper I (hereafter Eq. 1); this fit is written in terms of the intermediate variables w, x, y, and z, which are in turn defined from the stellar parameters.
Table 1: Self-consistent line-force parameters (k, α, δ) calculated for the stellar parameters corresponding to the positions indicated by circles in Fig. 1, together with their resulting terminal velocities and mass loss rates (Ṁsc). Ratios showing the enhancement of the mass loss rate due to rotation, adopting either our self-consistent procedure or MM00, are also shown in the last columns.
This intervariable fitting, besides minimising the separation with respect to the wind solution actually calculated from the line-force parameters, allows a deeper examination of the dependence of the mass loss rate on the stellar parameters. As an example, we found from Eq. 1 that the dependence on metallicity does not follow the classical exponential law Ṁ ∝ Z^a with a constant exponent, but with a ranging from ∼0.53 for M* ∼ 120 M⊙ to ∼1.02 for M* ∼ 25 M⊙. In other words, we can approximate this Z-dependence of the mass loss rate as an intrinsic relationship with the stellar mass, as Ṁsc ∝ Z^[0.4 + 15.75/(M*/M⊙)]. (2) An analogous result, but for an intrinsic luminosity dependence, was found by Krtička & Kubát (2018) for their global wind models, where the exponent for the metallicity was less steep for higher stellar luminosity (and subsequently, stellar mass). Explanations of this intrinsic mass-dependence may rely on the fact that the more massive the star, the nearer it is to the Eddington limit. In that case, the continuum should become increasingly important in contributing to the radiative acceleration grad, to the detriment of the line-driving (see Eq. 6 from Gormaz-Matamala et al. 2021). The effect of the continuum is less dependent on metallicity because it does not involve interaction between the radiative field and absorption lines. This condition for the mass loss rate was previously pointed out by Gräfener & Hamann (2008) for WNL stars, where the Z-dependence was weaker closer to the Eddington limit (i.e., at higher mass). All in all, we can state that Eq. 2 is an important step towards understanding the dependence on metallicity of the mass loss rates at the earlier evolutionary stages of massive stars.
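As a quick arithmetic check of the mass-dependent exponent in Eq. 2 (a sketch only, using the exponent exactly as written above), the values quoted in the text can be reproduced as follows:

```python
# Exponent a(M) of the metallicity scaling  Mdot_sc ∝ Z**a(M),  Eq. (2).
def z_exponent(mass_msun):
    return 0.4 + 15.75 / mass_msun

for m in (25, 40, 70, 120):
    print(f"M* = {m:3d} Msun  ->  a = {z_exponent(m):.2f}")
# Prints a ≈ 1.03 for 25 Msun and a ≈ 0.53 for 120 Msun,
# matching the range ~0.53-1.02 quoted in the text.
```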
Our new recipe for the mass loss rate is implemented inside the Genec code, which runs stellar evolutionary models to explain and predict the physical properties of massive stars (Eggenberger et al. 2008). We used the same prescriptions as described in Ekström et al. (2012); that is, solar abundances from Asplund et al. (2005, 2009), opacities from Iglesias & Rogers (1996), and an overshoot parameter αov = 0.10, except for the mass loss rates. The self-consistent m-CAK prescription has already been incorporated inside Genec for stars satisfying Teff ≥ 30 kK and log g ≥ 3.2. Below such thresholds, the recipe for the mass loss rate reverts to Vink's formula.
Because self-consistent mass loss rates are a factor of ∼3 lower with respect to Vink's formula at the beginning of the MS (i.e., for Teff ≳ 40 kK and log g ∼ 4.0, see Gormaz-Matamala et al. 2022a), self-consistent evolution models predict that stars will retain more mass during their H-core burning stage, and therefore they are larger and more luminous. These changes are proportional to the initial mass of the star; from important differences in the final stellar properties at the end of the MS for Mzams = 120 M⊙, to almost negligible differences for Mzams = 25 M⊙. Also, evolution models adopting Ṁsc predict a smaller production of the isotope Aluminium-26, which means that massive stars contribute in a lesser proportion to feeding the interstellar medium with this isotope, compared to other sources (Palacios et al. 2005; Wang et al. 2009).
Such results are important, but they still represent only a first glance at the implications that the self-consistent wind models have for stellar evolution. The next step is the incorporation of rotation. Stellar rotation is important for stellar evolution for two reasons. First, because the angular momentum in the core at the time of core collapse may strongly impact the final stages of a massive star (Yoon et al. 2006). Powerful stellar winds not only make stars lose mass, but also angular momentum, subsequently affecting the rotation and the evolution of the star (Georgy et al. 2011; Keszthelyi et al. 2017). And second, because rotational mixing (Maeder & Meynet 2010) changes the distribution of the abundances of the elements inside the star, and in particular at the surface (see, e.g., Brott et al. 2011; Ekström et al. 2012). The change of the abundances at the surface is particularly interesting, because a modification in the chemical composition of the atmosphere induces variations in the line-acceleration and therefore in the self-consistent mass loss rate, something that has been either not considered or neglected by previous authors (Björklund et al. 2022).
Enhancing mass loss due to rotational effects
Models by Genec are computed from the formula by Vink (Vink et al. 2001) with the correcting factor induced by rotation as given by Maeder & Meynet (2000, hereafter MM00), that is,
Ṁ(ω)/Ṁ(0) = (1 − ΓEdd)^(1/α − 1) / [1 − ω²/(2πGρm) − ΓEdd]^(1/α − 1), (3)
where Ṁ(0) is the mass loss rate without rotation, α is the line-force parameter from CAK theory, ω is the angular velocity at the stellar surface, ρm is the mean density of the star, and ΓEdd = κL/(4πcGM*) is the Eddington factor, with κ being the total opacity. Notice that Eq. 3 introduces a correction factor independent of the adopted recipe for the mass loss rate without rotation; in other words, Ṁ(0) can be computed either from Vink's formula or from Eq. 1. Actually, Ṁ(ω) results from an integration over the surface, and the integration accounts for the change of the local mass loss rate with latitude. Equation 3 incorporates the impacts on the mass loss rate produced not only by rotation but also by the radiative acceleration due to the continuum (hence the presence of the Eddington factor), through the so-called ΩΓ-limit. This formulation implies that the stellar break-up or Ω-limit (i.e., when the centrifugal forces compensate gravity in a rotating star) is reached for reduced rotation velocities if Γ is large enough. The velocity associated with this break-up is the so-called critical rotational speed, and hence the rotation of the star is commonly expressed as the dimensionless ratio between the rotational velocity and this critical value, Ω = vrot/vcrit. On the other hand, the m-CAK prescription incorporates rotational effects in the hydrodynamic solution calculated by HydWind (Curé 2004), also using Ω as an input. HydWind solves the equation of motion for the stationary standard line-driven theory, by finding the eigenfunction that satisfies the velocity profile, v = v(r), and the eigenvalue, which is proportional to the mass loss rate, Ṁ (Curé & Rial 2007). The rotation factor Ω is implicitly included in Eq. 7 by means of φ = vrot R*/r, whereas the additional terms are the same as described by Araya et al. (2017, 2021) or Gormaz-Matamala et al. (2022a). In this work, we use moderate to intermediate values of Ω (= 0.4); therefore all our wind solutions are in the fast wind regime and not in the Ω-slow wind regime, which starts approximately from Ω ≳ 0.75 (Araya et al. 2018).
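For orientation only, here is a tiny numerical sketch of how the correction factor of Eq. 3 behaves. It uses the MM00 scaling exactly as quoted above, treats the rotational term ω²/(2πGρm) as an abstract input x, and the chosen values of α, Γ and x are purely illustrative, not taken from Table 1.

```python
# Rotational enhancement of the mass loss rate, Eq. (3), written as
#   f = [(1 - Gamma) / (1 - x - Gamma)]**(1/alpha - 1),
# where x stands for omega**2 / (2*pi*G*rho_m).  Illustrative values only.
def mdot_rotation_factor(alpha, gamma_edd, x):
    return ((1.0 - gamma_edd) / (1.0 - x - gamma_edd)) ** (1.0 / alpha - 1.0)

for gamma in (0.1, 0.3, 0.5):
    for x in (0.05, 0.15, 0.25):
        print(f"Gamma={gamma:.1f}  x={x:.2f}  ->  f={mdot_rotation_factor(0.6, gamma, x):.3f}")
# The factor stays close to unity for slow rotation and modest Gamma,
# and grows as the denominator approaches zero (the Omega-Gamma limit).
```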
In order to evaluate the effect of rotation on the self-consistent wind solutions, we repeat the methodology of Paper I and run evolutionary rotating models in Genec for initial masses of 120, 70, 40, and 25 M⊙ adopting Vink's formula for these tracks (hereafter original tracks), as shown in Fig. 1. Then we select a set of representative points over these original tracks and, based on their stellar parameters (Teff, log g, R*/R⊙, Ω), we calculate the line-force parameters (k, α, δ) and their respective self-consistent wind parameters Ṁsc and v∞,sc. The initial rotational velocity for all evolution models is Ω = 0.4, as in Ekström et al. (2012). The results are tabulated in Table 1 where, besides the columns already included in Tables 2, 3 and 4 from Paper I, we add the rotation factor Ω as an extra stellar parameter for the input. For each one of the points, we calculate an independent solution for (k, α, δ) for the tabulated Ω but also for Ω = 0.0 (i.e., without rotation), with their respective self-consistent solutions for the terminal velocity and mass loss rate. The difference in the self-consistent mass loss due to the incorporation of rotation is expressed as the ratio Ṁ(Ω)/Ṁ(0), which is compared with the correction factor predicted by MM00 (i.e., Eq. 3). Here, we found that both approaches for the rotational effects over Ṁ produce similar results, which is remarkable considering the error bars associated with the self-consistent mass loss rate. Based on this outcome, we decided to perform our new evolutionary tracks keeping Eq. 3 as the correction factor for Ṁsc.
The other physical ingredients related to rotation are the same as in Ekström et al. (2012), Georgy et al. (2013) and Eggenberger et al. (2021). Angular momentum is transported by an advective equation as described in Zahn (1992). The horizontal and vertical shear diffusion coefficients are from Zahn (1992) and Maeder (1997), respectively.
Variation in abundances
Besides the stellar parameters such as the temperature, surface gravity, radius and rotation, we included in Table 1 the values for the abundance of helium (expressed as a fraction of the hydrogen abundance by number) and the individual abundances of carbon, nitrogen and oxygen (expressed as fractions of the solar abundances). Due to the nature of the CNO-cycle, the abundances of helium and nitrogen increase over time, whereas carbon and oxygen decrease (Przybilla et al. 2010). The chemical composition of the surface is enriched in material processed in the core by rotational mixing in the outer radiative envelope. Since we assume that the chemical composition in the stellar wind is exactly the same as in the photosphere, the chemical composition of the stellar wind is altered. The question that arises here is, do these alterations in the wind chemical abundances affect the line-acceleration and, afterwards, the self-consistent mass loss rate?
(Figure caption detail: self-consistent tracks are marked with two black dots, which represent the zero-age main sequence and the switch of the mass loss recipe from self-consistent to Vink's formula (when either Teff = 30 kK or log g = 3.2), whereas the end of the main sequence (terminal-age main sequence, TAMS) is represented with a black cross.)
To answer that question, we artificially alter the individual abundances of He and the CNO elements in our stellar models of Table 1, and then calculate the respective line-force parameters and the corresponding difference ∆ log Ṁsc between Ṁsc, the mass losses tabulated in Table 1, and Ṁsc,⊙, the self-consistent mass loss rate calculated if the abundance of the evaluated element were solar.
The results of these modifications of Ṁsc are displayed in Fig. 2a for changes in the He/H ratio, and in Fig. 2b for modifications of the CNO elements. We see that the variation of each one of the CNO elements, even by a factor of ∼10, barely affects Ṁsc, keeping the difference ∆ log Ṁsc close to zero. Indeed, the differences in the mass loss rate are just of the order of ±0.05 in log scale (∼12% of Ṁsc), smaller than the error bars shown in Table 1. On the contrary, the increase in the He/H ratio makes Ṁsc decrease by larger amounts.
The results from Fig. 2 are readily explained by the absolute values of the evaluated abundances. Even though the changes in the CNO abundances are by a factor of ∼10 (compared with the initial values at the ZAMS), they still represent a small percentage of the total composition of the star. On the contrary, the abundance of helium plays a major role because it grows (in mass fraction) from ∼26% up to ∼70%, thus significantly changing the mean molecular weight of the wind. Also, the helium abundance is involved in the computation of the ratio Ne/ρ and in the radiative acceleration due to Thomson scattering (footnote 1), through an expression in which XHe is the helium abundance by number, Ne is the density of free electrons, and mp is the proton mass (the meaning of the other symbols is detailed in Curé 2004, Sec. 2). Hence, when the helium abundance increases, the number of free electrons is reduced and then Γ decreases. As a consequence, the effective mass Meff = M*(1 − Γ) becomes larger in Eq. 7, and accordingly the effective gravity, meaning that the line-acceleration is now able to remove less material as stellar wind (i.e., the mass loss rate decreases). However, it is important to mention that this theoretical analysis is made upon the basis of the isolated alteration of the He/H ratio, and therefore this decrease in the resulting Ṁ does not imply any modification in the temperature or in any other stellar parameter. For this reason, it differs from the Ṁ ∝ XHe relationship found by Hamann et al. (1995), which was the result of a global analysis of the wind regime of WR stars. Thus, we decided to introduce an extra variable into the recipe of Eq. 1, based on a simple fit of Fig. 2a, log Ṁsc = log Ṁsc,0 − 0.559 × [(He/H)* − (He/H)0], (11) where (He/H)0 is the He/H ratio at the ZAMS (0.085 from Asplund et al. 2009). We see from Table 1 that (He/H)* may grow up to ∼0.3−0.45 at the end of the self-consistent regime, thus leading to a reduction of the mass loss rate of around ∼30−40%. However, as in Paper I, this modification applies only when the self-consistent mass loss rate is adopted in the evolutionary tracks, for Teff ≥ 30 kK and log g ≥ 3.2, and hence the magnitude of the decreasing factor in Ṁ prior to the end of the MS, when the hydrogen abundance at the surface is X = 0.3, remains unknown.
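As a small arithmetic illustration of the helium correction above (a sketch only; the 0.559 slope and the ZAMS ratio of 0.085 are taken directly from the fit quoted in the text, and the sample (He/H)* values simply span the range mentioned there):

```python
# Reduction of the self-consistent mass loss rate due to surface He enrichment,
# following  log Mdot_sc = log Mdot_sc,0 - 0.559 * ((He/H)* - (He/H)_0).
HE_H_ZAMS = 0.085

def he_correction_factor(he_h_surface):
    return 10.0 ** (-0.559 * (he_h_surface - HE_H_ZAMS))

for he_h in (0.085, 0.30, 0.45):
    f = he_correction_factor(he_h)
    print(f"(He/H)* = {he_h:.3f}  ->  Mdot_sc / Mdot_sc,0 = {f:.2f}  ({(1 - f) * 100:.0f}% reduction)")
# Yields reductions of roughly 24% and 37% for the upper He/H range quoted in the text.
```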
Finally, details of the formulae used for both old and new evolution models are summarised in Table 2.
Results
Evolutionary models with rotation, both original and self-consistent, are shown in Fig. 3 in the HR diagram, whereas the rotation and equatorial velocities for tracks adopting both mass loss recipes are shown in Fig. 4. Additional plots, such as the comparison with the self-consistent non-rotating models from Paper I, the evolutionary tracks in the log g vs. Teff plane, and the evolution of the mass loss rate, stellar mass and radii, He/H ratio, Eddington factor, and convective core mass, are shown in Appendix A. Also, hereafter all plots comparing the new and old recipes of the mass loss rate will use solid lines for models with Ṁsc and dashed lines for models with ṀVink. The major features of the stellar models at the end of the MS, comparing the self-consistent tracks with the tracks adopting Vink's formula (Ekström et al. 2012; Eggenberger et al. 2021), are tabulated in Table 3.
Overall, compared with Paper I (see Table 8 from Gormaz-Matamala et al. 2022b), we find that the lifetime during the main sequence is ∼18−22% longer than for the non-rotating models, regardless of the adopted mass loss recipe. The difference with the non-rotating models is also appreciated when Ṁsc is adopted. Moreover, besides the straightforward differences in the final masses and the final radii, we also observe that the less strong winds from the self-consistent prescription predict in general weaker surface enrichments of helium and nitrogen, as seen in the last columns of Table 3 and in the evolution of the He/H ratio (Fig. A.6). The only noticeable exception is the model with Mzams = 25 M⊙ and Z = 0.006, described in detail in the subsection below.
(Footnote 1: Here, Γ is the Eddington factor when only the free electron scattering opacity is accounted for, i.e., the so-called classical Eddington factor.)
Concerning rotation, the self-consistent evolutionary tracks have higher velocities for all models, as can be appreciated in Fig. 4, where we compute the equatorial velocities and Ω = veq/vcrit as a function of the core hydrogen abundance from the ZAMS (starting with Xcore ∼ 0.72) until the end of the main sequence. As in Paper I, the largest impact of the new self-consistent mass loss rates is for the higher initial mass stars: the weaker Ṁsc produces stars that rotate faster. Even though the rotational velocity drops at the TAMS for all models, this braking is slower than the one expected for models with the old mass loss recipe. This weaker braking is more prominent at the low metallicity, where the stars adopting Ṁsc keep their rotational velocity almost constant for a longer time before the final deceleration. This implies that, in our regime of metallicities (between Z = 0.006 and Z = 0.014), mass loss is still a prominent process that impacts the evolution of the rotational velocity. The evolution of the rotational velocities is explained by the balance between the transport of angular momentum from the convective core (which is contracting during the main sequence stage) to the stellar surface, and the removal of the outer layers due to mass loss (Ekström et al. 2008; de Mink et al. 2013). At extremely low metallicities (Z = 0.0002 = 1/35 Z⊙), massive stars increase their vrot because their mass losses are not strong enough to counteract the transport of angular momentum from the contracting core (Szécsi et al. 2015). Also, for Z = 0.006, we see that Ω, which is set initially to 0.4, increases up to a constant peak of ∼0.55 before the final decrease, as a direct consequence of the initial constancy of the equatorial velocity and the decrease in the evolution of the critical velocity (Eq. 5). Weaker winds predict a nearly exponential increase in the stellar radii for all initial masses and both metallicities (Fig. A.5), whereas both the stellar mass and the Eddington factor show a linear and metallicity-dependent difference with respect to the models computed with Vink's formula (Fig. A.4 and Fig. A.7). Therefore, because they are weaker, self-consistent winds make our evolution models approach closer to the break-up limit for the rotational velocity, in agreement with Ekström et al. (2008), prior to the final drop in vrot at the end of the main sequence, in agreement with Szécsi et al. (2015). Now, let us discuss some more detailed effects of the use of the new mass loss rate for the different initial mass models considered in the present work.
Case M_zams = 120 M⊙
This is the mass range where we find the very massive stars (VMS; Yusof et al. 2013), whose properties and evolution have aroused great interest in recent years (Higgins et al. 2022; Sabhahit et al. 2022). Here, the tracks with Z = 0.006 have a special relevance, because the most studied cluster containing VMSs is 30 Dor, located in the LMC (Crowther et al. 2010; Bestenlehner et al. 2014). As in Yusof et al. (2013), we consider that the star becomes a Wolf-Rayet star (i.e., its wind changes from optically thin to optically thick) when its surface hydrogen abundance drops below 0.3 and T_eff > 10 kK. After that point, the mass loss rate is assumed to follow the recipe from Gräfener & Hamann (2008) for WR winds, generating the discontinuity in Ṁ after the black cross observed in Fig. A.3. Notice that, prior to the change in the recipe, the evolution of the self-consistent mass loss rate proceeds as expected from the large increase in the He/H ratio (Fig. A.6), as a consequence of Eq. 11. Due to the extreme strength of the winds, the VMS is depleted of 70% of its surface hydrogen even before H is completely exhausted in its core, as seen in Fig. 6a, generating stars with spectral type WNh (Hamann et al. 2006; Martins & Palacios 2022). This switch, not only in the Ṁ prescription but also in the wind regime (from optically thin to optically thick), is also evidenced in Fig. A.5, where the radius needs to be recalculated after the black crosses based on the wind opacity (see, e.g., Meynet & Maeder 2005) due to the absence of a well-defined photosphere (Schaller et al. 1992).
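As a schematic illustration of this switching logic only (not the actual Genec implementation), the criterion quoted above can be written as a simple selector; the two recipe arguments are placeholder callables, not the published formulae.

```python
def mass_loss_recipe(x_h_surface, t_eff, mdot_thin, mdot_wr):
    """Select which mass-loss recipe applies, following the WR switch
    criterion quoted in the text (schematic sketch; placeholder callables).

    x_h_surface : surface hydrogen mass fraction
    t_eff       : effective temperature in kelvin
    mdot_thin   : callable returning the optically thin rate
                  (self-consistent m-CAK or Vink-like, as assumed here)
    mdot_wr     : callable returning the optically thick WR rate
                  (Graefener & Hamann 2008 in the models discussed above)
    """
    is_wr = (x_h_surface < 0.3) and (t_eff > 10_000.0)  # WNh/WR regime
    return mdot_wr() if is_wr else mdot_thin()
```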
The most remarkable difference between the classical and self-consistent tracks in Fig. 3 is the drift 'redwards' (i.e., towards the cool range of temperatures in the HRD) that the self-consistent models undergo from the beginning of the main sequence. For both metallicities, stars with Ṁ_sc end their MS lifetime as O-type stars at T_eff ∼ 40 kK, before suddenly increasing their temperature and becoming WNh stars. We note that the model using the weaker self-consistent Ṁ shows in general a slightly weaker surface enrichment than the original model. This explains why these models evolve to redder regions of the HRD during the MS phase. It is a well-known effect that a star with a nearly homogeneous internal chemical composition keeps blue colours in the HRD (Maeder 1987). A homogeneous chemical composition can result from very strong internal mixing, from very strong mass losses, or from both processes. In contrast, an internally non-homogeneous composition will produce redwards evolution in the HRD. The extension to the red is favoured when a larger convective core is present and the radiative envelope is not too efficiently mixed; otherwise we would approach the situation of a homogeneous internal composition and the track would remain in the blue region of the HRD (Martinet et al. 2021). Here we see that reducing the mass loss favours redwards tracks. In Fig. 5, we show the diffusion coefficients in the new and old mass loss rate models in the middle of the MS phase. For the case of 120 M⊙ and Z = 0.014, the model with the lower mass loss rate has a more massive convective core at that stage, even though it represents a slightly smaller fraction of the total mass: ∼84% for the self-consistent model, against ∼87% for the old one. This is a result of the larger difference in the mass retained and also of the possibly higher value of the effective diffusion coefficient, D_eff, just above the convective core. In the rest of the radiative envelope, there is not much difference between the values of D_eff. Since the diffusion timescale varies as R²/D_eff, and since the radius of the self-consistent model is larger, the self-consistent model will take a longer time to reach a given level of surface enrichment than the original one. This imposes redder evolutionary tracks for the model with Ṁ_sc. Figure 6a compares the chemical composition of the self-consistent and original models at the end of the MS phase. As expected, the model with the lower mass loss rate shows the greatest contrast between the surface composition and that of the core. Figure 7a shows the variation of the specific angular momentum j_r = r²ω inside the 120 M⊙ model at the beginning and end of the MS phase for the two different mass loss prescriptions. Let us first recall that, in the absence of any angular momentum transport, j_r remains constant. The evolution that we see between the beginning and end of the core H-burning phase (a global decrease of j_r in the whole star) is due to the fact that the transport processes in general carry angular momentum from the inner to the outer layers of the star. At the surface the angular momentum is removed by the stellar winds; thus we see that when the mass loss rates are smaller, j_r is larger. Indeed, by decreasing the mass loss rate by about a factor of three during the main sequence (compared to Vink's formula), we obtain that the specific angular momentum is shifted upwards by 0.3 dex when Ṁ_sc is used, whereas the angular velocity of the core is increased by a factor of ∼2 (see Fig. 7b).
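A toy calculation may help to illustrate the direction of this effect. The sketch below assumes rigid surface rotation, a fixed radius and moment of inertia, and a spherically symmetric wind removing the specific angular momentum of a thin shell, (2/3)ΩR²; the stellar parameters are hypothetical, and the numbers are only meant to show that a wind a factor of ∼3 weaker leaves the star rotating noticeably faster at the end of the MS.

```python
# Toy spin-down under wind angular-momentum loss (illustration only; the
# Genec models solve the full internal angular-momentum transport problem).
# Assumptions: rigid surface rotation, constant radius and moment of
# inertia, and a wind carrying j_wind = (2/3) * Omega * R**2 per unit mass.
M_SUN = 1.989e33          # g
R_SUN = 6.957e10          # cm
YEAR = 3.156e7            # s
M = 120.0 * M_SUN         # hypothetical 120 Msun star
R = 20.0 * R_SUN          # assumed radius
I = 0.05 * M * R**2       # assumed moment of inertia
OMEGA0 = 1.0e-5           # assumed initial angular velocity [rad/s]
TAU_MS = 2.5e6 * YEAR     # assumed ~2.5 Myr main-sequence lifetime

def spin_down(mdot_g_per_s, n_steps=20_000):
    """Integrate dJ/dt = -(2/3) * Mdot * Omega * R^2 with J = I * Omega."""
    dt = TAU_MS / n_steps
    omega = OMEGA0
    for _ in range(n_steps):
        omega -= (2.0 / 3.0) * mdot_g_per_s * omega * R**2 / I * dt
    return omega

mdot_vink = 1.0e-5 * M_SUN / YEAR        # assumed 1e-5 Msun/yr
for label, mdot in (("Vink-like", mdot_vink), ("self-consistent", mdot_vink / 3.0)):
    print(f"{label:15s}: final Omega / initial Omega = {spin_down(mdot) / OMEGA0:.2f}")
```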
Finally, we notice that for both metallicities our rotating models reach the WNh stage at T_eff ∼ 40 kK. Since we know that stars will increase their temperature through the WR stage, this means that rotating models with M_zams = 120 M⊙ do not cross the temperature regime around ∼25 kK during their MS phase, where the so-called bi-stability jump (Vink et al. 1999) takes place. For this reason, and contrary to the non-rotating models from Paper I, we do not observe LBV behaviour such as eruptive mass losses in this mass range. The fact that only initially slowly rotating models can go through an eruptive mass loss regime at the end of the main sequence supports the idea that the LBV phenomenon is very rare for very massive stars (Gräfener 2021).
Case M_zams = 70 M⊙
As before, besides the differences in the stellar mass (Fig. A.4), we also observe a bluewards-redwards divergence of the evolution across the HR diagram (Fig. 3). However, unlike the previous case, this time the beginning of the WR-wind stage coincides with the end of H-core burning, which can be appreciated in Fig. 6b, where hydrogen is depleted in the core at the end of both (standard and self-consistent) tracks. Again we see the change in the slope of the evolution of the mass loss rate (Fig. A.3), this time with a flattening in Ṁ prior to reaching the second black dot, associated with the increase of the He/H ratio (Fig. A.6).
Concerning rotation, the braking of the equatorial velocity only becomes significant after the star has burnt almost half of the hydrogen in its core: X_c ∼ 0.38 for Z = 0.014 and X_c ∼ 0.36 for Z = 0.006, which also corresponds to the peak of Ω ∼ 0.42 for Galactic metallicity and Ω ∼ 0.54 for Z = 0.006 (Fig. 4). This is connected, once more, with the retention of more specific angular momentum in the models adopting Ṁ_sc, and with the larger stellar radii (Fig. A.5). Indeed, the switch in the wind regime from Ṁ_sc to Ṁ_Vink (marked by an abrupt increase in the mass loss rate after the second black dot of Fig. A.3) carries an important drop in the rotational velocity, which creates some numerical instability but does not affect the general trend in the evolution of v_rot observed for all the rotating models.
From Fig. 6b, we see that the variation of the element abundances across the stellar structure, from core to surface, is more prominent for the model with Ṁ_sc, which is evidence of the less efficient rotational mixing for the weaker wind. However, contrary to the case with M_zams = 120 M⊙, where the inner structure was almost fully homogeneous, there are now remarkable distinctions between core and surface abundances, because the mass loss has been less efficient in reducing the mass of the envelope and revealing the inner layers.
The redward drift of the evolutionary tracks for the reduced mass loss rate models is in line with the results found by Björklund et al. (2022), who implemented a mass loss recipe derived from the theoretical wind prescription of Björklund et al. (2021), where Ṁ is also ∼3 times below the values coming from Vink's formula. In their study, they used Mesa to trace the evolutionary track of a star with M_zams = 60 M⊙, v_rot,ini = 350 km s⁻¹, and solar metallicity, and they found the bluewards turn at T_eff ∼ 26 kK (see their Fig. 6). On our side, by visual inspection of Fig. 3 and Fig. A.1, we estimate that the bluewards turning point for a self-consistent model with 60 M⊙ and Z = 0.014 must lie between ∼29–31 kK, i.e., around ∼5 kK hotter than found by Björklund et al. (2022). Such a discrepancy is relatively minor considering the differences between Genec and Mesa (see the comparison of models from these two codes by Agrawal et al. 2022). Therefore, we can conclude that the self-consistent m-CAK prescription for stellar winds predicts evolutionary tracks in fair agreement with the recent study of Björklund et al. (2022), as expected given that both studies are based on mass loss rates of the same order of magnitude.
Cases M_zams = 25 and 40 M⊙
The effects of the self-consistent mass loss rates on the evolution of stars within these mass ranges are less pronounced, as seen in Fig. 3 and in the resulting final masses and radii from Table 3. We also observe a 'redder' evolution for the models adopting Ṁ_sc, but still 'bluer' compared with the non-rotating cases (Fig. A.1). Differences in the rotational velocities are also less remarkable, though they still follow the same trend as the more massive models. For the case of Z = 0.006 (LMC), we observe that the end of H-burning is reached inside the region of validity of the self-consistent tracks (Fig. A.2), and therefore the second dot overlaps again with the cross. Because we observed this situation only for Z = 0.002 (SMC) in the non-rotating cases from Paper I, it implies that the evolution models adopting the m-CAK wind prescription cover a broader range of the main sequence when rotation is incorporated. The impact of the different mass losses on the chemical structures of the 25 and 40 M⊙ stellar models can be seen in Fig. 8. The convective cores are 8.5 and 18.5 M⊙ for the 25 and 40 M⊙ stars, respectively, when the high mass loss rates are used (Fig. A.8). The composition in the radiative envelope shows some significant differences between the two models. This is particularly visible for the 40 M⊙ star, where in the low mass loss rate model a convective region is associated with the H-burning shell (between about 17 and 25 M⊙ in mass coordinate), while no convective zone is associated with that shell in the high mass loss rate model. We see, therefore, that reducing the mass of the envelope disfavours the formation of an intermediate convective shell associated with the H-burning shell.
The specific angular momentum inside our model of 25 M⊙ is shown in Fig. 9a. We see along the curves small abrupt variations (for instance around 9 M⊙ at the end of the MS phase). These come from regions where there is a transition between a convective and a radiative zone, where the chemical composition changes, also observed as an abrupt jump in the inner angular velocity (Fig. 9b). As for the 120 M⊙ star, the model computed with the self-consistent mass loss rates ends the MS phase with slightly larger values of the specific angular momentum. The increase is less strong, since the mass loss rates are in any case weaker when the initial mass decreases. The changes in the interior angular velocity distribution are also much weaker than for the 120 M⊙ model. We note, as expected, a slightly faster rotating core in the self-consistent stellar model compared to the original one.
Fig. 9: Same as Fig. 7, but for our model with M_zams = 25 M⊙ and Z = 0.014.
Comparison with rotational surveys
Because the self-consistent m-CAK prescription predicts weaker winds than previous studies, stellar models computed with such a prescription evolve at higher luminosities and reach larger radii during the MS phase than the models of Ekström et al. (2012) or Eggenberger et al. (2021), as already detailed in Section 4. Besides, evolutionary tracks adopting Ṁ_sc also predict that the rotational velocity during the main sequence will be higher, as seen in Fig. 4, implying a weaker braking of the equatorial velocity of massive stars. This weaker braking is also represented in Fig. 10, where we plot rotational velocities as a function of the effective temperature for evolutionary tracks adopting old and new winds, and where timesteps of 0.5 Myr are illustrated by the respective coloured circles. Velocities from evolution models with Ṁ_Vink quickly decrease after the first ∼1–2 Myr, passing from small variations in temperature for M_zams = 120 M⊙ to moderate variations for M_zams = 25 M⊙. On the contrary, the v_rot from self-consistent evolutionary tracks remains relatively constant at the beginning of the main sequence for all tracks, on timescales inversely proportional to the initial masses (from ∼1.0 Myr for M_zams = 120 M⊙ to ∼6.0 Myr for M_zams = 25 M⊙), whereas T_eff decreases by ∼8 kK for each model prior to the final braking.
In parallel, we plot in Fig. 10 the sample of 285 O-type stars taken from the recent survey of Holgado et al. (2022), whose catalogue is available online. In their study, they revisited the rotational properties of 285 Galactic massive O-type stars as part of the IACOB project (Simón-Díaz et al. 2015; Holgado et al. 2018, 2020). This survey covers a range of stellar masses from 15 to 80 solar masses and a mixture of ages within the main sequence stage. Besides, they compared the rotation of these stars with the state-of-the-art evolution models of Brott et al. (2011) and Ekström et al. (2012), finding that neither of the two sets of rotating evolutionary tracks was able to reproduce the features of the survey.
On the one hand, models from Brott et al. (2011) cannot reproduce the existence of stars with v_rot below ∼150 km s⁻¹ across the entire domain of O-type stars (see Fig. 9 of Holgado et al. 2022). On the other hand, models from Ekström et al. (2012) cannot adequately reproduce the scarcity of stars with v_rot above ∼75 km s⁻¹ on the left side of Fig. 10, which is also appreciated from the lack of stars matching the old tracks with 70 and 120 M⊙.
In contrast, in Fig. 10 we observe that the new self-consistent evolutionary tracks can adequately explain both issues. Tracks from 25 to 70 M⊙ match the set of stars with rotational velocities below 150 km s⁻¹ in the range of temperatures from 27 to 40 kK. Also, the empty region of stars with T_eff ≥ 42.5 kK and v_rot ≥ 75 km s⁻¹ would only be populated by our 120 M⊙ model, which is outside the mass range of the sample. Therefore, evolution models adopting weaker winds better reproduce the rotational properties of the survey of Holgado et al. (2022), while keeping the initial Ω = 0.4 and without the need to decrease the initial equatorial velocity of our models. Nonetheless, despite this encouraging result, there are still important aspects of the rotational properties of Galactic O-type stars that need to be taken into account. For instance, although our track with M_init = 25 M⊙ can encircle a larger fraction of stars in the top-right side of Fig. 10, there are still fast rotators (v_rot > 300 km s⁻¹) outside the scope of any track, which could only be explained by binary interaction effects (de Mink et al. 2013; Wang et al. 2020). Moreover, the slow braking observed for the self-consistent evolutionary tracks indicates that stars born with masses between 25 and 70 M⊙ should spend ∼75% of their main sequence lifetime with v_rot between ∼250 km s⁻¹ and ∼330 km s⁻¹. As a consequence, we should expect to find a larger fraction of O-type stars in this range of rotational velocities. However, because of its proximity to the ZAMS, this range also covers an important portion of the region where there is a lack of empirically detected O-type stars, as shown in Fig. 11 and described by Holgado et al. (2020). For that reason, it is not surprising to find only a moderate number of stars in the range from 250 to 330 km s⁻¹ and temperatures between 30 and 42.5 kK. Therefore, even though our evolution models adopting self-consistent winds represent a relevant upgrade, there are still important challenges in the study of O-type stars to deal with, such as the expansion of current samples to more obscured regions by means of infrared observations.
Implications of evolution models with self-consistent winds
The evolutionary tracks across the HRD from our rotating models adopting self-consistent winds exhibit considerable contrasts with the old models adopting Vink's formula, leading to a better description of the rotational properties found in the most recent observational diagnostics. In this Section we move one step further and explore some of the implications that the incorporation of these self-consistent models has on Galactic scales. We remark, however, that these implications are discussed here at an introductory level, and their respective conclusions are part of forthcoming studies.

Figure 12 shows how the new mass loss rates impact the quantity of 26Al ejected by the stellar winds of our 120 M⊙ models computed with the two mass loss rate prescriptions. Compared to the models discussed in Paper I for the non-rotating case, the current rotating models allow 26Al to appear at the surface well before the layers enriched in 26Al by nuclear burning are uncovered by the stellar winds. This is the effect of the rotational mixing, which allows the 26Al produced in the core to diffuse to the surface. Thus we see that, as in Paper I, reducing the mass loss rate reduces the maximum value of the 26Al abundance reached at the surface. This maximum also occurs at a more advanced age for the model with the self-consistent wind, even though the peak of production of 26Al in the core is found at almost the same age for both (old and new) wind prescriptions. Such a time difference between the peak abundance at the centre and at the surface is easily explained by the reduction of the mass loss rate, which makes the star take more time to remove its outer layers and expose its inner composition, and by the less intense mixing, as previously evidenced in Fig. 6a. The total amount of 26Al released from the stellar surface to the interstellar medium during the MS for each of our wind prescriptions, together with the previous results from Paper I, is tabulated in Table 4. Although the total amount of aluminium-26 ejected by the old winds is almost the same regardless of whether rotation is considered, for self-consistent winds the rotating model predicts the ejection of a fraction of the isotope ∼8 times smaller during the MS, compared to the rotating evolution model adopting Vink's formula. Such a difference is related not only to the mass loss due to the line-driven mechanism: whereas non-rotating models predict eruptive processes associated with LBVs (Fig. 7 from Paper I), rotating models with M_zams = 120 M⊙ never reach those magnitudes of Ṁ (Fig. A.3). Therefore, the contribution of 26Al to the interstellar medium predicted by self-consistent winds is even weaker (compared with models adopting Vink's formula) for rotating models than for non-rotating models.
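As a minimal sketch of the quantity tabulated in Table 4, the wind contribution can be written as the time integral of the surface 26Al mass fraction times the mass loss rate over the main-sequence (optically thin wind) phase; the arrays below are hypothetical placeholders, not the actual Genec output.

```python
import numpy as np

# Minimal sketch of the quantity tabulated in Table 4: the 26Al mass
# carried away by the (optically thin) MS wind, obtained by integrating
# the surface 26Al mass fraction against the mass loss rate over time.
# All arrays below are hypothetical placeholders for the model output.
t = np.linspace(0.0, 2.5e6, 500)               # age grid [yr], assumed
mdot = np.full_like(t, 3.0e-6)                 # assumed Mdot [Msun/yr]
x26 = 1.0e-7 * np.clip((t - 5.0e5) / 2.0e6, 0.0, None)  # toy surface 26Al

integrand = x26 * mdot                         # Msun of 26Al per yr
m26_ejected = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))
print(f"26Al ejected over the MS wind phase ~ {m26_ejected:.2e} Msun")
```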
However, it is important to remark that Fig. 12 covers only the part of the star's lifetime during which the wind is optically thin, thus excluding the optically thick winds of the WNh (H-core burning) and WR (He-core burning) phases. In contrast, the study of Martinet et al. (2022) calculated the evolution of 26Al for evolution models with masses from 12 to 300 M⊙ and metallicities from Z = 0.002 to Z = 0.020, but without making any distinction between thin and thick wind regimes. Instead, they distinguished between H-core and He-core burning stages, showing that the production of 26Al abruptly decreases when the star enters He-burning. As a consequence, even though here we select only one mass and one metallicity for a quick comparison with the more complete analysis of Martinet et al. (2022), we infer that the larger contribution of 26Al from very massive stars to the ISM must stem from the later stages exhibiting optically thick winds, such as H-core burning WNh stars, in order to explain the current estimations of the total 26Al abundance in the Milky Way (Knödlseder 1999; Diehl et al. 2006; Wang et al. 2009; Pleintinger et al. 2019). Certainly, we need an extensive study implementing self-consistent evolution models for wider ranges of masses and metallicities, together with updates in the convection criteria (Georgy et al. 2014; Kaiser et al. 2020) and overshooting values (Martinet et al. 2021), in order to have a more complete analysis.
Massive stars at the Galactic Centre
The implications of the new evolution models for massive stars with self-consistent mass loss rates are broad, not only across the main sequence but also for the subsequent evolutionary stages. One such consequence is that using models computed with the weaker self-consistent mass loss rates may actually provide estimates of evolutionary masses that are lower than when models with Ṁ from Vink et al. (2001) are adopted.
An example of this situation is the study of the Ofpe stars located at the Galactic Centre. Massive stars at the Galactic Centre play an important role in feeding the supermassive black hole Sgr A* by means of their stellar winds (Cuadra et al. 2015; Ressler et al. 2018; Calderón et al. 2020b). This follows from the remarkable fact that OB and WR stars represent a large fraction of the stars at the Galactic Centre (e.g., Lu et al. 2013; von Fellenberg et al. 2022), even though this is a region thought to be hostile to star formation (e.g., Genzel et al. 2010). The wind properties of these massive stars have been analysed and constrained by Martins et al. (2007) for the O-type and WR stars, and by Habibi et al. (2017) for the B-type stars. Martins et al. (2007) calculated the stellar and wind parameters for a set of stars at evolved stages, such as Ofpe and WN, which are particularly relevant for feeding Sgr A*. Their analysis was performed by spectral fitting using the CMFGEN code and the evolutionary tracks from Meynet & Maeder (2005). Therefore, it is important to check whether the state-of-the-art evolutionary tracks suggest a revision of the properties of the massive stars at the Galactic Centre, and what consequences these modifications imply for the accretion onto Sgr A*. To illustrate the issue, Fig. 13 shows the hydrogen abundance at the stellar surface as a function of luminosity for standard and self-consistent evolutionary tracks, plus five of the Ofpe stars analysed by Martins et al. (2007, compare with their Fig. 21). From Fig. 13 we can see that, for a given hydrogen surface abundance and for a given initial mass, the tracks computed with the weaker self-consistent mass-loss rates are overluminous. These tracks also end the MS phase with higher surface hydrogen abundances. We notice that the lowest luminosity Ofpe star in the sample cannot be reproduced by either of the two families of tracks. The other four stars can reasonably be fitted by both types of models. On average, though, the initial masses deduced from comparison with the self-consistent mass-loss rate tracks are smaller than the masses deduced from tracks computed with Vink's mass loss rates.
Given that the observed Ofpe stars in the Galactic Centre would have lower initial masses, we can speculate that their wind parameters (terminal velocity and mass loss rate) will be lower as well. The terminal velocity is an important parameter in this context, since the Galactic Centre is a region of large stellar density where the winds collide with each other, creating a high-temperature medium. However, slower winds (≲600 km/s) produce a plasma of lower temperature (≲4 × 10⁶ K) at the collision, which becomes susceptible to hydrodynamical instabilities and ends up forming high-density clumps and streams. These clumps can then be captured by the central black hole, increasing its accretion rate (Cuadra et al. 2005, 2008; Calderón et al. 2016, 2020a,b).
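An order-of-magnitude check of the quoted temperature can be made with the strong-shock relation kT ≈ (3/16) μ m_H v_wind², assuming a mean molecular weight μ ≈ 0.6 for ionised material (both the relation and μ are assumptions for this estimate):

```python
# Order-of-magnitude check of the post-shock temperature quoted above,
# using the strong-shock relation kT = (3/16) * mu * m_H * v_wind^2
# (assumed here; mu ~ 0.6 for ionised, roughly solar-composition gas).
K_B = 1.380649e-16   # erg/K
M_H = 1.6726e-24     # g
MU = 0.6             # assumed mean molecular weight

def shock_temperature(v_wind_km_s):
    v = v_wind_km_s * 1.0e5          # km/s -> cm/s
    return 3.0 * MU * M_H * v**2 / (16.0 * K_B)

for v in (600.0, 1000.0):
    print(f"v_wind = {v:6.0f} km/s  ->  T_shock ~ {shock_temperature(v):.1e} K")
```

For a 600 km/s wind this gives a few ×10⁶ K, consistent with the value quoted in the text, while faster winds yield proportionally (∝ v²) hotter, less clump-prone plasma.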
In a forthcoming paper we will perform a more complete analysis of the Galactic Centre massive star population, extending the tracks to the WR stage and taking into account the non-standard chemical abundances deduced for this region. From the observational side, we will also include data collected after Martins et al. (2007), which is expected to afford a reduction of the typical error bars on the stellar luminosities from 0.2 down to 0.1 dex (S. von Fellenberg, priv. comm.). The comparison between models and observations will allow us to update the estimates of the initial masses and ages of this population, as well as its wind parameters, which have a large influence on the current accretion onto Sgr A*.
Conclusion
We have extended the stellar evolution models of Gormaz-Matamala et al. (2022b, Paper I), which adopt the self-consistent m-CAK prescription (Gormaz-Matamala et al. 2019, 2022a) for the mass loss rate recipe (Eq. 1), by including rotational effects. Stellar rotation affects the mass loss of a massive star not only by changing the balance between gravitational, radiative and centrifugal forces (Maeder & Meynet 2000), but also because rotational mixing considerably modifies the internal distribution of the chemical elements and thus impacts the evolutionary tracks in the HRD and hence the mass loss rates. The progressive increase of the helium-to-hydrogen ratio in the wind impacts the line-force parameters (k, α, δ) and therefore the self-consistent solution of the equation of motion, leading to a decrease of 0.2 dex in the absolute value of the mass loss rate (Eq. 11).
The updated mass loss rates are implemented in a set of evolutionary models with different initial stellar masses, for metallicities Z = 0.014 (Galactic) and Z = 0.006 (LMC). The new tracks show important differences with respect to the studies of Ekström et al. (2012) and Eggenberger et al. (2021), who adopted the formula from Vink et al. (2001) for the winds of massive stars, but they are in fair agreement with the studies of Sabhahit et al. (2022) for VMS, and with Björklund et al. (2022), who used their own self-consistent wind prescription. Besides the differences in the tracks across the HRD, already remarked in Paper I, for rotating evolutionary tracks we find, as expected, that the surface rotation maintains a higher value when the weaker self-consistent mass loss rates are adopted. We observe that, for the initial rotation considered here, tracks computed with the self-consistent mass loss rates extend further into the red part of the HR diagram during the MS phase. Such an effect is more important for masses above about 40 M⊙.
The new tracks predict an evolution of the surface rotational velocities of O-type stars that, at first sight, appears to be in better agreement with the most recent observational diagnostics. The slow braking in the evolution of v_rot better explains the rotational properties of the survey of Holgado et al. (2022) for O-type stars, such as the lack of fast rotators for stars with T_eff ≳ 42.5 kK and the abundance of stars with v_rot ≤ 150 km s⁻¹ and temperatures between 27 and 40 kK. Besides, the implications of these new evolution models are wide. For example, a lower mass loss rate predicts a less important stellar wind contribution to the 26Al enrichment of the ISM during the main sequence phase, at least while the stellar wind is optically thin. Likewise, the fact that self-consistent models are more luminous in the HRD suggests that the initial masses deduced from evolutionary tracks might be lower when the self-consistent tracks are used. This may apply to the mass estimates of the Ofpe and WN stars at the Galactic Centre, which leads us to expect that their wind properties (mass loss rate and terminal velocity) might also have been overestimated. Nevertheless, a more accurate analysis of the stars surrounding Sgr A* and their stellar winds is deferred to a forthcoming paper.
Fig. 1: Evolutionary tracks across the HRD for models with rotation.
Fig. 2: Differences in the resulting self-consistent mass loss rate Δ log Ṁ_sc for the abundances tabulated in Table 1: (a) due to the modification of the He/H ratio with respect to the default He/H = 0.085; (b) due to the modification of any of the CNO elements with respect to ξ/ξ⊙ = 1.0 (with ξ being carbon, nitrogen or oxygen, depending on the respective symbol).
Fig. 4: Evolution of surface equatorial velocities and rotation parameter Ω as a function of the mass fraction of hydrogen at the core, which decreases from X_core ∼ 0.72 at the ZAMS to X_core ∼ 0.05 at the end of the H-core burning stage. The symbols (dots and crosses) have the same meaning as indicated in the caption of Fig. 3.
Fig. 5: Diffusion coefficients for convective turbulence (D_conv), shear turbulence (D_sh) and effective turbulence (D_eff) as a function of the Lagrangian mass coordinate, for our evolution models with M_zams = 120 M⊙ and Z = 0.014, adopting old and new winds, at an intermediate point of the main sequence stage (X_core = 0.3). Vertical grey lines represent the total mass of both models. The meaning of the solid and dashed lines is the same as in Fig. 3.
Fig. 6: Variation, on a logarithmic scale, of the mass fraction of hydrogen, helium and CNO elements as a function of the Lagrangian mass coordinate in solar units inside (a) the 120 M⊙ model and (b) the 70 M⊙ model, for the two mass loss prescriptions studied in the present work, at the end of the main sequence stage. The meaning of the different line styles is the same as in Fig. 3. Vertical grey lines represent the total masses of both models, which correspond to the final masses at the end of the MS tabulated in Table 3.
Fig. 7: Angular properties of the inner structure of our 120 M⊙ model on the ZAMS and at the end of the MS phase, for the two mass loss prescriptions studied in the present work. (a) Variation, on a logarithmic scale, of the specific angular momentum as a function of the Lagrangian mass coordinate in solar units. (b) Angular velocity, in radians per second, also as a function of the Lagrangian mass coordinate.
Fig. 10: Rotational velocities as a function of the effective temperature, for evolutionary tracks adopting old (dashed) and new (solid) winds. The coloured dots represent intervals of age with a step of 0.5 Myr. Grey dots represent the sample of O-type stars taken from the survey of Holgado et al. (2022).
Fig. 11: Spectroscopic Hertzsprung-Russell diagram (sHRD, with 𝓛 := T_eff⁴/g; Langer & Kudritzki 2014) for rotating evolutionary tracks adopting old (dashed) and new (solid) winds. Grey dots represent the sample of O-type stars taken from the survey of Holgado et al. (2022). The black line represents the ZAMS for all models, whereas the dark yellow line represents the region close to the ZAMS where no stars are found (Holgado et al. 2020).
Fig. 13: Surface hydrogen abundance (in mass fraction) as a function of the stellar luminosity for different evolutionary tracks. Continuous lines are models computed with the self-consistent mass-loss rate, while dashed lines are computed with Vink's original recipe. Magenta symbols are the Ofpe stars from the Galactic Centre as plotted by Martins et al. (2007, compare with their Fig. 21).
Table 2: Summary of the formulae implemented for old and new evolution models.
Table 3: Properties of the stellar models at the end of the main sequence.
Table 4: Values for the integration of the 26Al surface mass fractions for our evolution model with M_zams = 120 M⊙ and Z = 0.014 (see Fig. 12), compared with the results from Paper I (non-rotating tracks).
"Physics"
] |
Relic gravitational waves in verified inflationary models based on the generalized scalar–tensor gravity
In this work, we consider models of cosmological inflation based on generalized scalar–tensor theories of gravity with a quadratic connection between the Hubble parameter and the coupling function. For such a class of models, we discuss the correspondence between well-known versions of the scalar–tensor gravity theories and physically motivated potentials of a scalar field. It is shown that this class of models satisfies the Planck observational constraints on the cosmological perturbation parameters for an arbitrary potential of a scalar field and an arbitrary version of a scalar–tensor gravity theory. The spectrum of relic gravitational waves is analyzed, and the frequency range corresponding to the maximal energy density is determined. The possibility of direct detection of the relic gravitational waves predicted in such a class of models by satellite and ground-based detectors is discussed as well.
Introduction
At present, the explanation of the universe's evolution is based on various gravity theories, including general relativity (GR) and its modifications, together with different types of exotic matter [1–6]. These theories require the analysis of cosmological perturbations, which lead to the formation of the large-scale structure and to relic gravitational waves [7–11] in the framework of the inflationary paradigm.
Considering GR scalar field cosmology in the early universe, we note that the observational constraints on the values of the cosmological perturbation parameters, obtained from the anisotropy and polarization of the cosmic microwave background (CMB) [12,13], are directly connected with the shape of the scalar field potential [14,15]. It is possible to select a certain class of scalar field potentials corresponding to inflationary models and satisfying the observational constraints in GR scalar field cosmology. This procedure can be considered as an effective method of inflationary model verification [16–19].
On the other hand, inflationary models based on modified gravity theories allow us to consider arbitrary physically motivated potentials as relevant ones, since the spectra of cosmological perturbations depend not only on the form of the scalar field potential, but also on the chosen version of modified gravity [20,21]. Different methods for constructing and analyzing inflationary models based on the scalar–tensor gravity theories were described earlier in [22–27].
Nevertheless, we suggest a new approach for constructing phenomenologically correct models of cosmological inflation in modified gravity theories. This approach is based on satisfying the observational constraints not through the choice of the model parameters, but through certain relationships between these parameters.
Such an approach was proposed in [28] for Einstein–Gauss–Bonnet gravity, and in [29–32] for the scalar–tensor gravity theories. In [29–32], the connection H = λ√F between the Hubble parameter H(t) and the coupling function F(φ), which defines the class of the scalar–tensor gravity theory, was used to construct exact solutions for verified inflationary models with different types of cosmological dynamics. It is clear that the cosmological dynamic equations should be defined by the scale factor a(t) or the Hubble parameter H(t), the scalar field evolution φ(t), and the coupling function F(φ) if the potential V(φ) is given.
The motivation for using the proposed ansatz H = λ√F was discussed in [29–32]. Initially given relations between various parameters are often used to build physically correct cosmological models, both in the framework of general relativity and for modified theories of gravity (see, for example, [33–37]). In the case of the proposed quadratic relation between the Hubble parameter and the coupling function, the non-minimal coupling of the scalar field and curvature induces deviations from the purely exponential (de Sitter) expansion of the early universe, which corresponds to Einstein gravity. This approach makes it possible to construct quasi-de Sitter models of the early universe that satisfy observational constraints for various types of inflationary dynamics [29–32]. In [29], inflationary solutions for the power-law Hubble parameter were considered. In [30,31], such models were analyzed in the context of exponential power-law inflation, and exact solutions for cosmological models with a linear deviation from de Sitter expansion were obtained in [32]. We also note that all these inflationary models satisfy the observational constraints on the values of the parameters of cosmological perturbations [29–32].
In this work, we build on the described approach and determine a certain type of cosmological dynamics from the correspondence of these models to well-known scalar–tensor theories of gravity. A direct correspondence between the physical potentials of a scalar field and the well-known types of scalar–tensor gravity is obtained for the generalized scalar–tensor models. We also demonstrate that cosmological models with the quadratic connection H = λ√F between the Hubble parameter and the coupling function satisfy the observational constraints on the parameters of cosmological perturbations for an arbitrary inflationary scenario with a certain dynamic of accelerated expansion of the early universe.
We also determine the spectrum of relic gravitational waves predicted in the proposed models. In this case, we take into account the specifics of the post-inflationary evolution of the early universe, which implies the presence of an additional stage of the stiff energy domination. The presence of this stage between the end of inflation and the beginning of the radiation-dominated era affects the spectrum of relic gravitational waves and distinguishes the proposed models from standard inflationary scenarios.
Finally, taking into account the obtained spectrum of relic gravitational waves, we evaluate the possibility of direct verification of the proposed cosmological models through the observation of gravitational waves at high and low frequencies by existing and prospective detectors [38–44].
This paper is organized as follows. In Sect. 2, we consider the cosmological dynamic equations for the proposed models and consider satisfying the slow-roll conditions for arbitrary background parameters. In Sect. 3, we reconstruct and analyze a specific type of cosmological dynamics based on the correspondence of the type of scalar–tensor gravity to the Brans–Dicke theory. Section 4 investigates the correspondence between physically motivated potentials of the scalar field for the Brans–Dicke gravity and other well-known scalar–tensor gravity theories for the given dynamics of the accelerated expansion of the early universe. Section 5 shows the correspondence of the proposed cosmological models to any observational constraints (current and future) on the parameters of cosmological perturbations for arbitrary background parameters. We note that further measurement of the values of the parameters of cosmological perturbations leads to a refinement of the energy scale of inflation and the rate of accelerated expansion of the early universe only, and such refinement does not eliminate the possibility for verification of such models. Section 6 demonstrates that the models under consideration differ from standard inflationary models by the additional stiff energy-dominated stage, which leads to a significant discrepancy between these models in the spectrum of relic gravitational waves at high frequencies. In Sect. 7, we calculate the spectrum of relic gravitational waves in accordance with known observational constraints. The parameters of relic gravitational waves predicted by the proposed cosmological models are estimated. A summary of our investigations is presented in Sect. 8.
The inflationary dynamic equations
We start from a consideration of inflationary models based on the generalized scalar–tensor (GST) gravity theory described by the action (1) [29–32], where κ is the Einstein gravitational constant, g is the determinant of the spacetime metric g_μν, φ is a scalar field with the potential V = V(φ), ω(φ) and F(φ) are differentiable functions of φ, R is the Ricci scalar, and L_m is the matter Lagrangian. The case of vacuum spacetime corresponds to the absence of matter; therefore, the matter part of the action S_m should be equal to zero: S_m = 0. Further, we use a natural system of units where κ = 8πG = c = 1. The background dynamic equations in spatially flat four-dimensional Friedmann–Robertson–Walker (FRW) spacetime corresponding to the action (1) under the condition S_m = 0 in the chosen system of units are given by Eqs. (3)–(5) [29–32]. Here, an overdot represents a derivative with respect to the cosmic time t, H ≡ ȧ/a denotes the Hubble parameter, and F′ ≡ ∂F/∂φ. In addition, we note that the scalar field equation (5) can be derived from Eqs. (3)–(4). For this reason, Eqs. (3)–(4) completely describe the cosmological dynamics and can be represented in terms of the field φ as Eqs. (6)–(7). A number of cosmological models were considered earlier on the basis of the dynamic equations (3)–(5) with certain scalar field potentials V(φ) and coupling functions F(φ) (see, for example, [22,23,25–27]).
However, we are interested in inflationary models corresponding to observational constraints on the parameters of cosmological perturbations without restrictions on the shape of the potential V(φ) or on the parameters of the GST gravity theory, F(φ) and ω(φ). This is the cornerstone of the given investigation, which distinguishes our new approach from methods applied earlier.
To this end, we consider models with a quadratic connection between the Hubble parameter and the coupling function [29–32],

H = λ√F, (8)

where λ is a constant. The main equations of the quadratic connection models (QCMs), obtained by using the ansatz (8) in Eqs. (6)–(7), can be represented as Eqs. (9)–(11), together with the expression for the kinetic term. We call λ the energy scale parameter of these inflationary QCMs, since the constant λ² normalizes the values of the scalar field potential V(φ), its kinetic energy X(φ, φ̇), and the non-minimal coupling function F(φ).
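As a simple numerical illustration of the ansatz (8): once a coupling function F(φ) and a scalar-field history φ(t) are assumed, the Hubble parameter and the number of e-folds N = ∫H dt follow directly. The functions and numbers in the sketch below are illustrative choices, not those of the paper.

```python
import numpy as np

# Sketch of the ansatz H = lambda * sqrt(F): given an assumed coupling
# function F(phi) and an assumed scalar-field history phi(t), the Hubble
# parameter and the number of e-folds N = integral of H dt follow.
lam = 1.0e-5                       # energy scale parameter (assumed units)
t = np.linspace(0.0, 1.0e7, 2000)  # cosmic time grid (assumed units)

def F(phi):
    return 1.0 + 0.1 * phi**2      # assumed non-minimal coupling function

phi = 10.0 * np.exp(-t / 5.0e6)    # assumed slowly rolling field

H = lam * np.sqrt(F(phi))          # the quadratic-connection ansatz (8)
N = np.cumsum(0.5 * (H[1:] + H[:-1]) * np.diff(t))   # e-folds (trapezoid)
print(f"total e-folds over the interval: {N[-1]:.1f}")
```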
In addition, taking into account the definitions of the slow-roll parameters and the corresponding conditions on them (13), one can define the reduced potential v and the reduced kinetic energy u of a scalar field from Eqs. (10)–(11), as given by Eqs. (14)–(15). The functions v and u defined in this way are connected with the class of GST gravity theory, the characteristics of the scalar field, and the cosmological dynamics as well.
As one can see, the reduced kinetic energy u is expressed in terms of the second-order slow-roll parameters, while the reduced potential v contains the slow-roll parameters of the first order. Therefore, we have u ≪ v under the conditions (13). Thus, the general slow-roll condition X ≪ V is satisfied for these inflationary QCMs when ϵ ≪ 1 and δ ≪ 1. Considering inflationary QCMs, we have a five-parameter {V, H, F, ω, φ} cosmological model connected by the three Eqs. (9)–(11). To investigate such a model, one can consider the scalar field potential V = V(φ) and the Hubble parameter H = H(t) as a priori defined functions. This allows one to seek cosmological solutions of these models for any physical mechanism of realization of the inflationary scenario and to define the corresponding type of GST gravity theory {F(φ), ω(φ)} and the type of scalar field evolution φ(t) from Eqs. (9)–(11). The exact solutions for QCMs with different cosmological dynamics were considered earlier in [29–32].
However, one can also consider the inverse problem of determining the Hubble parameter H (t), the evolution of the scalar field φ(t), and its potential V (φ) for the chosen class of the GST gravity {F(φ), ω(φ)} on the basis of dynamic equations (9)-(11).
We will now consider the solution of this problem in a general form, passing from a particular case of the Brans-Dicke gravity to other types of GST gravity. Also, we will proceed from the necessity of correspondence between physically motivated potentials [18,19] and well-known GST gravity theories [22,23].
Reconstruction of cosmological dynamics
In the general case, one can consider models with arbitrary dynamics corresponding to the accelerated expansion of the universe, which can be defined by choosing the dependence δ = δ(ϵ) (or by directly setting the Hubble parameter H = H(t)), in order to define the parameters v and u as functions of cosmic time.
Nevertheless, we will define the type of cosmological dynamics in the inflationary models under consideration based on the correspondence of the action (1) to well-known GST gravity theories. For this purpose, we rewrite expressions (11)–(12) in a suitable form; further, on the basis of expressions (9) and (13), and using Eqs. (9) and (12), we obtain the corresponding relations. To define the relation δ/ϵ in explicit form, we consider the correspondence of the GST gravity action (1) to the Brans–Dicke gravity theory [22,23]. We also have the well-known restriction (20) on the Brans–Dicke parameter ω_BD for the scalar field φ.
Using expressions (18) and (19), Eq. (21) leads to the linear connection (22) between the slow-roll parameters, where k is a constant parameter. The restriction (20) on the Brans–Dicke gravity parameter ω_BD implies the constraint k < 1 on the parameter k. Therefore, the kinetic function ω(φ) can be defined by Eq. (23). Further, from the expression (22) and the definition of the slow-roll parameters (13), we find the Hubble parameter (24), where α is a constant. We call k the dynamic parameter of the inflationary models under consideration, since the constant k determines the rate of the accelerated expansion of the universe.
For the special case k = 1/2, the Hubble parameter corresponds to a double exponential expansion of the early universe.
For k = 0, the Hubble parameter corresponds to a linear deviation from the de Sitter model. For k = 0 and α = 0, we have pure exponential expansion, i.e. the de Sitter model corresponding to minimal coupling, since in the case F = 1, Eqs. (9) and (23) lead to H = λ and ω = 0.
For the other values of the constant k, we obtain intermediate inflation, that is, a regime of accelerated expansion of the universe between the pure exponential and power-law evolutions [45–47].
The connection between the class of GST gravity and the potential
We now turn to the connection between the scalar field potential and the parameters of GST gravity for inflationary QCMs. First, substituting the connection (22) between the slow-roll parameters into (14), we obtain Eq. (28). In addition, using the expressions (9), (13), and (24), it is not difficult to find the connection (29) between the slow-roll parameter and the coupling function. Substituting (29) into (28) and taking into account (9)–(10), we obtain the exact expression (30) for the potential. Under the slow-roll condition ϵ ≪ 1, the reduced potential is v ≈ 3, and the scalar field potential V(φ) can be approximated by its main term only (with the second and third terms negligible), as given by Eq. (33). The kinetic function, defined earlier by expression (23), takes the form (34). Substituting the expression (24) for the Hubble parameter into (9), we obtain Eq. (35). The last expression (35) defines a functional connection between the type of scalar field evolution φ(t) and the coupling function F(φ). Thus, Eqs. (30)–(35) completely define the relations between the parameters of inflationary models based on GST gravity with the quadratic connection (8) and the cosmological dynamics (24).
It is important to note that, under the conditions k = α = 0, the GST gravity action (1) is reduced to the case of Einstein gravity with a cosmological constant if we consider the pure exponential expansion (27). It is clear that, using (35) with k = α = 0, we get F(φ) = 1. Such a situation is considered for Higgs inflation [48] and in modified gravity with higher derivatives [49,50]. Thus, the non-minimal coupling F(φ) between the scalar field and the scalar curvature induces the following: (i) a deviation of the scalar field potential from the flat one, because V ≠ const in (33); (ii) a deviation of the accelerated expansion regime Ḣ > −H² from the de Sitter model, because H ≠ const; and (iii) a determination of the evolution of the scalar field itself, φ(t). Now, we will analyze the correspondence between physically motivated scalar field potentials and well-known classes of GST gravity theories in the framework of QCMs. Expression (33) in the slow-roll approximation and Eqs. (34)–(35) are the basis of our investigation of QCMs.
We also note that the use of the slow-roll approximation is a common practice for analyzing cosmological inflationary models (see, for example, [18,19]).
Chaotic inflation with the massive scalar field
First, we consider chaotic inflation with a massive scalar field [2,18,19,47], with the potential V(φ) = m²φ²/2. From Eqs. (33)–(34) we find the correspondence to the Brans–Dicke gravity, where the mass of the scalar field, m² = 6λ², defines the energy scale parameter. From (35) and (38) we obtain the scalar field evolution corresponding to this inflationary model.
Chaotic inflation with quartic potential
Considering the case of chaotic inflation with a quartic potential V(φ) ∝ φ⁴ [18,19], where λ_C is the self-coupling constant of the scalar field, from expressions (33)–(34) we find the correspondence to induced gravity [23], where the energy scale parameter λ² is defined by the coupling constant ξ of the scalar field and the self-coupling constant λ_C. From Eqs. (35) and (42), one obtains the corresponding evolution of the scalar field for this inflationary model. To realize the transition to the pure exponential expansion H = λ under the conditions k = α = 0, from (41)–(45) one has φ = ±1/√ξ, F = 1, ω = −2ξ, X = −(ω/2)φ̇² = 0, and V = 3λ², corresponding to Einstein gravity.
Inflation with the Higgs potential
Further, we consider inflation with the Higgs potential V(φ) ∝ (φ² − σ²)² [18,19,48], where λ_H is the self-coupling constant and σ is the vacuum expectation value of the Higgs field. From expressions (33)–(34) one obtains the non-minimal coupling [23] and the kinetic function, where the energy scale constant and the non-minimal coupling constant are fixed accordingly, and the corresponding scalar field evolution follows. For the transition to the pure exponential expansion H = λ under the conditions k = α = 0, from (46)–(50) one has φ = 0, F = 1, ω = 0, X = −(ω/2)φ̇² = 0, and V = 3λ², corresponding to Einstein gravity.
As one can see, we obtain a good correspondence between the physical potentials and well-known scalar-tensor gravity theories in slow-roll approximation in QCMs. Also, one has a transition to the case of the de Sitter model based on Einstein gravity with a cosmological constant as the source of the pure exponential expansion of the universe.
On the basis of Eqs. (33)- (35), one can define the parameters of the scalar-tensor gravity theories for the other types of the physically motivated potentials [18]. On the other hand, one can reconstruct the type of scalar field potential by using expression (33) for any coupling function F = F(φ) and the other corresponding model's parameters from (34)-(35) as well.
Since the specific type of the inflationary scenario is unknown, we will consider a model-independent verification procedure for an arbitrary type of potential or a type of scalar-tensor gravity, which are connected by relations (33)- (34).
This approach differs from usual verification of the standard inflationary models based on Einstein gravity, in which it is necessary to determine the specific form of the scalar field potential [1,2,18,19].
Parameters of cosmological perturbations
Let us consider the verification of the QCMs proposed in Sect. 4 against the observational constraints on the values of the cosmological perturbation parameters.
The parameters of cosmological perturbations in inflationary models based on the action (1) with the quadratic connection H = λ√F for arbitrary dynamics were considered in [29–32].
The constraints on the values of the parameters of cosmological perturbations, following from the observations of the CMB anisotropy by Planck [12], restrict the parameters of the inflationary models. The expressions for the parameters of cosmological perturbations at the crossing of the Hubble radius were presented in [29–32]; in these expressions, A_S and A_T are the values of the power spectra of scalar and tensor perturbations P_S and P_T at the crossing of the Hubble radius, n_S and n_T are the spectral indices of scalar and tensor perturbations, and the third slow-roll parameter also enters.
In addition, we note that the velocities of scalar and tensor perturbations for such a cosmological model are equal to the speed of light in vacuum, c_S = c_T = 1 [29,30], which corresponds to the modern observational constraint |c_T − 1| ≤ 5 × 10⁻¹⁶ on the velocity of gravitational waves [24].
For any type of cosmological dynamics, from (58) and (60) we find the corresponding expression for the energy scale parameter.
For the reconstructed type of the Hubble parameter (24), we obtain the corresponding expressions; substituting them into Eqs. (60)–(61) gives Eqs. (66)–(67). Thus, from (66)–(67), one obtains the dynamic parameter (68) in terms of the parameters of cosmological perturbations. Finally, from the constraints (55)–(57) and the expressions (64) and (68), we obtain model-independent conditions on the energy scale parameter λ² and the dynamic parameter k, corresponding to verified inflationary models for any potential V(φ), any type of scalar–tensor gravity F(φ), and any kinetic function ω(φ) connected by Eqs. (33)–(34). As one can see, the observational constraints (55)–(57) lead to the model-independent condition (70) on the dynamic parameter k, corresponding to the constraint (23) for the Brans–Dicke gravity.
In Fig. 1, the dependence r = r(n_S) following from (68) is shown for different values of the dynamic parameter k. For inflationary models to be verified, the values of the tensor-to-scalar ratio r and the spectral index of scalar perturbations n_S must fall into the inner or outer regions corresponding to the 68% and 95% confidence levels [12,13].
Also, we note that the future refinement of the observational constraints (55)–(57) leads only to a refinement of the conditions on the constant parameters of the considered cosmological models, (64) and (68), and it does not eliminate the possibility of verification of such models.

Fig. 1 The dependencies r = r(n_S) for different values of the parameter k, with constraints on the tensor-to-scalar ratio r due to the Planck TT,TE,EE+lowE+lensing+BAO+BICEP2/Keck Array observations [12,13]
Nevertheless, we note that compliance with the observational constraints on the values of the parameters of cosmological perturbations is an indirect verification of inflationary models. This statement follows from the fact that relic gravitational waves have not been detected directly; at the moment, only indirect estimates of the contribution of tensor perturbations to the anisotropy and polarization of the CMB are considered, which lead to the upper bound (57) on the value of the tensor-to-scalar ratio [12,13]. Also, various models of cosmological inflation can satisfy the observational constraints (55)–(57) while differing in the spectra of relic gravitational waves (see, for example, [51–54,58]).
Thus, the direct verification of these inflationary models can be carried out by the detection of relic gravitational waves at the present time. To analyze the possibility of direct detection of relic gravitational waves predicted by these models, it is necessary to consider their spectrum taking into account the specific post-inflationary evolution of the universe in the proposed cosmological models compared with standard inflationary models.
Stiff energy-dominated era
For the standard inflation based on Einstein gravity, the radiation-dominated (RD) era with a state parameter w E = 1/3 occurs after the end of inflation [1,2].
The state parameter for the case of standard inflation is defined as follows [1,2]: one has w_E ≈ −1 at the inflationary stage for ε_E ≪ 1, w_E = −1/3 at the end of inflation for ε_E = 1, and w_E = 1/3 for the transition to the radiation-dominated era with ε_E = 2. Now, we consider the state parameter of a scalar field for inflationary models based on GST gravity with the connection H = λ√F in terms of the reduced potential and kinetic energy (14)-(15). At the inflationary stage, under the conditions ε ≪ 1 and δ ≪ 1, from (73) one has w ≈ −1. At the end of inflation, with ε = δ = 1, one has w_S = 1, and for the transition to the radiation-dominated era one has ε = δ = 2; in this case the state parameter is w_S = 3/5.
Thus, for such cosmological models, an intermediate epoch takes place between the end of inflation and the beginning of the radiation-dominated era. This intermediate epoch can be considered as the stiff energy-dominated (SD) era discussed in [51][52][53][54].
The generalized analysis of cosmological models implying a stiff energy-dominated era with state parameter 1/3 < w_S ≤ 1 was presented in [51][52][53][54]. Quintessential inflation, which implies an additional kinetic energy-dominated stage with w_S = 1, was considered, for example, in [55][56][57]. The spectrum of relic gravitational waves corresponding to n_T = 0 and w_S > 1/3, similar to the case of the proposed cosmological models, was considered in [51].
Here, in our work, we are not limited to any specific model. We will consider an additional stiff energy-dominated era between the end of inflation and the radiation-dominated era with state parameter 3/5 ≤ w_S ≤ 1 (74) for an arbitrary model of cosmological inflation based on the GST gravity with connection H = λ√F. Since cosmological models with an additional SD stage imply a blue-tilted spectrum of relic gravitational waves (see, for example, [51][52][53][54]), one can estimate their maximal energy density taking into account the ultraviolet cutoff frequency of the spectrum due to the Big Bang nucleosynthesis (BBN) constraint [58].
Spectrum of relic gravitational waves in QCMs
Direct detection of relic gravitational waves is important for verifying the validity of the inflationary paradigm in describing the evolution of the early universe. According to inflationary cosmology, relic gravitational waves at the present time fill the universe as a stochastic background [51][52][53][54].
The energy density of relic gravitational waves ρ_GW is usually defined in terms of the following dimensionless quantity [51][52][53][54], where ρ_c is the critical density and f is the frequency of relic gravitational waves. The spectrum of relic gravitational waves at the present time for cosmological models with an additional SD stage can be defined by the expression given in [53,54], where the plateau of the spectrum is set by the inflationary parameters, the parameter α_S is defined by the state parameter w_S, and f_RD is the present-day frequency corresponding to the horizon scale at the SD-to-RD transition. The reduced Hubble parameter at the present time is estimated as h ≈ 0.68 [12].
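For reference, the standard definition of this dimensionless quantity and a schematic broken-power-law shape of the spectrum implied by an extra SD stage are shown below; the exact prefactors of the full expression in [53,54] are not reproduced here, and Ω_flat stands for the plateau value referred to in the text:

```latex
% Standard definition of the GW density parameter and a schematic
% broken-power-law spectrum; Omega_flat and f_RD as in the text,
% the detailed prefactors of [53,54] are omitted.
\Omega_{GW}(f) \;=\; \frac{1}{\rho_c}\,\frac{d\rho_{GW}}{d\ln f},
\qquad
\Omega_{GW}(f)\;\simeq\;\Omega_{\rm flat}\times
\begin{cases}
1, & f \lesssim f_{RD},\\[4pt]
\left(\dfrac{f}{f_{RD}}\right)^{\alpha_S}, & f_{RD}\lesssim f \lesssim f_* .
\end{cases}
```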
As one can see, for the case w_S = 1/3 the parameter α_S = 0, and the spectrum of relic gravitational waves is close to a flat one.
Also, we note that for f ≫ f_RD and α_S > 0 the spectrum grows with frequency; that is, the energy density of relic gravitational waves Ω_GW can be defined as follows. Now, we consider the LIGO bound, which can be obtained from the LIGO optimal sensitivity Ω_GW ≈ 10^{−9} for gravitational waves with frequencies f ≈ 10^2 Hz [59]; that is, one has the constraint Ω_GW(f ≈ 10^2 Hz) < 10^{−9}.
Thus, to match the Planck constraint on the value of the tensor-to-scalar ratio (57) and the LIGO bound on the energy density of gravitational waves (79), we use expression (78) and obtain the corresponding restriction. On the basis of the state parameter of stiff energy for the cosmological models under consideration (74) and expressions (77) and (80), we obtain 10^{−10} Hz ≲ f_RD ≲ 10^{−5} Hz. To find the cutoff frequency of the spectrum of relic gravitational waves f_*, one can use the BBN constraint [58], where f_BBN ≈ 1.41 × 10^{−11} Hz. This constraint is derived from the fact that a large gravitational-wave energy density at the time of BBN would alter the abundances of the light nuclei produced in this process.
Taking into account that f_* ≫ f_RD and f_* ≫ f_BBN, from (78) and (83) one obtains the cutoff frequency. Thus, from the constraint on the value of the tensor-to-scalar ratio (57) and from (81)-(82), one has the following cutoff frequency of the spectrum of relic gravitational waves: 10^5 Hz ≲ f_* ≲ 2 × 10^7 Hz.
After substituting (81) and (85) into (78), we obtain the energy density of gravitational waves at the present era corresponding to the cutoff frequency (85). Figure 2 shows the spectra of relic gravitational waves predicted by the proposed inflationary models. The energy density corresponding to the flat part of the spectra is Ω_GW ≈ 1.8 × 10^{−16}, and the maximum energy density is reached at the cutoff frequency. The dimensionless amplitude of relic gravitational waves can be obtained from the expression for h_c given in [58,62]. Thus one obtains the maximal amplitude (88) of relic gravitational waves with frequencies (85) and energy density (86). After estimating the parameters of relic gravitational waves in the proposed cosmological models, we consider the possibility of detecting them.
As a promising method for registration of low-frequency gravitational waves, the use of the satellite cluster as an interferometric gravitational wave detector can be considered [38][39][40].
For such projected satellite detectors, namely the Laser Interferometer Space Antenna (LISA) [39] and the Deci-Hertz Interferometer Gravitational-Wave Observatory (DECIGO) [40], the best sensitivities are given in [41]. From expressions (78) and (81)-(82) we obtain the maximal energy density of relic gravitational waves, of order 2 × 10^{−14}, for the frequencies f_L ≈ 10^{−3} Hz and f_D ≈ 10^{−1} Hz predicted in QCMs. Also, for the standard inflation with α_S = 0, from (76) we get the corresponding flat-spectrum value. Thus, the relic gravitational waves predicted in QCMs can in principle be registered by LISA and DECIGO, and for the case of standard inflation, relic GWs with frequency close to f ≈ 10^{−1} Hz can be registered by DECIGO as well. A joint analysis of data from the future observations of LISA and DECIGO can therefore make it possible to determine which type of cosmological inflationary model is correct: models with an additional stage of stiff energy domination or standard inflation.
However, it should be noted that the implementation of such projects is expected no earlier than the 2030s [41].
Another method of direct verification of the proposed QCMs is the registration of high-frequency relic gravitational waves in the frequency range (85) with amplitudes (88), corresponding to the maximum energy density (86).
Among the existing and prospective detectors of high-frequency gravitational waves [42], part of this frequency range, f = 1-13 MHz, is covered by the ground-based Fermilab Holometer with sensitivity h_c ≈ 8 × 10^{−22} [42][43][44], which consists of two co-located power-recycled Michelson interferometers. On the basis of expression (88), we can conclude that relic gravitational waves predicted by QCMs cannot be registered by this detector.
Thus, a significant improvement in the sensitivity of gravitational wave detectors in frequency range (85) is required for direct verification of these cosmological models by registration of the high-frequency gravitational waves.
Conclusion
In this work, we considered models of cosmological inflation based on the generalized scalar-tensor theory of gravity with a quadratic coupling between the Hubble parameter and the non-minimal coupling function (8). In these QCMs, the de Sitter stage induced by a cosmological constant corresponds to Einstein gravity, while the non-minimal coupling between the scalar field and the scalar curvature leads to deviations from the de Sitter stage. Therefore, the correspondence of the theory of gravity at present to the case of general relativity (minimal coupling) leads naturally to the ΛCDM model [63][64][65] for describing the second accelerated expansion of the universe within the framework of the QCMs. It is necessary to note that the ΛCDM model is in good agreement with observational data for the current stage of the universe's evolution [12].
Obviously, it is necessary to determine the relationship between the physically motivated scalar field potentials and well-known types of GST gravity. Such a relationship was found for a particular type of inflationary dynamics corresponding to the Hubble parameter (24).
In contrast to inflationary models based on general relativity, QCMs are verified by observational constraints on the values of cosmological perturbation parameters for an arbitrary potential of the scalar field, that is, for an arbitrary realization of the inflationary scenario.
Observational constraints on the values of cosmological perturbation parameters (55)-(57) restrict only the values of two constant parameters of QCMs, namely, the inflationary energy scale parameter λ 2 and the dynamic parameter k, which determines the expansion rate of the early universe. This result is achieved not by choosing the parameters of the cosmological model, but by means of the certain relationship (8) between the model's parameters. Obviously, using the relation (8) is not the only way to obtain verifiable cosmological models. However, this approach provides new opportunities to construct phenomenologically correct cosmological models on the basis of certain relations between the model's parameters regardless of how the inflationary scenario was implemented.
Thus, such an approach differs significantly from the reconstruction procedure of the scalar field potential from the parameters of cosmological perturbations in the framework of GR [14,15] or from the reconstruction of modified gravity theories from the universe expansion history [3,4], which also imply different inflationary scenarios.
As an interesting feature of the proposed cosmological models, we note that the restriction on the dynamic parameter k < 0.973, which follows from the constraints on the parameters of cosmological perturbations (55)-(57), corresponds to the restriction k < 1 following from the constraint on the Brans-Dicke gravity (20) for any type of inflationary model based on STG with relation (8).
Another property of QCMs is the presence of the stiff energy-dominant stage after the end of inflation and before the beginning of the radiation-dominant stage. In these models, the stiff energy state parameter has a fairly wide range of values 3/5 ≤ w S ≤ 1, which affects the spectrum of relic gravitational waves.
An analysis of the spectrum of relic gravitational waves in the proposed models allowed us to determine the frequency range 10^5 Hz ≲ f_* ≲ 2 × 10^7 Hz in which their energy density is maximal. However, in this range the sensitivity of modern detectors of high-frequency gravitational waves is insufficient for direct registration.
Nevertheless, the low-frequency relic gravitational waves predicted in the proposed cosmological models can in principle be registered in future measurements by advanced low-frequency gravitational-wave satellite detectors [39][40][41], which is an additional way to verify the proposed cosmological models. | 7,982.8 | 2022-07-01T00:00:00.000 | [
"Physics"
] |
New techniques and results for worldline simulations of lattice field theories
We use the complex $\phi^4$ field at finite density as a model system for developing further techniques based on worldline formulations of lattice field theories. More specifically we: 1) Discuss new variants of the worm algorithm for updating the $\phi^4$ theory and related systems with site weights. 2) Explore the possibility of canonical simulations in the worldline formulation. 3) Study the connection of 2-particle condensation at low temperature to scattering parameters of the theory.
Introduction
In recent years we have seen a fast development of dualization techniques for lattice field theories which is largely motivated by exploring new approaches for overcoming complex action problems for simulations at finite chemical potential (see, e.g., the reviews [1][2][3][4][5]). In a dual formulation the lattice field theory is exactly rewritten in terms of new variables which correspond to worldlines for matter fields and worldsheets for gauge fields. If the weights for the dual configurations are real and positive a Monte Carlo simulation can be done directly in terms of the dual variables and the complex action problem is solved.
In this contribution we continue the development and testing of new worldline techniques in the complex φ⁴ model system. More specifically, we revisit the problem of optimizing the worm algorithm, explore the possibility of canonical worldline simulations, and study the relation of the 2-particle condensation thresholds at low temperatures to scattering data of the theory. All three topics, i.e., improved worm algorithms, canonical worldline simulations, and the relation of scattering data to low-temperature condensation, are methods and phenomena with implications beyond the simple φ⁴ theory, which here is merely used for developing the techniques and physical ideas.
Worldline representation of the complex φ 4 field
As already outlined in the introduction, we here discuss techniques and results developed for the worldline representation of the complex φ 4 field. Although the worldline representation is well known, we briefly summarize it in this section to make clear its form and our notations.
The lattice action for the conventional representation of the complex φ⁴ field in d dimensions is given by Eq. (1). In the conventional representation the degrees of freedom are the complex valued fields φ_x ∈ C assigned to the sites x of a d-dimensional lattice Λ with periodic boundary conditions. The spatial extent is denoted by N_s and the temporal extent by N_t. The latter corresponds to the inverse temperature β in lattice units, i.e., β = N_t. Here we consider the cases d = 2 and d = 4, i.e., we work on lattices with volumes V = N_s × N_t and V = N_s³ × N_t. The bare mass m enters via the parameter η ≡ 2d + m², and λ denotes the quartic self-interaction. The chemical potential μ gives a different weight to forward and backward propagation in the Euclidean time direction (ν = d). The grand canonical partition sum is given by Z = ∫ D[φ] e^{−S[φ]}, where in the path integral we integrate with a measure that is the product of complex integrals, D[φ] = ∏_x ∫_C dφ_x / 2π.
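For concreteness, one common way of writing this action, consistent with the conventions just described, is shown below; this is a sketch and not necessarily identical to the paper's Eq. (1):

```latex
% A common form of the lattice action for the complex phi^4 field at
% finite chemical potential mu, with eta = 2d + m^2 (sketch only).
S[\phi] \;=\; \sum_{x\in\Lambda}\Big(
\eta\,|\phi_x|^2 \;+\; \lambda\,|\phi_x|^4
\;-\;\sum_{\nu=1}^{d}\big[
e^{\,\mu\,\delta_{\nu,d}}\,\phi_x^{\,*}\phi_{x+\hat\nu}
\;+\;
e^{-\mu\,\delta_{\nu,d}}\,\phi_x^{\,*}\phi_{x-\hat\nu}
\big]\Big).
```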
It is obvious that for non-zero μ the action has a non-vanishing imaginary part, and thus we face a complex action problem. However, this complex action problem may be overcome by exactly transforming the partition sum to a dual representation (see, e.g., [14] for the derivation of the form we use here). In the dual representation the grand canonical partition function is a sum Σ_{k} over configurations of flux variables k_{x,ν} ∈ Z assigned to the links of the lattice. The flux variables k_{x,ν} are subject to constraints, which in the dual form (2) are represented by the product ∏_{x∈Λ} δ(∇·k_x) over all sites x of the lattice, where by δ(j) ≡ δ_{j,0} we denote the Kronecker delta. ∇·k_x is the lattice form of the divergence of k_{x,ν}, given by ∇·k_x ≡ Σ_ν (k_{x,ν} − k_{x−ν̂,ν}). The zero-divergence constraint implies that the net flux of k_{x,ν} vanishes at every site x, and consequently the admissible configurations of the k_{x,ν} have the form of worldlines of conserved flux. In Fig. 1 we show an example of an admissible configuration of k-flux. From the figure it is obvious that the worldlines can wind around the spatial and temporal directions due to the periodic boundary conditions. The example of Fig. 1 is a worldline configuration with a temporal net winding number of +2 (the vertical direction is time in the illustration). We denote the temporal winding number of a configuration of k-worldlines by ω[k], and this temporal winding number is exactly the geometrical quantity the chemical potential μ couples to. ω[k] appears in the form e^{μ β ω[k]}, such that from the canonical form of the coupling of particle number N and chemical potential μ we can identify the temporal winding number ω[k] with the net particle number N. Note that this form of the net particle number as a topological quantity (the temporal winding number) allows one to uniquely assign an exact integer net particle number to any given configuration of the worldline variables. Thus the worldline formulation is very suitable for canonical simulations, which we will address in Section 4. This aspect is different from the conventional representation, where one has to work with a discretization of the continuum Noether charge, which does not give an integer result.
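The zero-divergence constraint and the identification of the particle number with the temporal winding number are simple to check on a given flux configuration. The following sketch (the array layout and the convention that the last direction is Euclidean time are our own assumptions, not the paper's code) illustrates both:

```python
import numpy as np

def divergence(k):
    """Lattice divergence of a flux field k with shape (d, N_1, ..., N_d).

    k[nu, x] is the flux on the link leaving site x in direction nu;
    periodic boundary conditions are assumed in all directions.
    """
    d = k.shape[0]
    div = np.zeros(k.shape[1:], dtype=int)
    for nu in range(d):
        # outgoing flux minus the flux coming in from the neighboring site
        div += k[nu] - np.roll(k[nu], 1, axis=nu)
    return div

def is_admissible(k):
    """True if the flux configuration satisfies div k = 0 at every site."""
    return bool(np.all(divergence(k) == 0))

def temporal_winding(k):
    """Net temporal winding number = net particle number N.

    Summing the temporal flux component over one fixed time slice counts
    the net flux through that slice; flux conservation makes the result
    independent of which slice is chosen.
    """
    k_t = k[-1]                       # flux component in the time direction
    slice0 = np.take(k_t, 0, axis=-1) # fixed time slice t = 0
    return int(slice0.sum())

# Example: two straight worldlines winding once around a 4x8 lattice in time
k = np.zeros((2, 4, 8), dtype=int)
k[1, 0, :] = 1                        # worldline at spatial site 0
k[1, 2, :] = 1                        # worldline at spatial site 2
assert is_admissible(k)
print(temporal_winding(k))            # -> 2
```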
In addition to the chemical potential factor e^{μ β ω[k]} and the Kronecker constraints, each configuration of k-fluxes also comes with a weight factor B[k].
Figure 1: Example of a configuration of dual variables. The dual variables correspond to conserved flux and thus must form worldlines. Their temporal winding number corresponds to the net particle number of the configuration, i.e., the example configuration we show has a net particle number of +2.
The factor B[k] is itself a sum over configurations {a} of integer-valued auxiliary link variables a_{x,ν} ∈ N_0 and is given by Eq. (3). Obviously the integrals I(s_x) come from integrating out the radial degrees of freedom of the original field variables at site x. The argument s_x is a non-negative integer combination of the auxiliary variables and the moduli of all k-fluxes that run through x. For a numerical simulation the integrals I(s_x) can be pre-calculated and stored for sufficiently many values of the arguments s_x ∈ N_0. Since all weights, those in the dual partition sum (2) as well as those in the weight factor B[k], are real and positive, we can analyze the system in a Monte Carlo simulation using the flux variables k_{x,ν} and the auxiliary variables a_{x,ν}. Thus the dual representation solves the complex action problem.
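Assuming the usual form of the radial integrals for this dualization, I(s) = ∫_0^∞ dr r^{s+1} exp(−η r² − λ r⁴) (the explicit integrand is our assumption; the text only refers to the weights as I(s_x)), a table of site weights can be pre-computed along the following lines:

```python
import numpy as np
from scipy.integrate import quad

def site_weight_table(eta, lam, s_max):
    """Pre-compute I(s) = int_0^inf dr r^(s+1) exp(-eta r^2 - lam r^4)
    for s = 0 ... s_max (assumed form of the radial site integrals)."""
    table = np.empty(s_max + 1)
    for s in range(s_max + 1):
        integrand = lambda r, s=s: r ** (s + 1) * np.exp(-eta * r * r - lam * r ** 4)
        table[s], _ = quad(integrand, 0.0, np.inf)
    return table

# Example: weights for eta = 4.01, lambda = 1.0, arguments up to s = 40
I = site_weight_table(eta=4.01, lam=1.0, s_max=40)
print(I[:4])
```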
It is straightforward to also transform observables to their dual representation. The observables we are interested in here are derivatives of the free energy density f ≡ −ln Z / (N_s^{d−1} β) = −ln Z / V with respect to the parameters. These derivatives can then be evaluated also in the dual formulation, and the result is the dual representation of the observables. For the examples of the particle number density n = ⟨N⟩ / N_s^{d−1} and the field expectation value ⟨|φ|²⟩, the corresponding dual representations are given in (4). The vacuum expectation values on the right-hand sides are now understood in the dual representation.
As we have seen in the previous section, the variables of the dual representation are subject to constraints. In particular, the flux variables k_{x,ν} must obey the zero-divergence condition at every site x, and consequently the admissible configurations are worldlines of conserved flux. For this type of system the Prokof'ev-Svistunov worm algorithm [25] provides a powerful strategy which takes into account the constraints in a natural way. However, the φ⁴ field has a feature that goes beyond simpler cases such as the Ising or O(2) models, where the original degrees of freedom are group valued. The scalar field φ_x not only has a complex phase in U(1), but also a radial degree of freedom, i.e., the modulus |φ_x|. Obviously the modulus is essential for the physics of the theory, since it appears in the mass term and the quartic term whose interplay determines whether the model is in the massive or in the Higgs phase. As can be seen from the form of the weights (3) of the dual representation, the radial degrees of freedom give rise to the specific weight factors I(s_x) that live on the sites x and, via s_x = Σ_ν [ |k_{x,ν}| + |k_{x−ν̂,ν}| + 2(a_{x,ν} + a_{x−ν̂,ν}) ], depend on the fluxes on all links attached to x. This makes the worm algorithm more involved, since for every link (x,ν) which the worm adds to its contour we do not know the final weight at the endpoint x+ν̂ of the link, because this weight is fully determined only after the subsequent step of the worm. This feature leads to an imbalance, in particular for the initial and final steps of the worm, which may render the algorithm inefficient in some parameter range unless suitable additional strategies are invoked, which we discuss here (see also [17]).
Before we come to addressing the new ideas for worm algorithms for systems with additional site weights, let us point out that the site weights might also come from different sources, in particular when the Haar measure from a non-abelian group enters the weight factors (see [26][27][28] for examples).
To have a more general notation that covers all such cases, we consider a system of conserved flux k_{x,ν} with partition sum (5) (the auxiliary variables a_{x,ν} of (3) are here considered as a background field that can be updated with a conventional local Monte Carlo update). Here we allow for link weights L_{x,ν}(k_{x,ν}) that depend on the value of k_{x,ν} on that particular link as well as on the position of the link, such that also the spatial dependence from the background field a_{x,ν} of (3) is taken into account. We also include site weights S_x({k_{x,•}}) that depend on the fluxes k_{x,•} on all links attached to x. It is easy to see that the dual formulation (2), (3) of the charged φ⁴ field is of the form (5), as are other theories such as the worldline form of the principal chiral model [26,27]. For systems of the form (5) we now present two new strategies for efficient worm algorithms. The first one is the introduction of an amplitude parameter A that can be used to tune the length of the worms and thus the average number of fluxes changed by every completed worm. The second idea is to impose an even-odd decomposition of the lattice and to use different steps for moves of the worm that lead from an even to an odd site and moves that lead from an odd to an even site.
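Written out, the generic constrained-flux system introduced at the beginning of this paragraph (our schematic rendering of the structure referred to as Eq. (5)) has the form:

```latex
% Schematic form of the constrained flux model with link and site weights
% (our rendering of the structure referred to as Eq. (5) in the text).
Z \;=\; \sum_{\{k\}}
\Big[\prod_{x\in\Lambda}\delta\big(\vec\nabla\cdot k_x\big)\Big]
\Big[\prod_{x,\nu} L_{x,\nu}\big(k_{x,\nu}\big)\Big]
\Big[\prod_{x\in\Lambda} S_x\big(\{k_{x,\bullet}\}\big)\Big].
```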
In general a worm consists of three parts (we here use the convention k_{x,−ν} ≡ k_{x−ν̂,ν}):
-Worm start: The worm randomly chooses a flux increment Δ ∈ {−1, 1}, a starting site x_0 and an initial direction ν_0 ∈ {±1, ..., ±d}. A defect that violates the flux conservation constraints is inserted by proposing to change k_{x_0,ν_0} → k_{x_0,ν_0} + sign(ν_0)Δ, which is accepted with a Metropolis decision with Metropolis ratio ρ^S_{x_0,ν_0}. Upon acceptance the worm head proceeds to x_1 = x_0 + ν̂_0.
-Worm propagation: Departing from the current site x_j of the head of the worm, a new direction ν_j ∈ {±1, ..., ±d} is chosen randomly and the change k_{x_j,ν_j} → k_{x_j,ν_j} + sign(ν_j)Δ is proposed. The propagation steps are accepted with a different Metropolis ratio ρ^P_{x_j,ν_j}. The algorithm keeps proposing new directions until one is accepted, and the head of the worm then proceeds to x_{j+1} = x_j + ν̂_j.
-Worm closing: In case the site x_{j+1} = x_j + ν̂_j that would be reached in an accepted propagation step coincides with the starting site of the worm, i.e., x_{j+1} = x_0, the worm uses a different Metropolis ratio ρ^C_{x_j,ν_j} for accepting that step. If accepted, the worm terminates, thus healing the initially induced defect, and the new configuration again obeys all flux conservation constraints.
We will show that the assignments (6) for the Metropolis ratios ρ^S_{x_0,ν_0}, ρ^P_{x_j,ν_j} and ρ^C_{x_{n−1},ν_{n−1}} give rise to a worm algorithm that obeys detailed balance for systems of the form (5), where for the Metropolis ratio of the closing step we have already assumed that the worm has length n, i.e., the closing step leads from x_{n−1} to x_n = x_0. The role of the amplitude A ∈ R_+ will be discussed later. For the subsequent proof of detailed balance we introduce the notation w = (Δ, x_0, ν_0, ν_1, ..., ν_{n−1}) for a worm, which denotes all steps of the worm in a compact form.
In order to establish that the worm generates the correct probability distribution W(C_k) for flux configurations C_k, we show the sufficient (but not necessary) detailed balance condition (7), where P(C_k → C'_k) is the probability to go from configuration C_k to C'_k. Note that here C'_k denotes the new configuration after the worm has closed, since intermediate steps of the worm do not correspond to admissible configurations that obey the constraints. The configuration C'_k differs from C_k by a set of links where the flux variables k_{x,ν} were changed. It is important to note that there is an infinity of loops that give rise to that particular change of flux. These worms can differ by their starting point x_0 and their flux increment Δ, but also by the sequence of directions ν_j, since the worm can produce dangling ends where it retraces previous steps and thus reverts changes of flux. Consequently we need to write the detailed balance condition (7) in the form (8), where the transition probabilities P(C_k → C'_k) are written as sums over the set {w} of all worms that lead from C_k to C'_k, and by {w'} we denote the corresponding set of worms that go from C'_k to C_k. The transition probability for an individual worm w is denoted by P_w(C_k → C'_k). A sufficient solution of (8) is to show the existence of a bijective mapping f between {w} and {w'} such that each pair of a worm w and the inverse worm w^{-1} = f(w) individually obeys detailed balance (9). We can identify a suitable bijective mapping f by defining the inverse worm w^{-1} = f(w) as the worm with the same flux increment Δ and the same starting point x_0, but reversed orientation (10). Obviously w^{-1} transforms C'_k back to C_k. It is important to stress that with the choice (10), at every site of the worm path w and w^{-1} encounter the same values of the variables k_{x,ν}, such that the probabilities for accepting the steps are computed in the same background of flux variables. The transition probabilities P_w(C_k → C'_k) in (9) are now obtained as the product of the probabilities for the individual steps in the worm w. For the three different parts, starting, propagating and closing, the corresponding probabilities are P^S_{x_0,ν_0}, P^P_{x_j,ν_j} and P^C_{x_{n−1},ν_{n−1}}, where we have normalized the probabilities by summing over all possible choices at each step. For the starting step this is the trivial factor 2dV for choosing the first link and its orientation, while for the propagating and closing steps the normalization factor N_{x_j} is obtained by summing over all possible directions. Thus the transition probability of a worm path reads P_w(C_k → C'_k) = P^S_{x_0,ν_0} ∏_{j=1}^{n−2} P^P_{x_j,ν_j} P^C_{x_{n−1},ν_{n−1}}, and the sufficient condition (9) for detailed balance turns into (12), where the probabilities of the individual steps of the inverse worm are marked with a tilde. We stress again that with our choice of w^{-1}, at each step the worm w and its inverse w^{-1} make their decisions for the next direction in the same background of flux variables, such that all normalizations cancel between the numerator and the denominator. Thus (12) reduces further to a ratio of Metropolis factors, where we again use tildes to denote the Metropolis ratios of the inverse worm. We also have reordered the factors such that we match the steps of w and w^{-1} at the same link in the same factor. Inspecting our chosen Metropolis ratios (6) one easily shows the properties (14). Finally we make use of the identity min{ρ, 1}/min{ρ^{−1}, 1} = ρ for ρ > 0 and find that the condition is indeed satisfied. This concludes our proof of detailed balance.
In our choice for the Metropolis ratios of the starting and closing steps in (6) we have included the amplitude parameter A ∈ R_+, and obviously detailed balance holds for arbitrary choices of A. The usefulness of the parameter A becomes clear when inspecting the Metropolis ratios for the starting and closing steps. As already discussed, the fact that the site weights of (5) depend on all fluxes k_{x,ν} attached to a site x implies that we cannot choose the Metropolis ratios for the starting and closing steps with the same number of site weights in the numerator and the denominator. As a consequence, in the starting ratio ρ^S_{x_0,ν_0} two factors of site weights S_x({k_{x,•}}) appear in the denominator. The typical numerical values of the site weights S_x({k_{x,•}}) will of course depend on the coupling values where the simulation is performed. For couplings where the site weights are large, the form of ρ^S_{x_0,ν_0} will give rise to very small probabilities for accepting the starting step of a worm. In turn, the ratios ρ^C_{x_{n−1},ν_{n−1}} for the closing step, where the site weights appear only in the numerator (see (6)), will be larger than 1, such that every closing attempt is accepted. Thus in such a region of coupling space we have worms that hardly start and then quickly terminate, i.e., are very short. The amplitude factor in (6) can now be used to eliminate, or at least mitigate, this problem. Choosing A proportional to the square of the characteristic site weight gives suitable starting and closing probabilities and thus worms of reasonable length, as was also confirmed in numerical tests documented in [17].
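Assembled from the three parts described above, a worm update with the amplitude parameter A might look like the following sketch. The concrete Metropolis ratios of Eq. (6) are not reproduced in this excerpt, so generic callables ratio_start, ratio_prop and ratio_close stand in for ρ^S, ρ^P and ρ^C; where exactly A enters those ratios is the sketch's own choice, not necessarily the paper's.

```python
import random

def worm_update(k, ratio_start, ratio_prop, ratio_close, dims, A=1.0, max_steps=10**6):
    """One worm update for a flux field k (dict: link -> int flux).

    ratio_start(x, nu, delta), ratio_prop(x, nu, delta), ratio_close(x, nu, delta)
    are user-supplied Metropolis ratios playing the roles of rho^S, rho^P, rho^C.
    A multiplies the start ratio and divides the close ratio here; since every
    completed worm contains exactly one start and one close, these factors cancel
    in the overall weight (a sketch-level choice, not the paper's Eq. (6)).
    """
    d = len(dims)
    directions = [+(nu + 1) for nu in range(d)] + [-(nu + 1) for nu in range(d)]

    def step(x, nu):                       # site reached when moving from x in direction nu
        axis, sign = abs(nu) - 1, (1 if nu > 0 else -1)
        y = list(x); y[axis] = (y[axis] + sign) % dims[axis]
        return tuple(y)

    def change(x, nu, delta):              # apply k_{x,nu} -> k_{x,nu} + sign(nu)*delta
        link = (x, abs(nu) - 1) if nu > 0 else (step(x, nu), abs(nu) - 1)
        k[link] = k.get(link, 0) + (1 if nu > 0 else -1) * delta

    delta = random.choice([-1, +1])
    x0 = tuple(random.randrange(n) for n in dims)
    nu = random.choice(directions)
    if random.random() >= min(1.0, A * ratio_start(x0, nu, delta)):
        return False                       # start rejected, configuration unchanged
    change(x0, nu, delta)
    x = step(x0, nu)

    for _ in range(max_steps):
        nu = random.choice(directions)
        y = step(x, nu)
        if y == x0:                        # closing proposal
            if random.random() < min(1.0, ratio_close(x, nu, delta) / A):
                change(x, nu, delta)
                return True                # worm closed, constraints restored
        elif random.random() < min(1.0, ratio_prop(x, nu, delta)):
            change(x, nu, delta)
            x = y
    raise RuntimeError("worm did not close within max_steps")
```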
Let us now come to the discussion of the second new worm strategy we are currently exploring. It is based on an even-odd decomposition of the lattice and on the fact that a worm alternates steps leading from an even to an odd site with steps that lead from an odd to an even site. The key idea is to use different Metropolis acceptance probabilities for even-to-odd (ETO) and odd-to-even (OTE) steps of the worm. For the inverse worm, at each link of the contour the roles of ETO and OTE are reversed, such that in the detailed balance equation both ETO and OTE probabilities appear for each link. This allows for an additional freedom in the choice of the acceptance probabilities, which can be exploited to construct efficient worms for systems with the additional site weights we discussed.
As before, the worm starts by randomly selecting a flux increment Δ ∈ {±1}, a starting site x_0 and an initial direction ν_0 ∈ {±1, ..., ±d}. Subsequent steps are determined by randomly choosing new directions ν_j ∈ {±1, ..., ±d}, and in every step the change of flux k_{x_j,ν_j} → k_{x_j,ν_j} + sign(ν_j)Δ is proposed. If the change is accepted, the head of the worm proceeds from x_j to x_j + ν̂_j. Once the head reaches the starting site x_0, the worm terminates. Each step is accepted with a specific probability, and we use P^S_{x_0,ν_0} to denote the probability for the starting step, P^P_{x_j,ν_j} for the intermediate propagation steps and P^C_{x_{n−1},ν_{n−1}} for accepting the closing step. As already announced, the acceptance probabilities are now chosen with respect to an even-odd pattern. Per definition, the chosen starting site x_0 is labelled as even. Thus the first step of the worm is an ETO step, followed by alternating OTE and ETO steps. The final step is an OTE step, since the number of steps the worm needs for forming a closed contour is even (we stress that we assume that both N_s and N_t are even, such that this also holds for worms that wind around the boundaries). The probabilities for accepting the steps are now chosen as in (16), where the ratios ρ^P_{x_j,ν_j} and ρ^C_{x_{n−1},ν_{n−1}} are those of (6). Note that the even-odd decomposition allows one to eliminate backtracking for ETO steps, which is manifest in the choice of P^P_{x_j,ν_j} for ETO steps. The proof of detailed balance proceeds in the same way as before, i.e., we again write the overall transition probability as the sum over all worms that lead to the same change of flux and subsequently identify the inverse worm via the choice (10). The detailed balance condition (12) between a worm and its inverse now takes the form (17), where in the first step we have already cancelled the constant (i.e., flux-independent) probabilities for all ETO steps, including the starting probability. In the second step we have inserted the explicit expressions for the probabilities of the OTE steps from (16), and again the normalizations N_{x_j} cancel for our choice of w^{-1}. We have already remarked that the ETO steps of w^{-1} are the OTE steps of w. Thus in (17) non-trivial flux-dependent factors ρ^P indeed appear for all links of the contour of the worm. We can complete the proof of detailed balance by using the relations (14) to bring the factors from the denominator to the numerator and find that the condition is satisfied, where the last equality is obvious from the definitions (6). This concludes the proof of detailed balance for the even-odd worm algorithm. We are currently exploring the performance of the even-odd worms in numerical tests in the φ⁴ theory.
Canonical worldline simulations
Let us now come to our second topic, the use of the worldline representation for canonical simulations, i.e., simulations at a fixed net particle number. As we have discussed in Section 2, the net particle number N corresponds to the temporal winding number ω[k] of the worldlines of conserved k-flux. Thus for every dual configuration we can identify the net particle number in a unique way. For simulating in a sector with a fixed net particle number N, we can start with an initial configuration with a fixed net winding number ω[k] = N, e.g., by placing N simply winding temporal loops of k-flux, and then use a Monte Carlo update that does not change the winding number. This can either be implemented by reflecting the worms described in the previous section at the last time slice, or by offering local updates that change the flux around single plaquettes, combined with offering flux that winds only around the spatial boundaries. This simulation at fixed N corresponds to a simulation of the ensemble with canonical partition sum Z_N. An implementation of canonical worldline simulations based on these ideas was presented in [18]. The numerical simulations were done for the 2-dimensional charged φ⁴ field, and we here briefly summarize the key steps and some results. The approach described in the previous paragraph allows one to calculate observables at fixed net particle number N at a given spatial extent N_s, and thus at a net particle density n = N/N_s^{d−1} for the d-dimensional system. In principle one can then study these observables as a function of n and in this way describe all finite density physics.
Among the observables one can define in the canonical ensemble is also the chemical potential, which is defined as the derivative μ = ∂f(n)/∂n of the free energy density with respect to the particle number density n, where f(n) = −ln Z_N / V at spatial extent N_s, such that n = N/N_s^{d−1}. In the last step of (19) we have already discretized the derivative with respect to n (this form is specific for d = 2, where n = N/N_s).
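In d = 2, where n = N/N_s and the smallest change of the density is Δn = 1/N_s, a natural forward-difference discretization of this derivative (our rendering of the discretized form (19), which is not reproduced in this excerpt) reads:

```latex
% Forward-difference discretization of mu = df/dn for d = 2 (n = N/N_s);
% our sketch of the discretized form referred to as Eq. (19),
% using f = -ln Z_N / V with V = N_s N_t.
\mu(N) \;\simeq\; N_s\left[\,f\!\left(\tfrac{N+1}{N_s}\right)-f\!\left(\tfrac{N}{N_s}\right)\right]
\;=\; \frac{1}{N_t}\,\ln\frac{Z_N}{Z_{N+1}} .
```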
The free energy density f(n) = f(N/N_s^{d−1}) cannot be computed directly in a Monte Carlo simulation, but may be determined by integrating suitable observables. Here we use as observable the expectation value ⟨|φ|⁴⟩, i.e., the derivative of f(n) with respect to the coupling λ. Integrating this differential equation we find the representation (20).
Figure 2: The relation between the particle number density n and the chemical potential μ (lhs. plot) and the field expectation value ⟨|φ|²⟩ versus μ (rhs.). We compare the grand canonical results to those determined from the canonical simulations. For the latter μ is an observable and thus we have horizontal error bars. The data are for d = 2 dimensions with λ = 1.0, η = 4.01, N_s = 10 and N_t = 100.
As an integration constant, f(n)|_{λ=0} appears, i.e., the free energy at vanishing coupling, which can be evaluated by Fourier transformation. The integral in (20) has to be done numerically. For that, the expectation value ⟨|φ|⁴⟩|_{λ',N,N_s} has to be determined in canonical simulations at fixed N and N_s for several values λ' of the quartic coupling. We found [18] that the behavior of ⟨|φ|⁴⟩|_{λ',N,N_s} is rather smooth as a function of λ' and that the numerical integral in (20) can be determined accurately.
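Numerically, this integration over the quartic coupling amounts to a simple quadrature of the measured ⟨|φ|⁴⟩ values. A minimal sketch, assuming that the integrand of (20) is just the measured per-site expectation value at coupling λ' with all other parameters held fixed (the exact form of (20) is not reproduced in this excerpt), could look as follows:

```python
import numpy as np

def free_energy_from_lambda_integration(lambdas, phi4_means, f_lambda0):
    """Integrate df/dlambda' = <|phi|^4> from lambda' = 0 up to each tabulated lambda.

    lambdas     : increasing array of coupling values, starting at 0
    phi4_means  : measured <|phi|^4> per site at those couplings (fixed N, N_s)
    f_lambda0   : free-energy density at lambda = 0 (e.g. from Fourier transformation)

    Returns f(lambda) at all tabulated couplings (trapezoidal rule); the
    assumed integrand is a sketch of Eq. (20), not its exact form.
    """
    lambdas = np.asarray(lambdas, dtype=float)
    phi4_means = np.asarray(phi4_means, dtype=float)
    # cumulative trapezoidal integral of <|phi|^4> d lambda'
    increments = 0.5 * (phi4_means[1:] + phi4_means[:-1]) * np.diff(lambdas)
    return f_lambda0 + np.concatenate(([0.0], np.cumsum(increments)))

# Example with made-up numbers (illustration only)
lam = np.linspace(0.0, 1.0, 11)
phi4 = 0.3 - 0.1 * lam            # placeholder for measured values
print(free_energy_from_lambda_integration(lam, phi4, f_lambda0=-0.5))
```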
In the lhs. plot of Fig. 2 we show with blue circles our results for the relation between n and μ from the canonical determination based on (19) and (20). Note that we plot μ, which in the canonical ensemble is an observable, on the horizontal axis, such that the canonical data points have horizontal error bars. The canonical data in both plots of Fig. 2 are from a simulation of the 2-dimensional canonical ensemble in the worldline form. The update was done with the local strategy of updating flux around plaquettes and spatially winding loops, using a statistics of 10^5 configurations separated by 10 combined sweeps for decorrelation and 5 × 10^4 sweeps for equilibration. The couplings and lattice size are λ = 1.0, η = 4.01, with N_s = 10 and N_t = 100.
The canonical relation between n and μ can now be compared to the corresponding grand canonical determination, where one evaluates n as a function of μ using the dual representation (4). These are the data represented by red diamonds in Fig. 2, which we generated with the worm algorithm using a statistics of 4 × 10^5 configurations separated by 10 worms for decorrelation, after 2 × 10^5 worms for equilibration. It is obvious that the two data sets nicely fall on top of each other, and we conclude that with the canonical worldline approach we can reliably determine the relation between n and μ.
Once the relation between n and μ is known we can also convert other observables from the canonical determination as a function of n into their grand canonical form, i.e., express them as a function of the chemical potential. In the rhs. plot of Fig. 2 we illustrate this for the field expectation value ⟨|φ|²⟩. Again we compare the canonical results with a direct determination in the grand canonical ensemble and also here find excellent agreement, such that we conclude that the canonical worldline approach provides an interesting alternative to grand canonical simulations.
Figure 3: Condensation for the 1-particle (lhs. plot) and the 2-particle sectors (rhs.) in the 2-d case. We show the particle number N as a function of the chemical potential μ for η = 2.6, λ = 1.0, N_s = 16 and a temporal extent of N_t = 400. The symbols represent the numerical data from the worldline simulation and the curves the fit with the logistic function as discussed in the text.
We have shown for the example of the 2-dimensional scalar φ⁴ field that the canonical worldline approach is a possible alternative for Monte Carlo simulations at finite density. However, for the bosonic example used here for developing and testing the method, the possible advantages of the canonical approach cannot be fully appreciated. We expect that for fermionic systems the canonical approach is considerably more interesting, because there the Grassmann nature of the matter fields leads to worldlines with more rigid constraints: each site of the lattice has to be run through by a loop, be the endpoint of a dimer, or be occupied by a monomer. This additional constraint makes the system much stiffer in a Monte Carlo simulation, and the insertion of additional winding loops, i.e., of additional charges, is rare. This has, e.g., been observed in the simulations [29][30][31] of the worldline form [32] of the massless Schwinger model. It was found that in the grand canonical worldline simulation in particular the particle number suffers from very long autocorrelation times. A canonical worldline simulation as described here overcomes this problem, since the net winding number of the fermion loops, and thus the particle number, is kept fixed. First tests with simulations of the worldline form [32] have shown that with the canonical worldline approach the autocorrelation problems are much milder.
Particle condensation and scattering data
Let us now come to the third topic which we here study using the worldline representation of the charged φ 4 field: The connection between scattering data of a theory and the condensation threshold for the two-particle sector in the grand canonical ensemble at low temperature.
The existence of a real and positive worldline representation allows one to study the condensation of particles at very low temperature. To illustrate the condensation phenomena one can observe in such a study, in Fig. 3 we show the expectation value of the net particle number N as a function of the chemical potential μ for the 2-d case. The grand canonical worldline data for 2-d used in this section were generated with the worm algorithm with 4 × 10^5 configurations separated by 10 worms for decorrelation, after 2 × 10^5 worms for equilibration. The chosen coupling parameters are η = 2.6, λ = 1.0, and the temporal extent is N_t = 400. The worldline data we use in this section for the 4-dimensional case are based on 10^5 to 2 × 10^5 configurations generated at η = 7.44, λ = 1.0, using temporal extents of N_t = 320 and N_t = 640.
In the lhs. plot of Fig. 3 we show the results for N in a range of μ where the net particle number rapidly increases from N = 0 to N = 1 at a critical chemical potential μ_c^{(1)} ≈ 0.26. In the rhs. plot we show the second step from N = 1 to N = 2 at a critical chemical potential μ_c^{(2)} ≈ 0.32. Although with N_t = 400 we already work at a relatively low temperature of T = 1/400 in lattice units, we still observe rounding from temperature effects near the thresholds μ_c^{(1)} and μ_c^{(2)}. Of course one can reduce the rounding by increasing N_t further, which on the other hand drives up the numerical cost. In order to determine the critical values μ_c^{(i)} also at non-zero T, we fit the function N(μ) in the vicinity of the condensation steps with a logistic function N(μ). As the plots in Fig. 3 show, these fits describe the data near the two condensation thresholds i = 1, 2 very well, and the corresponding critical values μ_c^{(i)} are obtained as one of the two fit parameters. The position μ_c^{(1)} of the first condensation threshold coincides with the renormalized physical mass of the system, i.e., μ_c^{(1)} = m_phys. However, also the second threshold can be related to a specific energy value: in [33] it was noted that in the limit of vanishing temperature the critical value μ_c^{(2)} coincides with W − m_phys, where W is the 2-particle energy. The relations that connect the condensation thresholds to physical quantities thus read as given in [33]. The results for μ_c^{(1)} and μ_c^{(2)}, and thus for m_phys and W, depend on the spatial extent N_s, as can be seen from the lhs. plot of Fig. 4 for the 2-d case and the lhs. plot of Fig. 5 for 4-d. In both plots we show the results for μ_c^{(1)}(N_s) and μ_c^{(2)}(N_s) as determined from the corresponding condensation thresholds. The physical mechanisms behind the finite volume effects of m_phys and W were discussed in two papers by Lüscher [34,35], where also the finite volume scaling formulas for m_phys(N_s) and W(N_s) are given. In particular the second paper [35] analyzes the 2-particle energy W and relates it to scattering data: from the finite volume scaling of W(N_s) one can extract the scattering length a_0, and for the 2-dimensional case even the full scattering phase shift δ(k) (see also [36]).
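The threshold determination just described can be sketched numerically as follows. Since the exact fit function is not reproduced in this excerpt, the sketch assumes a generic four-parameter logistic step N(μ) = N_low + (N_high − N_low)/(1 + exp(−(μ − μ_c)/Δ)); μ_c is then one of the fit parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_step(mu, mu_c, width, n_low, n_high):
    """Generic logistic step between particle-number plateaus n_low and n_high
    (assumed parametrization; the paper's exact fit function is not shown here)."""
    return n_low + (n_high - n_low) / (1.0 + np.exp(-(mu - mu_c) / width))

def fit_threshold(mu_values, n_values, mu_c_guess):
    """Fit N(mu) near a condensation step and return the critical mu_c with its error."""
    p0 = [mu_c_guess, 0.01, np.min(n_values), np.max(n_values)]
    popt, pcov = curve_fit(logistic_step, mu_values, n_values, p0=p0)
    return popt[0], np.sqrt(pcov[0, 0])

# Example: synthetic data around the first step near mu_c ~ 0.26
mu = np.linspace(0.20, 0.32, 25)
N = logistic_step(mu, 0.26, 0.005, 0.0, 1.0) + 0.01 * np.random.randn(mu.size)
print(fit_threshold(mu, N, mu_c_guess=0.26))
```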
Before we start with the determination of scattering data from the 2-particle energies, we discuss an important cross-check of the results for W obtained from the condensation thresholds. The 2-particle energies may also be computed from 4-point functions in the conventional formulation: denoting by φ(t) the sum of the φ_x over the time slice at t, i.e., the zero-momentum projected field, we can obtain W from the exponential decay of the 4-point function (22). Certainly one may improve the determination of W by considering a full correlation matrix of 4-point correlators, but we found that the simple correlator (22) entirely dominates the signal. We implemented the determination of W based on (22) for both the 2-d and the 4-d cases for the same values of N_s as studied for the condensation thresholds. The corresponding data points are shown as squares in the rhs. plots of Fig. 4 and Fig. 5. We find excellent agreement of the results from the two determinations and conclude that our interpretation of the 2-particle threshold as μ_c^{(2)} = W − μ_c^{(1)} applies, and that our determination of the critical values μ_c^{(i)} at non-zero temperature from the fit with the logistic function is reliable. The 2-dimensional conventional data for this comparison were computed from 10^6 configurations separated by 10 sweeps of local Monte Carlo updates for decorrelation, after 10^4 sweeps for equilibration, using temporal lattice extents of N_t = 32 and N_t = 64. In 4 dimensions we used 5 × 10^5 configurations and N_t = 48 and N_t = 64. Having confirmed the relation of the second condensation threshold to the 2-particle energy with the comparison to the results from 4-point functions in the conventional approach, let us now come to the determination of the scattering data. We begin the analysis with the 2-dimensional case (compare [36]). There the 2-particle energy W is related to the relative momentum k of the two particles via Eq. (23). This equation may be inverted to determine the momentum k. Due to the finite volume the momentum k is subject to the quantization condition (24), which also contains the phase shift δ(k) for that momentum. Thus, by combining (23) and (24), one may extract the scattering phase shift δ(k) from the 2-particle energies W(N_s). Varying N_s allows one to access different values of the relative momentum k, such that δ(k) can be determined for a whole range of momenta. In Fig. 6 we show the results of this analysis for the 2-dimensional case (compare also the results for the 2-d Ising model [37,38], which corresponds to a particular limit of the φ⁴ system studied here). We plot the results for δ(k) as a function of k and again compare the data based on the condensation thresholds in the worldline simulation to the results from analyzing 4-point functions in the conventional approach. We find very good agreement of the two data sets, and point out that the results from the worldline simulation scatter less and have smaller error bars.
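Equations (23) and (24) are not reproduced in this excerpt; the short sketch below therefore assumes the continuum-like dispersion W = 2√(m_phys² + k²) and the quantization condition k N_s + 2δ(k) = 2πn for two identical particles in a periodic spatial box, which is the standard structure of such relations but may differ from the paper's lattice forms.

```python
import numpy as np

def phase_shift_2d(W, m_phys, N_s, n=0):
    """Extract the relative momentum k and phase shift delta(k) from the
    2-particle energy W in the 2-d model.

    Assumed relations (stand-ins for Eqs. (23)-(24), not shown in this
    excerpt): W = 2*sqrt(m^2 + k^2) and k*N_s + 2*delta = 2*pi*n.
    """
    k = np.sqrt((0.5 * W) ** 2 - m_phys ** 2)
    delta = 0.5 * (2.0 * np.pi * n - k * N_s)
    # map the phase shift into [-pi/2, pi/2) for convenience
    delta = (delta + 0.5 * np.pi) % np.pi - 0.5 * np.pi
    return k, delta

# Example with illustrative numbers only
print(phase_shift_2d(W=0.58, m_phys=0.26, N_s=16))
```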
We conclude the discussion of the relation between the 2-particle condensation threshold and scattering data with the analysis of the 4-dimensional case. There the 2-particle energy can be expanded in a power series in a_0/N_s [35], Eq. (25). Here m_0 is the physical mass in lattice units in the infinite volume limit, i.e., m_0 = lim_{N_s→∞} μ_c^{(1)}(N_s), and a_0 is the scattering length in lattice units. The coefficients c_1 and c_2 were determined in [35] with values c_1 = −2.837297 and c_2 = 6.375183, and c_3 we use as an additional fit parameter that allows us to use our data for W(N_s) also at relatively small N_s. The physical mass in the infinite volume limit m_0 was determined from fitting the data for μ_c^{(1)}(N_s) with the 2-parameter form μ_c^{(1)}(N_s) = m_0 + c N_s^{−3/2} e^{−m_0 N_s} [39]. For the parameters we use (λ = 1.0, η = 7.44), this gives a value of m_0 = 0.168(1). Using this as input in (25) reduces the fit of W(N_s) to a 2-parameter fit with parameters a_0 and c_3, which we applied to the worldline data for W(N_s) in the range from N_s = 5 to N_s = 10. This fit is shown as a dashed line in the rhs. plot of Fig. 5.
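The finite-volume fit for the 4-d case can then be set up along the following lines. Since Eq. (25) is not reproduced in this excerpt, the sketch assumes the conventional structure of Lüscher's expansion, W(N_s) ≈ 2m_0 − (4π a_0)/(m_0 N_s³)[1 + c_1 a_0/N_s + c_2 (a_0/N_s)² + c_3 (a_0/N_s)³], with c_1 and c_2 as quoted above; the overall sign convention for a_0 should be checked against [35].

```python
import numpy as np
from scipy.optimize import curve_fit

C1, C2 = -2.837297, 6.375183          # coefficients quoted from [35]

def luescher_W(N_s, a0, c3, m0=0.168):
    """Assumed form of Luescher's finite-volume expansion for the 2-particle
    energy W(N_s); a stand-in for Eq. (25), sign convention to be checked."""
    x = a0 / N_s
    return 2.0 * m0 - (4.0 * np.pi * a0) / (m0 * N_s ** 3) * (
        1.0 + C1 * x + C2 * x ** 2 + c3 * x ** 3)

def fit_scattering_length(N_s_values, W_values, m0=0.168):
    """Two-parameter fit (a0, c3) of W(N_s) with m0 fixed from the 1-particle data."""
    model = lambda N_s, a0, c3: luescher_W(N_s, a0, c3, m0=m0)
    popt, pcov = curve_fit(model, np.asarray(N_s_values, float),
                           np.asarray(W_values, float), p0=[-0.1, 0.0])
    return popt, np.sqrt(np.diag(pcov))

# Usage sketch (W_values would be the measured 2-particle energies):
# popt, perr = fit_scattering_length([5, 6, 7, 8, 9, 10], W_values)
```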
Our final results for the physical mass m_0 in lattice units, determined from fitting the data for μ_c^{(1)}(N_s), and for the scattering length a_0 in lattice units, determined from (25), are given in (26). The error we quote for m_0 is the statistical error, while for a_0 we tried to estimate the systematic error by adding an additional term for higher corrections in (25) and also using m_0 as an additional independent fit parameter (this gives a value of m_0 = 0.169, which is consistent with the independent determination from μ_c^{(1)} quoted in (26)). We stress that these results are preliminary and that we are currently improving the analysis and generating data for a second lattice spacing.
To summarize: following the analysis in [33] we have shown that the second condensation threshold μ_c^{(2)} at low temperatures is related to the 2-particle energy W. We cross-checked this relation for both the 2- and the 4-dimensional cases with a determination of W from 4-point functions computed in a simulation in the conventional representation of the φ⁴ lattice field theory. In a subsequent analysis of the finite volume behavior using Lüscher's formulas, we determined the scattering phase shift δ(k) for the 2-dimensional case and the scattering length a_0 in 4 dimensions.
Also here the findings discussed for the φ 4 lattice field theory clearly go beyond that simple model. A particularly simple and interesting test case would be QCD with two degenerate quark flavors. In this case the introduction of an isospin chemical potential µ I does not lead to a complex action problem and µ I could be used to condense pions. The second condensation threshold could then be analyzed along the lines discussed here to extract parameters for pion scattering.
Summary
In this contribution we have presented three developments for Monte Carlo simulations with worldlines, using the charged φ 4 field as the lattice field theory to illustrate these ideas. The three developments are: 1) New algorithmic ideas for worm updates. 2) The possibility and perspectives for canonical worldline simulations. 3) The relation between condensation at low temperature and scattering data.
Concerning the new strategies for worm algorithms, the key issue is to find efficient algorithms for worldline models which, in addition to the usual weights on the links, also have weights on the sites of the lattice. For the φ⁴ case these weights come from the radial degrees of freedom, but in other models they may also originate from, e.g., Haar measure contributions. The site weights couple the flux variables on all links attached to a site and imply that the Metropolis ratios for the starting or the closing step of the worms can be very small. We here discuss two possibilities for eliminating or mitigating this problem: the introduction of an amplitude factor that controls the starting and closing probabilities to optimize the length of the worms, and, as a second technique, an even-odd staggered worm, where one chooses different acceptance probabilities for even-to-odd and odd-to-even steps.
The second issue we address is the use of the worldline formulation for canonical simulations. Since in the worldline representation the net-particle number is the temporal winding number ω of the worldlines, we can use an update that does not change ω and in this way simulate the canonical ensemble. We explore the feasibility of such a simulation by cross-checking also with grand canonical results and establish that canonical worldline simulations are an interesting alternative for finite density simulations.
Finally, we explore the relation of the critical chemical potential values μ_c^{(1)} and μ_c^{(2)} for the onset of 1- and 2-particle condensation at low temperatures to scattering data. The 2-particle energy W is given by the sum W = μ_c^{(1)} + μ_c^{(2)}, and using grand canonical worldline simulations we determine the critical chemical potentials as a function of the spatial lattice extent N_s and thus obtain W(N_s). Fitting W(N_s) with Lüscher's formula we obtain the scattering phase shift for the 2-d case and the scattering length in 4 dimensions. Our results are cross-checked with a determination of W(N_s) from 4-point functions in the conventional formulation, and we thus establish that the low-temperature condensation thresholds are related to scattering data.
We conclude by stressing that all three topics discussed here go beyond the simple φ⁴ theory we have used as the physical system for our presentation. The further development of the methods and concepts discussed in this contribution is currently also pursued in the context of other lattice field theories. | 10,323.6 | 2017-11-07T00:00:00.000 | [
"Physics"
] |
Integrated Farm by Making "POC-FISH" as an Economic Alternative for Coastal Communities
Coastal communities play an important role in marine and fisheries development, as well as in forming the culture of coastal areas. The socio-economic life of coastal communities in Kolakaasi Sub-District of Kolaka District is far from prosperous: according to data from Badan Pusat Statistik of Kolaka (2015), the number of poor residents in Kolaka reached 27,210, a percentage of 14.68%. Partners in this IbM activity are teenagers who have dropped out of school and groups of housewives living in coastal areas. The problem addressed by the IbM activity is the large number of teenagers dropping out of school in the partner environment due to the low level of welfare of coastal communities, so that the highest education level reached is, on average, junior high school; the young women have to help the family economy by working as laborers in the traditional markets of Kolaka or only help their parents at home, while the men work at sea. Fishing is highly dependent on nature, so that when the weather is bad the fishermen cannot earn an income. IbM-Integrated Farm by making "POC-FISH" is the manufacture of a liquid organic fertilizer that combines agricultural activities with fisheries. POC-FISH is made of small fish, commonly called lure fish (teri) by the Kolaka community. This type of fish is abundant in Kolaka and sold cheaply (Rp 5,000/kg). The purpose of this IbM activity was the empowerment of coastal communities through the transfer of science and technology by utilizing local potentials, so that the partners involved can begin to be productive and economically independent by conducting business on a household scale. The method of making POC-FISH is kept simple so that the technology transfer can be easily understood by the partners. The process of transfer of science and technology was carried out with the following pattern: 1) education of the partner groups on the importance of technology adoption by utilizing the potential and local wisdom that can produce a product with higher economic/selling value, 2) POC-FISH making training, 3) mentoring of the partner groups in marketing, and 4) monitoring and evaluation. The planned outputs of this IbM activity are a publication in an ISSN national journal in 2017 and the POC-FISH products.
A. Introduction
The poverty level of coastal communities in Indonesia is still very worrying, even though Indonesia is a country with great natural and marine resources. However, the potential of the sea has not been fully utilized. The villagers of Kolakaasi Sub-district, Kolaka District, Southeast Sulawesi Province, who live in the coastal area, depend on the sea for their livelihood. The socio-economic life of coastal communities in Kolakaasi Sub-district is far from prosperous. According to data obtained from BPS Kolaka, the number of poor residents of Kolaka in 2015 reached 27,210, a percentage of 14.68%.
The great dependence on nature means that coastal communities generally have no other activities. When bad weather comes, coastal fishermen do not go to sea. To fill their leisure time, some prefer to mend their nets; others prefer to repair their boats while they are not at sea. Although fishing income is very high in certain seasons, it is very small in the other seasons. Another characteristic of the coastal community of Kolakaasi Sub-district concerns the activities of women and children. Many teenagers drop out of school due to economic limitations, so that coastal children generally only finish school up to junior high school level (SMP). Meanwhile, male teenagers are often already involved in fishing activities.
Development programs in coastal areas have not contributed significantly to improving the economic condition of coastal communities. The failure of these programs is due to the fact that development projects in the context of coastal communities in Indonesia are not based on community empowerment and the utilization of local potentials. The introduction of various development programs is laden more with a political-bureaucratic approach than with attempts to empower communities and utilize local potential. Yet community empowerment and the utilization of local potential can make a big contribution, particularly in encouraging a social learning process, so that the integration of a project or program mission with the utilization of the local resources, knowledge, abilities and needs of the coastal community can be achieved (Masterson et al. 2000).
The Science and Technology Program for the Community "IbM-Integrated Farm by Making 'POC-FISH' as an Economical Alternative Effort for Coastal Communities" is a partnership program between the academics of Universitas Sembilan Belas November (USN) Kolaka and the community. The target partners of the IbM activity program are a group of drop-out teenagers aged between 15 and 20 years and groups of housewives living in the coastal area.
The IbM-Integrated Farm by making POC-FISH program is a community empowerment program for coastal communities, enabling them to have an alternative business by utilizing the local potential of lure fish (teri), which has a low selling value, and turning it into a product with a higher selling or economic value. The ultimate goal of this IbM program is to help improve the welfare of the people in the coastal communities of Kolaka District.
B. Methodology
POC-FISH is chosen as the solution to the partners' problem for several reasons. The fish used as the main raw material of the organic fertilizer is lure fish (teri), a fish species that is very easy to find in Sulawesi and is sold at a cheaper price than other fish species (Rp 5,000/kg). The utilization of lure fish in the community is still limited to processing into dried fish or use as live bait, so the availability of raw material is not a constraint on increasing the amount of production at any time. A figure of lure fish can be seen in Figure 1. The approaches offered to solve the partners' problems in the IbM program are:
a. Determination of partners. The first partner of IbM-Integrated Farm by making "POC-FISH" as an Economical Alternative Effort for Coastal Communities is a group of school drop-out teenagers, aged 15-20 years and living in the coastal area, who did not complete their secondary education. The second partner in this program is a group of housewives who have no permanent job and whose husbands are capture fishermen, so that the family economy depends fully on the catch at sea. These two partner groups were chosen because they are the groups that most need coaching, mentoring, and training in the creative economy by utilizing the abundant natural potential, so that in the future they can be economically independent.
b. Persuasive approach. The first step in the implementation of this IbM program is to conduct a persuasive approach to the two partner groups, who are not open to external innovation affecting their customs; coastal communities are known as societies that do not adapt easily to environments outside their own community. The persuasive approach aims to change the partners' mindset: by applying science and technology together with local wisdom and potential, they can produce a simple product with higher economic value instead of selling raw materials without any diversification or innovation. In the persuasive approach, the partner education stage is conducted through meetings and socialization forums.
c. Active approach. The active approach is carried out through field practice in making POC-FISH fertilizer: preparation of tools and materials, implementation of the activities, and packaging.
d. Marketing. POC-FISH product marketing is done by mapping the target market. The main target market is the local market; another target market is the online market, to reach the national market. At this stage, the partners are introduced to online marketing media, with training on creating and using social media (Facebook, WhatsApp, BBM) as marketing tools. The design of the POC-FISH product packaging will be accompanied by the proposing team. POC-FISH products will be packaged in ½ and/or 1 L bottles.
C. Result and Discussion
Meetings and socialization were held between USN academics and both partners of the IbM program. In this forum, the purpose of the activity was presented, together with the partner education stage on the added economic value a product can gain through innovation and diversification, which is expected to become an alternative economic effort for the improvement of their welfare.
After the socialization stage, the activity continued with the POC-FISH fertilizer preparation stage: preparation of tools and materials, making POC-FISH, and packaging. The materials used in making POC-FISH are 10 kg of lure fish, 10 liters of water, 1 kg of tomatoes, and 1 kg of melted Javanese/aren sugar. The equipment required is a pot with a 10-liter capacity, four 5-liter drums, and one pH meter. How to make: (1) wash the 10 kg of lure fish, boil until half cooked, lift and then cool; press the fish and put it back together with the cooking water; (2) once really cold, filter with a soft cloth and measure the pH; neutralize the pH by adding filtered grated tomatoes until the solution becomes neutral (pH 7); (3) add the 1 kg of liquid sugar to the solution and stir until the sugar is dissolved; prepare drums that have been washed; (4) place the solution into the drum and close it tightly; (5) store in a shady and cool place protected from sunlight and rain; (6) leave for 12-15 days; check the drum, and if there is a bubble, immediately open the lid so the gas can escape and then close it tightly again; if the process runs well, the solution will have a typical fresh natural scent, not fishy or foul; (7) the multi-purpose liquid organic fertilizer made from lure fish (POC-FISH) is ready for use; (8) POC-FISH products are packed in ½ and/or 1 L bottles. POC-FISH is presented in Figure 2. | 2,294.6 | 2017-11-30T00:00:00.000 | [
"Environmental Science",
"Agricultural and Food Sciences",
"Economics"
] |
Martensite’s Logistic Paradigm
This work introduces a deterministic approach to the martensite transformation curve. Martensite is a nucleation-controlled transformation that has two characteristics: autocatalysis and auto-accommodation. Only a small number of martensite units initially form owing to primary nucleation. These new units may cause the transformation of other units by autocatalysis. We call this kind of transformation chained autocatalysis. Moreover, as the transformation progresses, the auto-accommodation influences the arrangement of new units. This work assumes that the transformation-saturation relates to the exhaustion of the chained autocatalysis, which underlines the microstructure. To compare, we considered the KJMA’s extended-transformation concept that implies assuming exhaustion by impingement. Data from isothermal martensite transformations and anisothermal martensite transformations are used to validate the model. Those data comprised different grain sizes and carbon contents. The model is based upon Verhulst’s logistic concept. We propose that the model’s high fitting-capability stems from its deterministic aspect combined with martensite’s self-similarity. Additionally, we suggest that chained autocatalysis controls the rate of martensite transformation. Therefore, the relaxation of transformation strains by plasticity assisted by mutual accommodation determines the transformation’s martensite volume in the absence of post-propagation coarsening/coalescence.
Introduction
The transformation curve, that is, the volume fraction transformed, V_V, against time, t, is a tool in research, process development and industrial operations. Modeling the transformation curve is an issue that has been studied for decades. In the late thirties and early forties of the last century, Kolmogorov, Johnson-Mehl, and Avrami 1-5 , KJMA, published seminal papers on this subject. KJMA used a geometrical model to obtain transformation curves. KJMA supposed that the growing regions were spherical, that their growth rate was a constant, that the nuclei were uniformly randomly located in space, and that the nucleation took place in two ways: site-saturation and constant nucleation rate. Their most important contribution was how to consider impingement 6 . As Liu et al. 7 put it, "KJMA's model consists of nucleation, growth, and impingement." The KJMA model was generalized in different directions. One direction was to obtain more KJMA-like expressions using mathematically exact methods when nucleation and growth took place in a way distinct from KJMA's. Recently, Rios and Villa 8 used mathematical methods for this purpose. The disadvantage of such an approach is that a limited number of situations admit an exact expression. Another possibility, suggested by Avrami, is to employ the well-known "Avrami equation," an expression containing two adjustable parameters, k and n, of the form V_V = 1 − exp(−k t^n). Focusing on transformations in steels, the present authors have proposed an alternative to Avrami's equation [9][10][11], Equation 2. In that equation, x is an "advance" variable, which in previous works [9][10][11] was equal to temperature, magnetic field, mechanical deformation, or time. The x_i is the first datum in a dataset. V_Vi is the integration constant resulting from the process of obtaining Equation 2. This constant is a small volume fraction transformed when the martensite transformation starts. In this work, one uses V_Vi as a fitting parameter. x^* and K_φ are also fitting parameters. Throughout the text, one discusses the meaning of these parameters. This equation showed excellent agreement when fitted to transformations ranging from martensite to pearlite 10,11 . However, another way to approach formal kinetics is possible. Abramov's idea 12 of using Verhulst's logistic equation 13 as the basis to describe transformation kinetics represents a significant shift of paradigm. For the derivation of the transformation curve, the kinetic ideas are expressed directly by mathematics instead of being mediated by the transformation's geometry, as KJMA did. The purpose of using such an approach for modeling transformations is not new. In 1938, Austin and Rickett 14 took the logistic equation as their starting point to obtain the so-called "Austin-Rickett equation," V_V/(1 − V_V) = k t^n. In 1938 not all of KJMA's papers had yet been published. The so-called Austin-Rickett equation is seldom applied today, superseded by KJMA's developments.
The description/rationalization of the fundamental aspects of martensite transformations [15][16][17][18][19][20] constructed the present understanding of martensite: that is, martensite is a diffusionless and nucleation-controlled transformation. This understanding has provided a particular venue for developing steels with optimized characteristics to suit engineering demands. Martensite bears a lattice correspondence with the austenite matrix. It also possesses a notable shape change whose relaxation influences the geometric aspects of its constricted microstructure. Moreover, martensite units do not coarsen or coalesce after propagation.
Consequently, the austenite grains confine the transformation because impingement on high-angle boundaries disassembles the reaction mechanism. However, martensite impingement on the grain-boundaries raises a stress field that can stimulate further intragrain and intergrain transformations to optimize transformation strains' accommodation. Thus, the first units can induce the formation of other units through autocatalysis. We call this kind of transformation chained autocatalysis. Chained autocatalysis occurs after the initial heterogeneous nucleation events in a scarce number of randomly scattered austenite grains 20 .
Verhulst's Logistic Equation
The nucleation-controlled aspect of the martensite transformation is compatible with the original Verhulst equation 13 . Verhulst analyzed the sustainability of population growth based on his "logistic equation," dN(t)/dt = r N(t) [1 − N(t)/N_MAX] (Equation 4), where N(t) means the population, t means the time, r stands for the population-intrinsic growth rate, and N_MAX stands for the maximum population that can be maintained by the available resources. Thus, Equation 4 is consistent with martensite's autocatalytic kinetics. Equation 4 also agrees with the view that the transformation process may be studied in terms of propagation events, since the transformation is nucleation-controlled and the martensite units do not grow/coalesce after propagation. Furthermore, Equation 4 implies that the transformation saturation is determined by nucleation exhaustion instead of the matrix's volumetric exhaustion. Indeed, experimental results show that saturation may occur for a volume fraction transformed V_V << 1 21,22 . Therefore, assuming that post-incubation autocatalysis controls the transformation, we substituted the rate factor by φ(Δ), a time-independent transformation-intrinsic factor referred to the external process variable Δ, e.g., driving force, temperature, or an applied field. This substitution is equivalent to admitting the pertinence of self-similarity 23 . Besides, both the morphology and the auto-accommodation of the martensite units suggest self-similarity; see Figure 1 in ref 24 . Thus, we recast Equation 4 to describe the martensite transformation curves, where x is the experimental "advance" variable, the subscript "V" indicates per unit volume of material, and an incubation delay enters as a parameter. Since we cannot calculate the incubation delay or φ(Δ) from first principles, they are treated here as fitting parameters.
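To see the kinetic content of Equation 4 concretely, the sketch below integrates Verhulst's logistic equation numerically and checks it against its closed-form sigmoid solution; the values of r, N_MAX and N(0) are arbitrary illustration choices, not parameters from this work.

```python
# Minimal sketch of Verhulst's logistic equation dN/dt = r*N*(1 - N/N_max).
# Parameter values are arbitrary illustration choices.
import numpy as np
from scipy.integrate import solve_ivp

r, N_max, N0 = 1.5, 1.0, 0.01      # intrinsic rate, carrying capacity, initial population

def logistic_rhs(t, N):
    return r * N * (1.0 - N / N_max)

t_eval = np.linspace(0.0, 10.0, 200)
numerical = solve_ivp(logistic_rhs, (0.0, 10.0), [N0], t_eval=t_eval).y[0]

# Closed-form solution of the same equation, for comparison.
analytic = N_max / (1.0 + (N_max / N0 - 1.0) * np.exp(-r * t_eval))

print("max |numerical - analytic| =", np.max(np.abs(numerical - analytic)))
```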
Then, acknowledging that transformation curves are usually described in terms of the fraction transformed rather than the number of units, we introduce ξ̄, the mean volume of the martensite units, to relate the two. Introducing these relationships into Equation 5 brings the influence of the relaxation of the transformation strains, which affects the growth of the martensite units, into the logistic model. The term φ(Δ) refers to this crucial process, but it is not available from first principles. Thus, we considered two approximations: the invariance of the mean martensite unit volume proposed by Magee 25 , and KJMA's approach, which assumes transformation in extended space [1][2][3][4][5] . In the first case the mean unit volume is constant, so that the integration of Equation 6 yields a formal analog of the "Austin-Rickett equation," where ξ_i refers to the value of x at the beginning of the transformation detected in the experimental dataset. We assume a negligible initial transformed fraction, which is reasonable in the absence of an initial transformation burst. To use KJMA's impingement correction, we set the extended-space relation, which is analogous to Equation 2. Summing up, we have obtained two logistic equations in which autocatalytic nucleation advances the transformation, but the volume fraction transformed depends on the relaxation of the transformation strains. Noteworthy, the parameter φ(Δ) refers to the relaxation of the transformation strains, which influences the growth of the martensite units, whereas the transformation exhaustion depends on the arrangement of the martensite in the austenite grains and the spread of the transformation over the austenite grains 26 .
Experimental Data
As in the previous work, we imported databases from papers found in peer-reviewed scientific journals to validate the proposed equations. To fit the analytical expression to the experimental values, one calculated the sum-of-squares, ΣSQ, between experimental and calculated values of V_V. The ΣSQ gives a "global" idea of the fitting quality. One may also define the relative distance, δ, between the experimental data and the analytical-solution predictions, where V_VE(x) means the volume fraction imported from experimental data and V_VA(x) means the volume fraction predicted by the analytical equations. As already established, experimental procedures may be subject to errors. One can consider a reasonable error of 5% for metallurgical experiments. Regarding the error of 5%, Tables 1-5 show the percentage of the number of points below the 5% error. This number can help to give a quantitative basis for the fitting besides ΣSQ and visual inspection.
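As a minimal sketch of these fitting-quality measures, the snippet below computes ΣSQ, a point-wise relative distance δ, and the percentage of points below the 5% error level; the relative-deviation form used for δ is our reading of the text, and the data arrays are placeholders rather than the imported databases.

```python
# Fitting-quality metrics: sum-of-squares (SSQ), relative distance (delta)
# and the fraction of points within a 5% error band.  The relative-deviation
# form of delta is assumed from the text; the arrays are placeholders.
import numpy as np

V_exp  = np.array([0.05, 0.15, 0.35, 0.60, 0.80, 0.90])   # experimental V_V (placeholder)
V_calc = np.array([0.06, 0.14, 0.37, 0.58, 0.81, 0.91])   # model-predicted V_V (placeholder)

ssq = np.sum((V_exp - V_calc) ** 2)                        # "global" fit quality
delta = np.abs(V_exp - V_calc) / V_exp                     # point-wise relative distance
within_5pct = 100.0 * np.mean(delta < 0.05)                # % of points below 5% error

print(f"SSQ = {ssq:.4e}, points within 5%: {within_5pct:.0f}%")
```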
Isothermal Transformation
For isothermal transformations the advance variable is the time, and Equation 9 becomes the working expressions, where τ is the incubation time, t_i is the first transformation-time datum of the dataset, and T means the temperature. The isothermal-martensite database, Fe-23.2wt%Ni, 2.8wt%Mn, 0.009wt%C, grain intercept 0.048 mm, was initially described in ref 22 . The isothermal-martensite database, Fe-12wt%Cr, 9wt%Ni maraging steel, was presented in ref 27 . Since the imported data did not allow a precise determination of the incubation time, we assumed τ = λ·t_i, and fitted λ until the sum-of-squares, ΣSQ, between experimental and calculated values of V_V became invariant. Figure 1 shows the FeNiMn database as fitted. Figure 2 shows the maraging curves as fitted.
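The incubation-time treatment (τ = λ·t_i, with λ adjusted until ΣSQ stops changing) can be sketched as a simple scan over λ; the sigmoid used as the model function below is only a generic stand-in, since Equations 11 and 12 are not reproduced in this text, and the data points are placeholders.

```python
# Sketch of the lambda-scan used to fix the incubation time tau = lambda * t_i.
# 'model' is a stand-in sigmoid, NOT the paper's Equations 11/12; data are placeholders.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([2.0, 3.0, 5.0, 8.0, 12.0, 20.0, 30.0])       # placeholder times
V = np.array([0.02, 0.05, 0.15, 0.35, 0.60, 0.85, 0.95])   # placeholder V_V data
t_i = t[0]                                                  # first transformation-time datum

def model(t, k, n, tau):
    # generic KJMA-like stand-in curve with an incubation delay tau
    return 1.0 - np.exp(-k * np.clip(t - tau, 0.0, None) ** n)

best = None
for lam in np.linspace(0.1, 0.9, 9):                        # scan lambda
    tau = lam * t_i
    popt, _ = curve_fit(lambda tt, k, n: model(tt, k, n, tau), t, V,
                        p0=(0.05, 1.5), maxfev=10000)
    ssq = np.sum((V - model(t, *popt, tau)) ** 2)
    if best is None or ssq < best[0]:                       # keep the lambda with the lowest SSQ
        best = (ssq, lam, popt)

print("selected lambda =", best[1], "SSQ =", best[0])
```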
The values of ΣSQ point out that Equation 11 performed slightly better than Equation 12. Visual inspection is consistent with the ΣSQ values. That is, both expressions provided a good fit despite their different formal approaches to transformation saturation. The behavior of the parameter δ confirms this.
Concentrating on the FeNiMn alloy (Table 1 and Figures 3 and 4), at the high transformation temperatures φ(T) refers to a thermally activated process. By contrast, at the lower temperatures, 163 K - 77 K, the anti-thermal variation in φ(T) points to mechanical autocatalysis, which feeds back strain energy 28 . Phenomenologically, we propose an activated form (Equation 13), where φ_0 is a proportionality factor, E_a and ΔG_a are apparent energies, T is the reaction temperature and k_B is the Boltzmann constant. The charts in Figure 3 yield E_a ≈ 5 kJ/mol - 13 kJ/mol, which is compatible with dislocation processes, and ΔG_a ≈ 0.9 - 1.3 kJ/mol, which has the same magnitude as the elastic free energy (0.9 kJ/mol) of an oblate spheroid with a typical 0.05 aspect ratio in a constrained matrix 29 . The FeNiMn isothermal martensite undergoes a substructure change at low transformation temperatures 22 . Thus, we propose that the variation in φ(T) refers to changes in the relaxation of transformation strains 30 . The variations in V_Vi corroborate the variation in φ(T). However, the variations in 1/τ show the opposite trends, see Figure 4. Such specific behavior points to differences in the martensite propagation. Martensite propagation at incubation depends on the probability that austenite defects sustain coordinated atomic groups to cross the nucleation path 31,32 . By contrast, the post-incubation nucleation is determined by a previously formed martensite unit (autocatalysis feedback) 28,33 . At high temperature, thermal agitation hampers the stability of atomic groups, creating an entropic barrier for converting such groups into nuclei. Thence the chemical driving force controls the incubation. Instead, at low temperatures (higher driving forces), a thermal barrier controls the martensite incubation/nucleation. In this regard, it is noteworthy that the apparent activation energy obtained from the incubation time, ~ 6 kJ/mol, compares with the ~ 5 kJ/mol obtained from the parameter φ(T), which refers to the accommodation of the shape strain at high transformation temperatures. This comparison suggests that dislocation processes are present in both processes (relaxation of the lattice misfit and of the shape strain). At this time, the analysis of the temperature variation in the parameter V_Vi was not conclusive. Lastly, note that impingement of martensite on the austenite grain boundary generates a stress field capable of fostering martensite propagation into the next grain 26,34 . However, such an "intergrain spread" is hindered if the austenite plasticity halts the radial propagation of a martensite unit 35 . Such a possibility is comparable to "soft impingement."
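The apparent activation energies quoted here can be extracted from the high-temperature branch of φ(T) by a standard Arrhenius construction. The sketch below assumes the thermally activated form φ(T) = φ_0·exp(−E_a/(k_B·T)) and uses placeholder (T, φ) pairs, since the fitted values from Table 1 are not reproduced in this text.

```python
# Arrhenius-type extraction of an apparent activation energy from phi(T).
# Assumes phi(T) = phi0 * exp(-Ea / (kB * T)); the (T, phi) pairs are placeholders.
import numpy as np

kB = 1.380649e-23          # J/K
NA = 6.02214076e23         # 1/mol

T   = np.array([240.0, 220.0, 200.0, 180.0])          # K, placeholder high-temperature branch
phi = np.array([1.0e-3, 6.0e-4, 3.0e-4, 1.2e-4])      # placeholder phi(T) values

# ln(phi) = ln(phi0) - (Ea/kB) * (1/T): a straight line in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(phi), 1)
Ea_per_mol = -slope * kB * NA / 1000.0                 # kJ/mol

print(f"apparent activation energy ~ {Ea_per_mol:.1f} kJ/mol")
```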
Martensite "Athermal" Transformation
To describe the transformation curve of time-independent, driving-force-induced ("athermal") martensite, one replaces time by temperature as the advancing variable in Equations 11 and 12, which gives Equations 14 and 15, where T^* is the upper temperature for martensite nucleation, and T_i is the highest experimental temperature in a data set.
The variables V_Vi, T^* and φ_G are fitting parameters. Note that these quantities appear inverted when compared with the isothermal expressions. The reason for this is that time increases while temperature decreases; thus, the terms are inverted so that the subtractions remain positive.
In FeMnSiMo, the transformation took place in a dilatometer. In addition to allowing the models' validation, the database permits characterizing the influence of the austenite grain size on the transformation curve. Bearing in mind the scatter in T^*, we expressed T^* = λ·T_i, and fitted λ until the sum-of-squares, ΣSQ, between experimental and calculated values of V_V became invariant; see Figure 5. Table 3 lists the values of the obtained parameters of Equations 14 and 15. Inspection of the values of ΣSQ indicates that Equation 14 provides the best fittings, with a minor variation in ΣSQ. By contrast, the values of ΣSQ that characterize the fittings with Equation 15 increase with increasing austenite grain size.
A similar procedure was used to fit the Fe-18wt%Cr, 8wt%Ni data. The results are shown in Figure 6 and Table 4. The fit is excellent.
Concentrating on the FeMnSiMo (Figures 7 and 8), the observed differences are due to the severe effect of crystallographic variance in fine-grain austenite. This crystallographic variance affects the microstructure's local randomicity, which is a requirement for utilizing KJMA's methodology [1][2][3][4][5] . Again, the behavior of the parameter δ indicated a better agreement between Equations 14 and 15, which assume exhaustion by nucleation and by impingement, respectively.
Lastly, we consider the influence of the carbon in the martensite, transformed by continuous cooling. Typical plain carbon-steels with similar austenite grain-sizes were considered: Fe46C (0.46wt%C, 0.71wt%Mn, 0.26wt%Si, 0.1wt%Ni, 0.2wt%Cr), Fe66C (0.66wt%C, 0.69wt%Mn, 0.30wt%Si, 0.1wt%Ni, 0.2wt%Cr), and Fe80C (0.80wt%C, 0.61wt%Mn, 0.41wt%Si, 0.2wt%Ni, 0.3wt%Cr). These databases were imported from ref 40 . The fittings with Equations 14 and 15 are shown in Figure 9, and the values of the respective model parameters are listed in Table 5. Again, visual inspection of the charts and the variations in ΣSQ indicate that Equation 14 provided better fittings, especially concerning the transformation charts' progressive induction. These fittings were consistent with the behavior of the parameter δ.
We ascribe the variation in T^* to the influence of the carbon on the austenite stability. The variation in φ_G is related to the influence of carbon content on the transformation microstructure, since increasing carbon enhances the partitioning of the austenite grains into finer packets and blocks 41 .
Like the isothermal transformation curves analyzed above, the different modes of considering the transformation saturation provided proper fittings of the data. Nonetheless, the values of the fitting parameters are model-dependent, as might be expected.
Discussion
The classical Verhulst's logistic equation, Equation (4), proposed to describe constrained population growth, provides a venue to express transformation curves 12 . Notably, the variations in the parameters obtained from the FeMnSiMo database, typical of martensite transformation by cooling ("athermal"), exhibit similarity and are in qualitative agreement with the results reported in the referenced paper 36 . Thence, the experimentalists may choose the more appropriate expression to analyze their data and describe the transformation under consideration 12 . Nonetheless, the meaning of the physical parameters obtained from formal models depends on the models' premises. We assert that autocatalysis and transformation saturation by nucleation exhaustion are realistic premises to model martensite transformation curves as provided by Equations 11 and 14. It is worth discussing the fitting parameters displayed in Tables 1-5.
First, we would like to offer some background on the use of phenomenological equations and fitting parameters to describe a specific kinetic curve; in the present case, fitting a V_V(x) curve. The first possible approach to describe experimental measurements by an analytical expression is to employ an arbitrary function to fit the experimental curve. This fit may be useful if one has an analytical theory that takes a continuous function as its input, for example, Ref 42 . At another extreme, one may fit an expression derived from fundamental theories. Generally, these are not easy to come by. An intermediary approach is to use so-called formal kinetics. These provide exact expressions when one specifies the nucleation and growth rates. The pioneering work is, of course, KJMA theory [1][2][3][4][5] . From the equations employed here, one expects: I) that they give a good fit; II) that we can extract some information from the fitting parameters. Notice that the functional form is different for Equations 14 and 15. Therefore it comes as no surprise that the absolute values of the fitting parameters differ. Nonetheless, Tables 6, 7, and 8 show that they do not differ by the same magnitude. In the case of Table 6, the differences between the fitting parameters were calculated as follows: (Equation 12 parameter − Equation 11 parameter)/(mean value of Equation 11 and Equation 12 parameters). The same reasoning was adopted for Tables 7 and 8, but with Equations 11 and 12 replaced by Equations 14 and 15.
The parameters that mark the beginning of the transformation, such as the initial transformation temperature and the incubation time, are physical parameters. Tables 7 and 8 demonstrate that the values of T^* lie quite close when Equations 14 and 15 determine them. Table 6 shows that the values of τ obtained from Equations 11 and 12 behave similarly, but with an apparent discrepancy at the highest and lowest temperatures.
The absolute values of the other parameters have a significantly higher difference. This behavior is unavoidable, as the functional forms are different. This result suggests that the functional form influences parameters such as V_Vi and φ (Table 6: difference between the fitting parameters shown in Table 1). One cannot expect Equations 11, 12, 14 and 15 to be more than they are. They are equations with a physical or mathematical background, but they are still approximations. And it is well known that fitting parameters carry the error made by assuming a certain approximation. But, as shown above, the parameters are not influenced in the same way.
Here, parameters that have a direct physical interpretation tend to be almost independent of the fitting expression. By contrast, parameters that are more directly related to the functional form of the fitting expressions tend to have more considerable differences.
Conclusions
1. The utilization of the logistic formalism to describe isothermal and continuous cooling martensite transformations yielded quality fittings of experimental data. These quality fittings are consistent with current views regarding martensite's nucleation-controlled, autocatalytic kinetics, and self-similarity.
2. The apparent activation energies obtained from Equation 13, 5 kJ/mol - 13 kJ/mol, compare with the activation energies for martensite nucleation reported in refs 46,47 . Therefore, one may suggest that there are two kinds of active dislocation processes. One dislocation process acts in the conversion of coordinated atomic groups into nuclei. The other, intrinsically different dislocation process relates to the relaxation of the martensite shape strain 30 .
3. The incorporation of self-similarity into Verhulst's logistic formalism allowed good descriptions of the martensite transformation curves as well as characterizations of kinetic aspects of isothermal or "athermal" transformations. | 4,540.2 | 2021-01-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Controlling Chaos in Permanent Magnet Synchronous Motor Control System via Fuzzy Guaranteed Cost Controller
Introduction
The permanent magnet synchronous motor (PMSM) plays an important role in industrial applications due to its simple structure, high efficiency, high power density, and low maintenance cost 1-3 . However, the dynamic characteristics and stability analysis of the PMSM have emerged as a new and attractive research field, covering bifurcation, chaos, and limit-cycle dynamic behaviors 4-9 . Moreover, many profound theories and methodologies [10][11][12][13][14][15][16] have been developed to deal with this issue. In 4 , the adaptive dynamic surface control (DSC) of the PMSM was presented. In 5, 6 , the authors derived feedback control design methods for the stability of the PMSM. Several control methods have been studied to stabilize PMSM systems, such as sliding mode control (SMC) 7 , the differential geometry method 8 , passivity control 9, 10 , sensorless control 11-14 , Lyapunov exponents (LEs) placement 15 , and fuzzy control 16 .
The Takagi-Sugeno (T-S) fuzzy concept was introduced by the pioneering work of Takagi and Sugeno 17 and has been successfully and effectively used in complex nonlinear systems 18 . The main feature of the T-S fuzzy model is that a nonlinear system can be approximated by a set of T-S linear models. The overall fuzzy model of a complex nonlinear system is achieved by fuzzy "blending" of the set of T-S linear models. Therefore, the controller design and the stability analysis of nonlinear systems can be carried out via T-S fuzzy models and the so-called parallel distributed compensation (PDC) scheme 19, 20 . This paper contributes to the development of the state-feedback control design for the PMSM. Based on Lyapunov stability theory and the LMI technique, the stability conditions of the PMSM are analyzed. Finally, an example is given to illustrate the usefulness of the obtained results.
Problem Formulation and Main Results
Based on the d-q axis frame, the dynamics of the permanent magnet synchronous motor plant can be described by the differential equation (2.1) 15 , where i_d, i_q, and w are state variables, denoting the d- and q-axis stator currents and the motor angular speed, respectively. T_L, u_d, and u_q are the external load torque and the direct- and quadrature-axis stator voltage components, respectively. J is the polar moment of inertia, β is the viscous damping coefficient, R_1 is the stator winding resistance, and L_d and L_q are the direct- and quadrature-axis stator inductances, respectively. ψ_r is the permanent magnet flux, and n_p is the number of pole pairs. By applying the affine transformation x̃ = λx, with k = β/(n_p τ ψ_r), τ = L_q/R, and λ = diag(λ_d, λ_q, λ_w) = diag(bk, k, 1/τ), system (2.1) can be transformed into system (2.2), where ũ_d = u_d/(kR), γ = −ψ_r/(k L_q), ũ_q = u_q/(kR), σ = βτ/J, and T̃_L = τ²T_L/J.
In the system (2.2), the external inputs are set to zero, namely, T̃_L = ũ_d = ũ_q = 0. Then, the system (2.2) becomes system (2.3): dĩ_d/dt = −ĩ_d + ĩ_q w̃, dĩ_q/dt = −ĩ_q − ĩ_d w̃ + γ w̃, dw̃/dt = σ(ĩ_q − w̃). To investigate the control design, let the system's state vector be x(t) = [x_1 x_2 x_3]^T and the control input vector be u(t). Then, the state equations of the PMSM can be represented as system (2.4). The continuous fuzzy system was proposed to represent a nonlinear system 17 . The system dynamics can be captured by a set of fuzzy rules which characterize local correlations in the state space. Each local dynamic described by a fuzzy IF-THEN rule has the property of a linear input-output relation. Based on the T-S fuzzy model concept, the nonlinear PMSM system can be expressed as follows.
Model rule i: IF z_1(t) is M_i1 and · · · and z_r(t) is M_ir, THEN ẋ(t) = A_i x(t) + B u(t), where z_1(t), z_2(t), . . . , z_r(t) are known premise variables, M_ij, i ∈ {1, 2, . . . , m}, j ∈ {1, 2, . . . , r}, are the fuzzy sets, and m is the number of model rules; x(t) is the state vector and u(t) is the input vector. The matrices A_i and B are known constant matrices with appropriate dimensions. Given a pair (x(t), u(t)), the final output of the fuzzy system is inferred as ẋ(t) = Σ_{i=1}^{m} η_i(z(t)) [A_i x(t) + B u(t)], which is system (2.6). In this paper, we assume that w_i(z(t)) ≥ 0, i ∈ {1, 2, . . . , m}, and Σ_{i=1}^{m} w_i(z(t)) > 0. Therefore, we have η_i(z(t)) ≥ 0, i ∈ {1, 2, . . . , m}, and Σ_{i=1}^{m} η_i(z(t)) = 1, for all t ≥ 0. To derive the main results, we first introduce the cost function of system (2.4) as J = ∫_0^∞ [x^T(t) S_1 x(t) + u^T(t) S_2 u(t)] dt (2.8), where S_1 and S_2 are two given positive definite symmetric matrices with appropriate dimensions. Associated with cost function (2.8), the fuzzy guaranteed cost control is defined as follows.
Definition 2.1 (see 21 ). Consider the T-S fuzzy PMSM system (2.6); if there exist a control law u(t) and a positive scalar J^* such that the closed-loop system is stable and the value of cost function (2.8) satisfies J ≤ J^*, then J^* is said to be a guaranteed cost and u(t) is said to be a guaranteed cost control law for the T-S fuzzy PMSM system (2.6). This paper aims at designing a guaranteed cost control law for the asymptotic stabilization of the T-S fuzzy PMSM system (2.6). To achieve this control goal, we utilize the concept of the PDC scheme 17 and select the fuzzy guaranteed cost controller via state feedback as follows.
Control rule j: IF z_1(t) is M_j1 and · · · and z_r(t) is M_jr, THEN the local state feedback with gain K_j is applied, where K_j, j ∈ {1, 2, . . . , m}, are the state feedback gains. Hence, the overall state feedback control law is represented by (2.10). Before proposing the main theorem for determining the feedback gains K_j, j = 1, 2, . . . , m, a lemma is introduced. Here V(x(t)) is a legitimate Lyapunov functional candidate, and P is a positive definite symmetric matrix. By the system (2.6) with Σ_{i=1}^{m} η_i(z(t)) = 1, the time derivative of V(x(t)) along the trajectories of system (2.6), with (2.8) and (2.10), satisfies
Lemma 2.2 (see 22 ; Schur complement). For a given matrix
(2.14) In order to guarantee V̇(x(t)) − Σ_{i=1}^{m} Σ_{j=1}^{m} η_i(z(t)) η_j(z(t)) x^T(t) [S_1 + K_j^T S_2 K_j] x(t) < 0, we need to satisfy Φ_ij < 0. By Lemma 2.2, premultiplying and postmultiplying the Φ_ij in (2.14) by P^−1 > 0, Φ_ij < 0 is equivalent to Φ_ij < 0 in (2.11); then we can obtain the following:
2.15
From the inequality (2.15), V̇(x(t)) < 0, we conclude that system (2.6) with (2.8) is asymptotically stable. Integrating (2.12) from 0 to ∞, and since the system (2.6) with (2.8) is asymptotically stable, we obtain the following result: J = ∫_0^∞ [x^T(s)·S_1·x(s) + u^T(s)·S_2·u(s)] ds ≤ x^T(0) P x(0) = V(x(0)) = J^*. This completes the proof.
Numerical Simulation and Analysis
In this section, a numerical example is presented to demonstrate and verify the performance of the proposed results. Consider a PMSM as given in (2.1) with the following parameters 23 : L_d = L_q = L = 14.25 mH, R_1 = 0.9 Ω, ψ_r = 0.031 Nm/A, n_p = 1, J = 4.7 × 10^−5 kg m², β = 0.0162 N/rad, γ = 20, and σ = 5.46.
From the simulation result, we find that x_3(t) is bounded in the interval [−12, 12]. By solving the equation, M_1 and M_2 are obtained as in (3.1). M_1 and M_2 can be interpreted as membership functions of fuzzy sets. Using these fuzzy sets, the nonlinear system can be expressed by the following T-S fuzzy models.
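Equation (3.1) itself is not reproduced in this extraction. For a premise variable z = x_3(t) bounded in [−12, 12], the usual sector-nonlinearity construction gives membership functions of the form sketched below; this is the standard T-S recipe and is offered only as an assumption about what (3.1) looks like.

```python
# Standard sector-nonlinearity membership functions for a premise variable
# z = x3(t) bounded in [-d, d].  This is the usual T-S construction; the paper's
# own Equation (3.1) is not reproduced here, so treat this as an assumption.
import numpy as np

d = 12.0                                   # bound on x3(t) from the simulation

def M1(z):
    return (z + d) / (2.0 * d)

def M2(z):
    return (d - z) / (2.0 * d)

z = np.linspace(-d, d, 5)
# Defining properties of the construction: M1 + M2 = 1 and z = d*M1 - d*M2.
assert np.allclose(M1(z) + M2(z), 1.0)
assert np.allclose(d * M1(z) - d * M2(z), z)
print(np.c_[z, M1(z), M2(z)])
```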
By the theorem, the stabilizing fuzzy control gains are given by K_1 = K_2 = [3.968 19.902 77.990].
Consequently, the minimal guaranteed cost is J^* = 5.443 × 10^−11. The simulation was done with a fourth-order Runge-Kutta integration algorithm in MATLAB 7 with a step size of 0.0001. The simulation results with initial condition x(0) = [13.5 −5 −5]^T are shown in Figures 1-2. The chaotic attractor of the PMSM system is given in Figure 1. The frequency power spectrum of the PMSM system variables is illustrated in Figure 2. The state response trajectories under the controller design are shown in Figure 3. Figure 4 depicts the time response of the control input u(t). When t = 20 sec, it is obvious that the feedback control gains can guarantee the stability of the PMSM system. From the simulation results, it is shown that the proposed controller works well to guarantee stability.
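A minimal sketch of the open-loop part of this simulation is given below: fixed-step fourth-order Runge-Kutta integration of the uncontrolled system (2.3) with γ = 20, σ = 5.46, the stated initial condition and step size. The controlled runs would additionally require the membership functions and the feedback gains K_1, K_2; only the free-running chaotic dynamics are reproduced here.

```python
# Fixed-step RK4 integration of the uncontrolled PMSM system (2.3):
#   dx1/dt = -x1 + x2*x3,  dx2/dt = -x2 - x1*x3 + gamma*x3,  dx3/dt = sigma*(x2 - x3)
import numpy as np

gamma, sigma = 20.0, 5.46
h, t_end = 1e-4, 20.0
x = np.array([13.5, -5.0, -5.0])            # initial condition from the simulation section

def f(x):
    x1, x2, x3 = x
    return np.array([-x1 + x2 * x3,
                     -x2 - x1 * x3 + gamma * x3,
                     sigma * (x2 - x3)])

steps = int(t_end / h)
traj = np.empty((steps + 1, 3))
traj[0] = x
for k in range(steps):                      # classical 4th-order Runge-Kutta
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    traj[k + 1] = x

print("range of x3 over the run:", traj[:, 2].min(), traj[:, 2].max())
```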
Conclusion
We have presented solutions to the guaranteed cost control of chaos problem via Takagi-Sugeno fuzzy control for the PMSM system. Based on Lyapunov stability theory and the LMI technique, the guaranteed cost control gains can be easily obtained through a convex optimization problem. Finally, a numerical example shows the validity and superiority of the developed result. Future work will extend the proposed method to systems subject to noise, disturbances, and uncertainties, and to robustness against time-varying delays. Future experimental applications will also be included. | 2,392.6 | 2012-08-14T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Data processing and visualization tool for atomic and molecular data for collisional radiative models
The Monte Carlo kinetic code EIRENE (Sect. 2) is used for simulating the behavior of neutral species in the edge of tokamak plasma and coupled with fluid plasma codes for a self-consistent description to be compared with measured experimental conditions. The data and visualization toolbox HYDKIN (Sect. 3) has been developed as a pre-processing tool for validation of those atomic and molecular (A &M) data used in EIRENE simulations, such as cross sections and reaction rates for plasma-neutral and neutral-neutral collisions. The restructuring that is being performed to increase HYDKIN readability and usability is here presented (Sect. 4).
Introduction
The confinement and the sustainability of burning plasma within tokamaks are key aspects of the research on nuclear fusion. In order to reduce the impact of leaks on plasma-facing components, plasma losses are driven toward some targets while being cooled down through the interaction with atomic and molecular species in the so-called divertor region [1]. In particular, seeding the plasma with impurities (nitrogen, neon) and hydrogenic gas puffing reduce the particle and heat loads at the target plates, reaching the so-called detached phase, at which the plasma temperature in the divertor drops below a few eV and a key role is played by a chain of reactions involving molecules (molecular assisted dissociation/recombination/ionization, MAD/MAR/MAI) [2]. EIRENE [3,4] (named after the Greek goddess of peace) is a neutral transport code developed for solving the kinetic equations of neutral particles at the boundary of a tokamak plasma in order to predict the plasma conditions and the heat flux in the divertor for realistic configurations. EIRENE uses a Monte Carlo model for tracking several particle histories and statistically derives the solution of the transport equations. A particle history is made up of track-length estimator and collision events, where by "collision" is meant any reaction among particle species parametrized by the corresponding reaction rates. In this respect, EIRENE must be integrated with atomic and molecular databases. Some particle databases have been intentionally realized in view of the integration with EIRENE and are currently being updated. The first one was AMJUEL [5], collecting data (interaction potentials, cross sections, rates) from different sources represented as the polynomial fits defined in Ref. [6]. HYDHEL [7] contains the same kind of data but corresponding to different processes, all of them derived from Ref. [6], while H2VIBR [8] mostly provides data for those reactions with vibrationally resolved molecules. Other EIRENE databases provide data for the breakup of non-hydrogenic molecules (for instance METHANE for hydrocarbons). They have not been considered for restructuring, and we will not discuss them here.
These databases are available at the EIRENE web page [4] and through the online database and data analysis toolbox HYDKIN [9]. HYDKIN was originally just a tool for studying hydrocarbon molecular decay, and it has grown into a comprehensive toolbox for plotting and manipulation of data with a user-friendly graphical interface. It also contains a 1D particle solver that can be useful for beam-like scenarios or in the presence of symmetries that can reduce the dimensionality of the problem (toroidal and poloidal symmetry, or assuming short penetration from an infinite flat surface). Spectral resolution and online sensitivity analysis are possible. Generically, HYDKIN can be used to get some insight into the A&M side of the problem and understand the crucial physical parameters and the most promising measurements for experiments.
The manuscript is organized as follows. In Sect. 2, a general introduction to tokamak edge plasma physics is given and the relevance of EIRENE simulations is outlined. Section 3 provides a description of the HYDKIN toolkit, whose restructuring is presented in detail in Sect. 4 with an emphasis on the default cases for molecular decay described in Sect. 4.1. Brief concluding remarks follow in Sect. 5.
Tokamak plasma: edge simulations and detachment
The magnetic field configuration of a tokamak reactor is designed in order to provide energy and particle confinement in the core region where nuclear reactions occur. However, because of magnetic drifts and turbulent transport some leaks are unavoidable. A technical solution to reduce the energy and particle fluxes to the main chamber walls is to drive particles toward some specific tungsten target plates, usually placed in the lower part of the tokamak poloidal section (see Fig. 1) called the divertor.
In the divertor, some additional particles are present besides the bulk ions and electrons coming from the core: they are recombined atoms, molecules injected via external systems, and impurities due to the interaction between plasma components and material walls. (Fig. 1: a poloidal section of the planned EU-DEMO tokamak; plasma particle losses from the core region (blue) are driven toward the divertor (green) by the magnetic configuration and hit the target plates (red). EIRENE simulations cover the triangulated region between the core and the wall. The triangulation gets finer and finer approaching the so-called separatrix, the border between the core region and the external region where particles are driven to the target, and becomes indistinguishable from it.) These additional particles are generically denoted as neutrals, and they are in a collisional regime in which the fluid approximation fails (relatively high Knudsen number, and the energetic tail of the ion and electron distributions plays a significant role) [10,11]. For this reason, the three-dimensional kinetic code EIRENE [3] has been developed and used to interpret the results of experiments. EIRENE provides a statistical solution to the characteristics of the kinetic equation via a Monte Carlo approach in both stationary and non-stationary cases. It is generically coupled iteratively to a plasma fluid code, for instance B2.5 in SOLPS-ITER [12]: EIRENE provides the particle, energy and momentum sources due to neutrals, which are then inserted in the next iteration of the plasma code, providing a new background for the next EIRENE iteration, and so on up to convergence.
EIRENE takes as input the list of all neutral particle species and the transition data among them, usually in the form of Maxwellian-averaged rates. Essentially EIRENE determines the interplay between particle transport and local chemical equilibrium, that can be studied by solving the corresponding collisional radiative model (CRM). In this respect, the analysis of CRMs is a necessary step to understand the impact of transport.
Several studies of different divertor conditions have been performed in recent decades, including EIRENE modeling to get quantitative predictions (see for instance the review [13]). The best operational regime is the so-called detached phase, which provides a substantial reduction of ion energy and particle fluxes to the target plates [1]. Atomic and molecular reactions are beneficial for detachment. In fact, the scattering of plasma components by atoms and molecules reduces the particle and momentum fluxes to the targets. Moreover, the energy costs of the ionization and/or dissociation reactions due to the impact with bulk ions and electrons reduce the energy content of the plasma and thus the energy fluxes to the targets.
Moreover, atomic and molecular emissions provide spectroscopic signals that can reveal the divertor condition and provide detachment control. For instance, the plasma background temperature and density, together with the atomic and molecular concentrations, have been determined for the Swiss research fusion reactor "variable configuration tokamak" (TCV) [16] as follows:
• plasma density is derived from the Stark broadening of atomic Balmer lines,
• plasma temperature is derived from the ratio of different Balmer lines,
• the density of the donor species, among which there are atoms and molecules, is computed from the absolute line intensities.
This procedure has been applied to atomic lines, and the idea is to extend it to the molecular emission lines, which have a more complex band structure [14]. The complex band structure, in particular for H_2, is due largely to the high degree of mixing between excited electronic states in this molecule, leading to highly perturbed rovibrational levels whose energies and corresponding transition intensities have historically been difficult to assign, predict and measure accurately. A crucial aspect of this procedure is thus the availability of reliable physical modeling of particle species and databases for transition data.
As we will discuss in the next section, HYDKIN is a toolkit that can be used for checking how reliable the A&M data are by themselves (looking for internal consistency) and with respect to the experiments on fusion reactors modeled through EIRENE simulations. For instance, in real-time control of a burning plasma one needs prominent spectroscopic features to characterize the degree of detachment by a single and reliable line-of-sight integrated measurement. HYDKIN can be used for that. The data analysis toolbox helps the user to define a collisional radiative model, i.e., a representative list of particle/photon states and reactions/transitions for an experimental plasma configuration. The solver provides the densities of particle species in stationary and non-stationary configurations. The visualization tools can give insights into the results and facilitate the identification of the prominent parameters. As an application, a representative scenario for the next-generation tokamak EU-DEMO [18,19] is discussed and the associated CRM is analyzed and solved within HYDKIN in Sect. 4.
HYDKIN
HYDKIN [9] is a toolbox for plotting and manipulating atomic and molecular data with a user-friendly graphical interface. The data currently available are taken from existing databases. Some of them have been intentionally realized for EIRENE: AMJUEL [5], HYDHEL [7], H2VIBR [8]. Each datum corresponds to a chemical reaction (ionization, excitation, dissociation, etc.) in which the reactants are a (usually charged) projectile and a (usually neutral) target. The databases have a common structure in terms of the following chapters:
• H.0, containing interaction potentials;
• H.1, containing the cross sections σ vs energy in the laboratory frame (comoving with the neutral particle species);
• H.2, containing rate coefficients ⟨σv⟩ vs temperature, with vanishing neutral particle energy and a non-drifting Maxwellian distribution for the charged species.
Other databases (e.g., [17]) have a different structure; some of them are not interoperable with EIRENE, and they are out of the scope of the present paper since they were not subject to restructuring.
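The hydrogenic databases store these single-parameter data as polynomial fits (cf. the fits of Ref. [6] mentioned in the Introduction); a common convention is a polynomial in ln T for the logarithm of the rate coefficient. The sketch below evaluates such a fit with placeholder coefficients, which must be replaced by the coefficients read from the actual database entry.

```python
# Evaluating a single-parameter rate-coefficient fit of the assumed form
#   ln<sigma*v> = sum_i b_i * (ln T)**i
# for an H.2-type chapter; the coefficients below are placeholders only.
import numpy as np

b = np.array([-3.25e1, 1.15e1, -3.9e0, 7.0e-1, -7.0e-2, 3.0e-3, 0.0, 0.0, 0.0])  # placeholder

def rate_coefficient(T_eV):
    """Return <sigma*v> (cm^3/s) for a temperature in eV, using the placeholder fit."""
    lnT = np.log(T_eV)
    ln_rate = sum(bi * lnT**i for i, bi in enumerate(b))
    return np.exp(ln_rate)

for T in (1.0, 3.0, 10.0, 30.0):
    print(f"T = {T:5.1f} eV  ->  <sigma*v> ~ {rate_coefficient(T):.3e} cm^3/s")
```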
HYDKIN provides not only interfaces for plotting cross sections and reaction rates, but also a one-dimensional particle solver that can be used to deduce the concentrations of particles, given a set of chosen reactions, by solving the associated CRM.
The HYDKIN solver provides an exact analytic solution for the so-called master equation governing the particle concentrations y_i, i = 1, ..., n, namely dy/dt = A·y + b (Equation 1), where the species are sorted such that the matrix A containing all the reaction rates and loss terms is upper-triangular, while b denotes the external source rates. The solution is based on an expansion on the eigenvectors e_i of A, with vanishing eigenvalues λ_i for i > m, i.e., λ_{m+1} = λ_{m+2} = ... = λ_n = 0, corresponding to the final dissociation products, and it is the sum of the homogeneous solution y_h and a particular inhomogeneous one y_p. The coefficients c_i and c̃_i are determined by requiring y_p to be a solution of the master equation (1), while the d_i are derived from the initial condition (see Sect. 2 in Ref. [9] for further details). The solver allows us to include external sources, reservoirs (an infinite reserve of particles at finite density and temperature, useful for modeling background species) and also to distinguish between P and Q species. Furthermore, one can also add a finite velocity for each species along one direction, or perform a spectral analysis outlining the relevant time scales and the development of the particle concentrations up to equilibrium, if any. The solver graphical interface is designed to easily allow the user to change parameters from one run to the other and thus to perform sensitivity analysis.
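To make the structure of the solver concrete, the sketch below solves the master equation ẏ = A·y + b for a small upper-triangular rate matrix, building the analytic solution from the particular part plus an eigenvector expansion and cross-checking it against direct numerical integration; the 3-species matrix is an arbitrary illustration, not a HYDKIN reaction set.

```python
# Sketch of the master equation  dy/dt = A @ y + b  for a small, upper-triangular
# rate matrix A (an arbitrary 3-species illustration, not a HYDKIN reaction set).
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-0.1, 0.5, 0.5],      # slowly lost product, fed by the two species above it
              [ 0.0, -0.5, 1.5],
              [ 0.0,  0.0, -2.0]])   # fast-decaying parent species
b  = np.array([0.0, 0.0, 1.0])       # external source feeding the parent species
y0 = np.array([0.0, 0.0, 1.0])

# Analytic solution: particular part y_p = -A^{-1} b plus an eigenvector expansion.
lam, E = np.linalg.eig(A)
y_p = -np.linalg.solve(A, b)
d = np.linalg.solve(E, y0 - y_p)      # expansion coefficients of the homogeneous part

def y_analytic(t):
    return y_p + E @ (d * np.exp(lam * t))

# Cross-check against direct numerical integration of dy/dt = A y + b.
t = np.linspace(0.0, 10.0, 50)
num = solve_ivp(lambda tt, yy: A @ yy + b, (0.0, 10.0), y0, t_eval=t).y
err = max(np.max(np.abs(y_analytic(tt) - num[:, i])) for i, tt in enumerate(t))
print("max |analytic - numerical| =", err)
```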
HYDKIN can be used (see Fig. 2):
• to import/export data (json, tabular and other formats),
• to produce input data for EIRENE,
• to check data for consistency, abnormal features, etc.,
• to check and improve the results of the simulation,
• to understand the A&M side of the problem and identify the most significant processes (among the selected ones).
For instance, in Fig. 3 it is shown how HYDKIN can be used to compare the recombination reaction rates for different models and/or assumptions.
HYDKIN restructuring
A major restructuring of the HYDKIN toolbox is being performed in order to increase accessibility and readability of the available databases and as a first step toward the integration with other sources. The new version is available at the HYDKIN web page http://www.eirene.de/hydkin/ by clicking on "version 2022" and at present it is restricted to those databases containing data for hydrogenic species, namely AMJUEL, HYDHEL and H2VIBR. The access is restricted to registered EIRENE users. Registration can be done at the website http://www.eirene.de/cgi-bin/eirene/registration.cgi, and the access to the resources, including HYDKIN, is regulated by the EIRENE license for non-commercial and non-military use available at www.eirene.de/Licence/EPL.pdf.
The restructuring activity has been focused on three main aspects: updating the graphical interface for choosing the reactions for the plotter and the solver, providing manageable output formats suitable for interaction with other codes, making available some default cases for testing. The new interface is shown in Fig. 4.
Concerning the update of the graphical interface, the reactions can now be chosen from a main table in which the reactions coming from different sources (AMJUEL, HYDHEL or H2VIBR) are grouped together and shown as consecutive rows. Furthermore, some columns specifying additional reaction features are now present. The list of the available feature columns is:
• number: a label for the physical process in the form of three integer digits split by dots, for instance 2.1.5 for hydrogen ionization by electron impact. This classification is directly derived from that of the AMJUEL and HYDHEL databases, while H2VIBR data are labeled according to the group they belong to.
• reaction: the formula of the reaction, which need not be unique for each process, as data may refer to different models and/or different particle interactions from which the data were taken.
• data type: the type of data, whether they are "experimental," "calculated" from a theory, or "mixed," meaning a combination of experimental data and a theoretical model.
• peculiar properties: a general comment about the data, for instance the kind of model from which they are taken, or some specific properties or features.
• generation: a label to keep track of the insertion of new data: 0 for all reactions at present; it will increase in future developments, correlated with subsequent HYDKIN releases.
• data origin: a specification of how the data have been obtained, for instance whether it is the original fit present in the reference, an improved fit, or, for experimental data, the fit of the original data.
There are three additional columns that do not refer to data features but allow the user to select the reactions to be sent to the plotter and to the solver:
• the first column selects those processes not to be included in the solver,
• the second column chooses the reactions to include in the solver; for each process only one of them can be chosen (radio button), so avoiding duplicate reactions from different sources (for instance data coming from different databases, computed from different theoretical models, or from different experiments),
• the third column selects the reactions for the plotter.
After having chosen the reactions, at the bottom of the page one can find the buttons to access the plotter and the solver and some inputs to fix additional parameters (beam energy, temperature, density, etc.).
The second aspect of the HYDKIN restructuring deals with providing a manageable output format for the interaction with EIRENE and with other codes. In particular, the solver now provides an output json file that contains all the specified background parameters (density and temperature), the list of particle species included in the solver and the obtained concentrations, and the list of the chosen reactions with all the features shown in the graphical interface, the corresponding rates and the respective temperature and energy/density parameters where available (see Fig. 5).
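A post-processing sketch for such a json output is shown below. The key names ("species", "concentrations", "reactions", ...) and the file name are hypothetical stand-ins for the actual schema, which should be checked against a real HYDKIN output file.

```python
# Sketch of post-processing the HYDKIN solver's json output.  The key names and
# the file name below are hypothetical stand-ins for the actual output schema.
import json

with open("hydkin_run.json") as fh:        # hypothetical file name
    run = json.load(fh)

species = run.get("species", [])
concentrations = run.get("concentrations", {})

for name in species:
    print(f"{name:>8s}: {concentrations.get(name, 'n/a')}")

for reaction in run.get("reactions", []):
    print(reaction.get("number"), reaction.get("reaction"), reaction.get("data type"))
```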
The json output represents the first step toward establishing a common manageable format for plasma codes and A&M databases. This file can be used for storing the results of a solver run and also for reloading the same input parameters through the button "Start with own configuration" on the main page (see Figure 6). (Fig. 6: the buttons available at the top-right of the main HYDKIN webpage: "Start with own configuration" for loading from the json output file of a previous run; "Unresolved," "Vibrationally resolved 7" and "Vibrationally resolved 14" for loading the corresponding default case.) While for now the main interest is interoperability with EIRENE, whose input files have also been rewritten in json format, the next step is to adhere to some standard format also with respect to other A&M databases (as for instance PyValem). In this respect, the json format is also quite useful, because reading and manipulation packages are available in most programming languages.
Moreover, a manageable external plotting tool capable of reading json output is also planned such that clear readable plots can be generated even if many data are selected.
Default cases
The third aspect of the HYDKIN restructuring is the introduction of some default cases that can be useful as benchmarks for molecular decay and for testing purposes. The default models are a set of pre-selected options in the GUI described in Fig. 4. Three decay models for H_2 molecules with a parametric source term are now present: the vibrationally unresolved one and two vibrationally resolved ones, with seven and fourteen internal molecular states. They can be accessed through the buttons at the top-right of the main page shown in Fig. 6. The list of reactions is shown in Table 1 and it contains four ionization/dissociation reactions for H_2, one by proton impact (charge exchange) and three by electron impact, and three dissociation reactions for the molecular ion H_2^+. All three default models are built by selecting the reactions detailed in Table 1, including ladder-like vibrational transitions by electron impact in the two vibrationally resolved cases only. The list of reactions is taken from EIRENE simulations for the JET [22] and EU-DEMO tokamaks [23].
The HYDKIN plotter can be useful to identify how the relevant reaction channels vary with temperature. By inspecting the dissociation reactions for H_2 and H_2^+ in Figs. 7 and 8, one can see how at low plasma temperature (T < 3 eV) heavy-particle collisions are the main dissociation channel, providing molecular ions (red curve in Fig. 7) which experience mainly neutral dissociation and, secondarily, dissociative recombination (yellow and green curves in Fig. 8, respectively), whose branching ratio increases with decreasing temperature. The resulting reaction chains are molecular assisted dissociation and recombination (MAD and MAR), the former dominating but becoming less relevant at very low temperature. This example outlines the utility of the HYDKIN plotter for analyzing the relevant physical processes in CRMs.
This scenario is confirmed by solving the model and deriving the concentrations of particles. This was originally done at equilibrium through the flexible Yacora solver [24] in Ref. [25], which has been used here for validation. The HYDKIN output shown in Fig. 9 provides the time evolution up to equilibrium, and it is useful to determine the relevant time scales (spectral analysis).
Including vibrationally resolved molecular states and tracking them as separate species is computationally demanding, but it is important for several reasons: (i) the dissociation rate is overestimated in the unresolved case, as shown in Ref. [26]; (ii) emission bands for different electronic transitions overlap and it is experimentally hard to separate them, so that one typically obtains measurements of single lines coming from vibrationally (and also rotationally) resolved states (see for instance [14]); (iii) the vibrational distribution is generally not a Boltzmann distribution, and its shape should be determined by solving CRMs or from EIRENE simulations.
The HYDKIN plotter can give a hint about point (i). In Fig. 10 the rate for the charge-exchange reaction in the unresolved case (blue) is compared with the vibrationally resolved ones, and it is generally larger than the rates for the ground state (orange) and the first excited states (green and red). Combining the vibrationally resolved rates with the concentrations obtained by solving the CRM, one obtains a lower overall dissociation rate with vibrational resolution.
Conclusions
Reliable models and interaction data for atomic and molecular species are crucial for simulating and understanding the behavior of particles in the tokamak divertor region, especially in view of achieving and controlling detached conditions. HYDKIN has been developed as a comprehensive tool for testing databases specifically realized for EIRENE, in order to provide proper input data to simulations. A major HYDKIN restructuring has started with the aim of facilitating the interaction with other codes/databases and the comparison among different sources. Some models of H2 molecular decay have been added to HYDKIN as default cases, and they guide the user through the novel features.
The restructuring is still in progress along the following guidelines:
• including a bundle structure for internal-state resolution,
• including additional CRMs and further data sources, in particular for vibrationally resolved molecules and for isotopes/isotopologues (in order to avoid the isotope scaling usually adopted in EIRENE simulations), such as the Molecular Convergent Close-Coupling database [27],
• increasing readability and usability, correcting server errors and bugs, and providing a novel reaction classification linked to a search functionality,
• linking to EIRENE input files.
On the physics side, the main goal is to identify spectroscopic signals characterizing detached divertor conditions for forthcoming tokamak experiments (EU-DEMO) by systematically testing different data sources and models (reaction lists, bundled states, etc.), by introducing a compact and effective description of vibrational and rotational excitations, and through the comparison between EIRENE simulations and experiments. Molecular spectroscopy requires reliable models and databases to analyze correctly the local chemical equilibrium and to determine emissivities. In this respect, HYDKIN provides very useful tools for quality checks on data and for the comparison between different sources. | 5,136.6 | 2023-07-01T00:00:00.000 | [
"Physics"
] |
Application of Design of Experiment Method for Thrust Force Minimization in Step-feed Micro Drilling.
Micro drilled holes are utilized in many of today's fabrication processes. Precision production processes in industries are trending toward the use of smaller holes with higher aspect ratios, and higher speed operation for micro deep hole drilling. However, undesirable characteristics related to micro drilling such as small signal-to-noise ratios, wandering drill motion, high aspect ratio, and excessive cutting forces can be observed when cutting depth increases. In this study, the authors attempt to minimize the thrust forces in the step-feed micro drilling process by application of the DOE (Design of Experiment) method. Taking into account the drilling thrust, three cutting parameters, feedrate, step-feed, and cutting speed, are optimized based on the DOE method. For experimental studies, an orthogonal array L27(3^13) is generated and ANOVA (Analysis of Variance) is carried out. Based on the results it is determined that the sequence of factors affecting drilling thrusts corresponds to feedrate, step-feed, and spindle rpm. A combination of optimal drilling conditions is also identified. In particular, it is found in this study that the feedrate is the most important factor for micro drilling thrust minimization.
Introduction
Recently, with increasing demand for precise micro component production, the importance of micro hole drilling processes is increasing in fields such as medical instruments, aerospace engineering, and the computer industry [1,2]. Micro deep hole drilling technologies are required to achieve higher accuracy and higher productivity, because deeper and smaller holes are required for specific applications in the aforementioned industries. For these applications, thermal methods (e.g., electron beam, laser, electric discharging) and chemical methods (e.g., electrolytic polishing, electrochemical machining) are usually applied. However, in general applications, the mechanical drilling process is preferred over other processes for producing micro deep holes due to its higher economic efficiency and productivity. Generally, mechanical micro deep hole drilling processes become increasingly difficult as the aspect ratio of the drill increases. This is caused by insufficient chip and heat discharging mechanisms, as well as problems related to ineffective lubrication arising from failure to adequately supply coolant. Due to such problems, the machining quality of drilled micro holes can be significantly deteriorated. Also, micro drills can easily be fractured by small impact, bending, and torsion, because they are very slender and long.
To realize a more efficient micro-drilling process, a step-feed process is required instead of the one-pass drilling method. The step-feed process repeats drill feeding forward and backward with a certain number of steps, as shown in Figure 1. This provides better discharge of chips and heat, longer tool life, and more accurate drilling results. With increased step-feeding frequency, it is possible to achieve more enhanced chip and heat discharge; however, the total processing time increases consequently. Conversely, the total processing time can be reduced by decreasing the step-feeding frequency, but chip and heat discharge is degraded. Thus, it is necessary to determine the optimal drilling conditions based on reliable experimental results to improve the productivity of micro drilling processes [1,3,4,5]. In this paper, the thrust forces in 200 μm micro deep hole drilling processes are minimized. The number of steps per drilling, the feeding speed, and the spindle rpm are used as process parameters to determine the optimum drilling conditions. For this purpose, experimental works are carried out based on DOE (Design of Experiments), and the obtained experimental data are analyzed using ANOVA (Analysis of Variance) [6,7].
Application of Design of Experiment
DOE (Design of Experiments) provides a powerful means to achieve breakthrough improvements in product quality and process efficiency. From the viewpoint of manufacturing fields, this can reduce the number of required experiments when taking into account the numerous factors affecting experimental results. DOE can show how to carry out the fewest number of experiments while maintaining the most important information. The most important process of DOE is determining the independent variable values at which a limited number of experiments will be conducted. For this purpose, Taguchi [7] proposed an improved DOE. This approach adopts the fundamental idea of DOE, but simplifies and standardizes the factorial and fractional factorial designs so that the conducted experiments can produce more consistent results. The major contribution of the work has been in developing and using a special set of orthogonal arrays for designing experiments. Orthogonal arrays are a set of tables of numbers, each of which can be used to lay out experiments for a number of experimental situations. The DOE technique based on this approach makes use of these arrays to design experiments. Through the orthogonal arrays, it is possible to carry out fewer fractional factorial experiments than full factorial experiments. Also, the relative influence of factors and interactions on the variation of results can be identified. Through fractional experiments, optimal conditions can be determined by analyzing the S/N ratio (Signal-to-Noise ratio) as a performance measure, often referred to as ANOVA (Analysis of Variance). The details of this approach are presented in the following subsections [7].
Orthogonal Arrays
When optimizing process conditions to obtain higher quality products, it is necessary to carry out several steps. First, factors or conditions have to be selected which predominantly affect the process results. These selected factors are divided into several levels, and all combinations are usually taken into account. In this case, the number of all possible combinations corresponds to the number of needed experiments. Here, orthogonal arrays make it possible to carry out fractional factorial experiments in order to avoid numerous experimental works as well as to provide shortcuts for optimizing factors. The orthogonal arrays are determined by the number of factors and levels considered in the process. They are usually described in the form L_A(B^C), where A denotes the number of fractional experiments, B is the number of levels, and C is the number of factors. The number 2 or 3 is usually selected for the levels.
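To make the notation concrete, the sketch below lists the standard L9(3^4) Taguchi array (a smaller relative of the L27(3^13) array used in this paper) and verifies its defining property: every pair of columns contains each level combination exactly once, so nine runs replace the 81 runs of a full factorial design.

```python
# Illustrative sketch: the standard L9(3^4) orthogonal array and a check of its
# pairwise balance.  This is a generic textbook array, not the paper's L27(3^13).
from itertools import combinations, product

L9 = [
    [1, 1, 1, 1],
    [1, 2, 2, 2],
    [1, 3, 3, 3],
    [2, 1, 2, 3],
    [2, 2, 3, 1],
    [2, 3, 1, 2],
    [3, 1, 3, 2],
    [3, 2, 1, 3],
    [3, 3, 2, 1],
]

for c1, c2 in combinations(range(4), 2):
    pairs = {(row[c1], row[c2]) for row in L9}
    assert pairs == set(product((1, 2, 3), repeat=2)), "columns not orthogonal"

print("L9 is pairwise balanced: 9 runs instead of 3**4 = 81 full-factorial runs")
```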
Degree of Freedom in DOE
Degree of freedom (DOF) is a common term used in engineering and science; however, there is no obvious interpretation of DOF when applied to experimental data. Regarding statistical analysis of experimental data, DOF provides an indication of the amount of information contained in a data set. In DOE processes, DOF is applied to characterize four separate items as follows: (1) DOF of a factor = number of levels of the factor − 1; (2) DOF of a column = number of levels of the column − 1; (3) DOF of an array = total of all column DOFs for the array; (4) DOF of an experiment = total number of results of all trials − 1. DOF is the minimal number of comparisons between levels of factors or interactions needed in order to improve process characteristics. The type of orthogonal array used in DOE can be selected by the DOF. When determining factors and levels, the orthogonal array has to be selected; in this case, the DOF is taken into account as a reference for selecting a certain type of orthogonal array. Once the number of factors and levels is determined, a suitable orthogonal array can be selected by the total DOF of the experiment, because the total DOF of the factors and levels used in an orthogonal array is already fixed [7]. A brief worked example follows.
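As a worked illustration of these rules (the breakdown is ours, inferred from the three three-level factors and the two interactions considered later, not quoted from the paper): each three-level factor contributes 3 − 1 = 2 DOF, so three factors require 6 DOF; each interaction between two three-level factors requires 2 × 2 = 4 DOF, so the interactions A*B and A*C add 8 DOF. An L27 experiment provides 27 − 1 = 26 DOF in total, which is therefore sufficient for the 6 + 8 = 14 DOF of the factors and interactions, with the remaining DOF available for estimating the error.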
Analysis of Variance
ANOVA (Analysis of Variance) is a statistical technique that identifies factors significantly affecting the experimental results. ANOVA consists of (1) summing squares for the distributions of all characteristic values (experimental data); (2) computing the unbiased variance; (3) decomposing this total sum into the sums of squares for all factors used in the experiment; (4) calculating unbiased variances by dividing the sums of squares for all factors by their DOF; (5) calculating the variance ratio F0 by dividing each unbiased variance by the error variance; and (6) identifying which factors significantly affect the experimental results by analyzing the error variance. This procedure can be accomplished by constructing an ANOVA table. An example of an ANOVA is described as follows. Taking into account a factor A whose number of levels is l, with m repetitions for each level, Table 1 can be obtained (it lists the data x_ij for each level together with the sum of each level). If the total deviation of each datum x_ij from the total average x̄ is decomposed into two parts, the following equation is obtained:
x_ij − x̄ = (x̄_i − x̄) + (x_ij − x̄_i),
where x̄_i is the average of level i.
Squaring this equation and summing over all l × m data, the following equation is given:
Σ_i Σ_j (x_ij − x̄)² = m Σ_i (x̄_i − x̄)² + Σ_i Σ_j (x_ij − x̄_i)².
Here, the left side of equation (2) is the total sum of squares S_T; the first term of the right side is the SSB (Sum of Squares Between), S_A, corresponding to variations due to differences between the level effects, i.e., to the variance of the factor; and the second term of the right side is the SSW (Sum of Squares Within), S_E, corresponding to the sum of squares within each level. This relationship can be expressed by S_T = S_A + S_E, and simple expressions of these terms are given as follows.
where V_i is the mean square (variance), S_i is the sum of squares of an arbitrary factor, and φ_i is its DOF. The variance and the error variance can then be described as V_i = S_i / φ_i and V_E = S_E / φ_E, respectively.
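The short sketch below carries out this one-factor decomposition numerically, using made-up thrust values (l = 3 levels, m = 3 repetitions) purely to show that S_T = S_A + S_E and how the variance ratio F0 is formed.

```python
# Sketch of the one-factor ANOVA decomposition described above (S_T = S_A + S_E).
# The thrust values are placeholders, not measurements from the paper.
import numpy as np

data = np.array([[3.1, 3.3, 3.2],    # level 1
                 [3.8, 3.6, 3.9],    # level 2
                 [4.4, 4.5, 4.2]])   # level 3
l, m = data.shape

grand_mean = data.mean()
level_means = data.mean(axis=1)

S_T = ((data - grand_mean) ** 2).sum()                 # total sum of squares
S_A = m * ((level_means - grand_mean) ** 2).sum()      # between-level (factor) sum
S_E = ((data - level_means[:, None]) ** 2).sum()       # within-level (error) sum

phi_A, phi_E = l - 1, l * (m - 1)
V_A, V_E = S_A / phi_A, S_E / phi_E                    # mean squares
F0 = V_A / V_E                                         # variance ratio

print(f"S_T={S_T:.3f}  S_A+S_E={S_A + S_E:.3f}  F0={F0:.2f}")
```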
Micro Drilling System
In the experimental studies, a micro drilling system employing a high-speed air spindle is used. The drilling process is divided into a certain number of steps, and the drill is fed into the workpiece and retracted repetitively. This makes it possible to avoid micro drill fracture problems and to provide enhanced chip and heat discharge. Figure 2 illustrates the configuration of the experimental system used in this study, and the specifications of the micro drilling system are provided in Table 2. In order to monitor the states of the workpieces and the drilling processes in real time, as well as to measure the drilling thrusts, a microscope with a built-in monitoring system and a cutting-force measuring instrument are installed, respectively. The specifications of these systems are given in Table 3.
Experimental Conditions
In the experiments, SM45C specimens are used as workpieces. In order to fix the workpieces and dynamometer onto the micro drilling system, a fixture system is installed. The 200μm micro drill used for the experiment is shown in Figure 3.
200μm Micro Drilling Process Optimization Based on DOE
When the step-feeding frequency is increased in order to reduce the drilling force, the micro drill can easily be broken due to work-hardening. Furthermore, by reducing the drilling feedrate, the efficiency of the drilling process is deteriorated, while increasing the drilling spindle speed leads to expansion of the size of the machined hole, because drill vibration becomes significant. In order to resolve these problems, the optimal drilling conditions are determined by using DOE for the micro drilling process with SM45C workpieces and 200 μm diameter drills. For this purpose, an L27(3^13) orthogonal array is used, and it is attempted to distinguish the predominant factors for drilling thrust. The same drilling depth is used for all holes. Part of the orthogonal array and the drilling thrusts obtained as experimental results are shown in Table 4.
For the analysis of the data acquired through DOE, the Taguchi method is applied: the required data are gathered using an orthogonal array, and the S/N ratio (Signal-to-Noise ratio) derived from these data is investigated. In this approach, the characteristics of the loss functions are usually classified into "Smaller the Better", "Larger the Better" and "Nominal the Best" characteristics. In these experiments, "Smaller the Better" characteristics are taken into account in order to determine the drilling conditions producing minimal drilling thrust. Taking into account the interactions A*B and A*C, the factors are assigned and the experiments are carried out. Table 5 shows the calculated average values of the S/N ratio responses when the levels of feedrate are 1, 2, and 3. Similarly, Table 6 and Table 7 show the A*B and A*C interaction matrices, respectively. On the basis of the experimental results, the S/N ratio responses are calculated as shown in Figure 4. Based on these responses, the average values of the thrusts are arranged according to the levels of all factors, and the sums and average values of the characteristic dispersions resulting from the level variance of the factors are given in Table 8.
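For reference, the "Smaller the Better" loss function used to rank the levels is the standard Taguchi definition S/N = −10 log10(mean(y²)); the sketch below applies it to placeholder thrust values grouped by feedrate level.

```python
# Sketch of the "Smaller the Better" signal-to-noise ratio: S/N = -10*log10(mean(y^2)).
# The thrust values below are placeholders, not the paper's measurements.
import numpy as np

def sn_smaller_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Example: thrust measurements (N) grouped by feedrate level.
thrust_by_level = {1: [2.6, 2.8, 2.7], 2: [3.6, 3.7, 3.5], 3: [5.0, 5.2, 4.9]}

for level, values in thrust_by_level.items():
    print(f"feedrate level {level}: S/N = {sn_smaller_the_better(values):.2f} dB")
# The level with the largest (least negative) S/N ratio gives the smallest thrust.
```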
From the results presented in Table 8, the sequence of factors affecting drilling thrust corresponds to feedrate, step-feed, and spindle rpm. In particular, it is noted that the feedrate is an important factor for drilling thrust. Table 9 shows the pooled analysis results.
The process of ignoring a factor that is deemed insignificant, called pooling, is done by combining the measure of influence of the factor with that of the error term. Finally, on the basis of the S/N ratio graphs and ANOVA, it can be stated that A1, B3, and C3 correspond to the factor levels producing minimal drilling thrust and that there are no interactions between A*B and A*C. Figure 9 shows the actual micro drilling process and a 200 μm hole drilled based on the applied methods.
Figure 8. Interaction graphs: (a) interaction A*B; (b) interaction A*C.
Conclusions
The objective of this study is to ascertain the factors predominantly affecting drilling thrust in micro deep hole drilling processes. For this purpose, the DOE (Design of Experiments) technique and ANOVA (Analysis of Variance) are used. Through this study, as presented in this paper, the conclusions can be summarized as follows: (1) In the 200 μm micro drilling process, the experimental works designed by an L27(3^13) orthogonal array are carried out and ANOVA is also conducted. Through these works, it is possible to recognize that the sequence of the most influential factors for drilling thrust corresponds to feedrate, step-feed, and spindle rpm. In particular, it can be concluded that the feedrate is the most influential factor for drilling thrust.
(2) Through the S/N ratio graphs and ANOVA, it can be observed that A1, B3, and C3 correspond to the factors producing minimal drilling thrust; there are no interactions between A*B and A*C. Thus, the optimal conditions are A1, B3, and C3.
In this study, only the drilling thrust is taken into account as the most significant factor in order to optimize the step-feed micro drilling processes. It is possible, however, to consider other factors such as drill life, roughness, circularity of drilled holes, drilling time, burrs, etc. The selection of these factors depends on the main objectives of the required processes. The influence of interactions among the factors will be studied and discussed in our next study. | 3,526.6 | 2008-01-01T00:00:00.000 | [
"Materials Science",
"Business"
] |
Nonlinear dynamics inside femtosecond enhancement cavities
We have investigated the effect of intracavity nonlinear dynamics arising from enhanced peak powers of femtosecond pulses inside broad-bandwidth, dispersion-controlled, high-finesse optical cavities. We find that for χ(3) nonlinearities, when a train of femtosecond pulses are maximally coupled into a cavity by active stabilization of its frequency comb to the corresponding linear resonances of a cavity, enhancement ceases when the peak nonlinear phase shift is sufficient to shift the cavity resonance frequencies by more than a cavity linewidth. In addition, we study and account for the complex spectral dynamics that result from chirping the input pulse and show excellent qualitative agreement with experimental results. © 2005 Optical Society of America
OCIS codes: (190.7110) Ultrafast nonlinear optics, (320.7110) Ultrafast nonlinear optics, (230.5750) Resonators
References and links
1. V. Petrov, D. Georgiev, and U. Stamm, "Improved Mode-Locking of a Femtosecond Titanium-Doped Sapphire Laser by Intracavity Second-Harmonic Generation," Appl. Phys. Lett. 60, 1550–1552 (1992).
2. A. Ashkin, G. D. Boyd, and J. M. Dziedzic, "Resonant optical second harmonic generation and mixing," IEEE J. Quantum Electron. QE-2, 109–123 (1966).
3. W. J. Kozlovsky, C. D. Nabors, and R. L. Byer, "2nd-Harmonic Generation of a Continuous-Wave Diode-Pumped Nd-Yag Laser Using an Externally Resonant Cavity," Opt. Lett. 12, 1014–1016 (1987).
4. C. S. Adams and A. I. Ferguson, "Frequency Doubling of a Single Frequency Ti-Al2O3 Laser Using an External Enhancement Cavity," Opt. Commun. 79, 219–223 (1990).
5. Z. Y. Ou and H. J. Kimble, "Enhanced Conversion Efficiency for Harmonic-Generation with Double-Resonance," Opt. Lett. 18, 1053–1055 (1993).
6. K. Fiedler, S. Schiller, R. Paschotta, P. Kurz, and J. Mlynek, "Highly Efficient Frequency-Doubling with a Doubly Resonant Monolithic Total-Internal-Reflection Ring-Resonator," Opt. Lett. 18, 1786–1788 (1993).
7. T. Heupel, M. Weitz, and T. W. Hansch, "Phase-coherent light pulses for atom optics and interferometry," Opt. Lett. 22, 1719–1721 (1997).
8. A. Kastler, "Atomes a l'Interieur d'un Interferometre Perot-Fabry," Appl. Opt. 1, 17–24 (1962).
9. J. Ye, L. S. Ma, and J. L. Hall, "Ultrasensitive detections in atomic and molecular physics: demonstration in molecular overtone spectroscopy," J. Opt. Soc. Am. B 15, 6–15 (1998).
10. J. Ye and T. W. Lynn, "Applications of optical cavities in modern atomic, molecular, and optical physics," Advances in Atomic, Molecular, and Optical Physics 49, 1–83 (2003).
11. R. J. Jones and J. Ye, "Femtosecond pulse amplification by coherent addition in a passive optical cavity," Opt. Lett. 27, 1848–1850 (2002).
12. J. C. Petersen and A. N. Luiten, "Short pulses in optical resonators," Opt. Express 11, 2975–2981 (2003), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-22-2975.
13. R. J. Jones and J. C. Diels, "Stabilization of femtosecond lasers for optical frequency metrology and direct optical to radio frequency synthesis," Phys. Rev. Lett. 86, 3288–3291 (2001).
14. R. J. Jones, I. Thomann, and J. Ye, "Precision stabilization of femtosecond lasers to high-finesse optical cavities," Phys. Rev. A 69, 051803(R) (2004).
15. J. Ye and S. T. Cundiff (editors), Femtosecond optical frequency comb technology: Principle, operation and application (Springer, New York, 2005).
16. A. N. Luiten and J. C. Petersen, "Ultrafast resonant polarization interferometry: Towards the first direct detection of vacuum polarization," Phys. Rev. A 70, 033801 (2004).
17. R. J. Jones and J. Ye, "High-repetition-rate coherent femtosecond pulse amplification with an external passive optical cavity," Opt. Lett. 29, 2812–2814 (2004).
18. F. Ouellette and M. Piche, "Ultrashort Pulse Reshaping with a Nonlinear Fabry-Perot Cavity Matched to a Train of Short Pulses," J. Opt. Soc. Am. B 5, 1228–1236 (1988).
19. M. J. Thorpe, R. J. Jones, K. D. Moll, and J. Ye, "Precise measurements of optical cavity dispersion and mirror coating properties via femtosecond combs," Opt. Express 13, 882–888 (2005), http://www.opticsexpress.org/abstract.cfm?URI=OPEX-13-3-882.
20. R. W. Boyd, Nonlinear Optics, 2nd ed. (Academic Press, San Diego, 2002).
21. P. Dube, L. S. Ma, J. Ye, P. Jungner, and J. L. Hall, "Thermally induced self-locking of an optical cavity by overtone absorption in acetylene gas," J. Opt. Soc. Am. B 13, 2041–2054 (1996).
Introduction
Investigations into many nonlinear optical interactions require the use of amplified femtosecond laser systems. In addition to a potential increase in amplitude noise and pulse timing jitter, a serious limitation of conventional amplifier systems is the drastic reduction in the pulse repetition rate. This not only reduces the average power of the system and prolongs the data acquisition time for many experiments, but also limits the ability to actively stabilize the laser parameters against fluctuations (i.e. noise processes, such as amplitude fluctuations, occurring faster than half the repetition time become totally stochastic). The higher powers inside a mode-locked laser cavity can be utilized, such as for second harmonic generation [1], but this has limited use due to the dynamics of the particular mode-locking process and the need to keep the intracavity intensity in a range required for stable mode-locking.
The use of passive optical cavities external to a laser has been an effective approach to enhance laser power for nonlinear optical interactions. Although commonly used with CW lasers for efficient optical frequency conversion [2–6], pulse generation [7], or increasing the interaction length for precision spectroscopy [8–10], enhancement cavities have found limited use with sub-100 fs pulses due to many technical difficulties. For instance, coupling the entire spectrum of a mode-locked pulse train to an external high-finesse cavity requires high reflectivity mirrors with sufficient dispersion control across a broad bandwidth defined by the short laser pulse [11,12]. Recently, however, with improving mirror-coating technology, femtosecond laser stabilization techniques [13,14], and a more complete understanding of the underlying "femtosecond comb" associated with a mode-locked laser [15], sub-50 fs pulses can be efficiently coupled and "stacked" inside high-finesse (> 1000) cavities, leading to significant enhancement of the stored femtosecond pulse energy. Unlike traditional active amplifiers, the laser's original high repetition rate is maintained inside the cavity, while the stable cavity additionally provides temporal and spatial-mode filtering. This provides an environment suitable for precise measurements of nonlinear light-matter interactions at laser intensities not previously achievable without active amplification [16]. For example, in a recent experiment [17], significant spectral distortion was observed when the cavity contained a fused-silica acousto-optic modulator (AOM) at an intracavity focus that was used to dump the pulse from the cavity. The strong nonlinear response of the fused silica also limited the ultimate peak pulse intensities that were achieved in the AOM. An earlier theoretical study of nonlinear Fabry-Perot cavities predicted that significant pulse shaping can occur with ultrashort pulses [18]. Considering the potential use of the femtosecond enhancement cavity for future extreme nonlinear optics studies, it is essential that we develop a complete understanding of the intracavity nonlinear dynamics and identify the key mechanisms responsible for the experimental observations. In this article, we present a detailed investigation of the effect of intracavity nonlinearities on femtosecond enhancement cavities.
Theoretical model
As a model, we have numerically investigated a cavity similar to that studied experimentally [17], as depicted in Fig. 1. To simulate the physical system, we have assumed that the cavity has a net zero group-delay dispersion (GDD) around 790 nm and a residual third-order dispersion of ∼ 250 fs^3. In addition, we have included in our model a nonlinear medium equivalent to a 3.8-mm fused-silica AOM (n2 = 3.8 × 10^-16 cm^2/W) as well as a small loss term (0.5%) which accounts for the experimentally present absorption and scattering losses from the mirrors and small residual reflections from the nonlinear medium. The cavity length is such that the round-trip group delay corresponds to a 100-MHz repetition rate.
In the time domain, we model a pulse train incident on the input coupler with a periodic repetition frequency f_rep, and the relative phase between pulses is fixed. These two variables control the two degrees of freedom of the femtosecond comb in the frequency domain (f_rep controls the comb mode spacing, and the phase between pulses determines the carrier-envelope offset frequency, f_o). These two degrees of freedom of the comb allow us to adjust the coupling efficiency of the pulse train to the cavity by aligning the femtosecond comb modes to the cavity resonances. For ultrashort pulses, though, net cavity GDD plays the dominant role in limiting the coupling efficiency. For studying cavity effects, a convenient way to represent the net cavity GDD is to compute the free spectral range (FSR) of neighboring cavity resonances as a function of wavelength. For the cavity in Fig. 1(a), the corresponding FSR is shown in Fig. 1(b), where the zero GDD of the cavity occurs at the maximum of the FSR. Because the FSR is not uniform across the relevant pulse spectrum, it can be seen that for high finesse cavities with narrow resonances, a femtosecond comb (which has a constant separation, f_rep, between comb components) will be spectrally filtered due to misalignment between the comb modes and the cavity resonances. This aspect of dispersive cavities when probed by femtosecond combs can be utilized to provide precise characterization of the cavity dispersion [19].
Since the cavity in Ref. 17 was typically under vacuum, we have assumed linear propagation except for the nonlinearities resulting from the χ^(3) response of the nonlinear material, specifically self-phase modulation (SPM) and self-steepening [20]. We numerically modeled the intracavity pulse by using a split-step method in which the nonlinear interaction (equivalent to propagating through a 3.8-mm fused silica AOM) is calculated in the time domain while the linear-dispersion effects are computed in the frequency domain, where we model a subset of the frequencies that compose the underlying femtosecond comb of the pulse train. The spatial dynamics of the simulation were neglected; in order to model the nonlinear effects, the intensity is calculated using the beam radius at the focus inside the AOM.
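To convey the basic mechanism without the full split-step pulse model, the toy, single-mode sketch below iterates the round-trip field map with a Kerr (SPM) phase added to the detuning. This is a drastically simplified stand-in for the authors' simulation; the coupler and loss values mirror the numbers quoted above, while the Kerr coefficient and input intensities are illustrative placeholders.

```python
# Toy single-mode buildup iteration (NOT the paper's full split-step pulse model):
# each round trip the circulating field acquires a linear detuning phase plus a
# Kerr (SPM) phase proportional to its intensity, which detunes the resonance.
import numpy as np

T_ic  = 0.009          # input-coupler power transmission (0.9%, from the paper)
loss  = 0.005          # additional round-trip power loss (0.5%, from the paper)
r     = np.sqrt((1.0 - T_ic) * (1.0 - loss))   # round-trip field amplitude factor
delta = 0.0            # linear detuning per round trip [rad]
kerr  = 1e-4           # nonlinear phase per unit circulating intensity (placeholder)

def steady_state_enhancement(I_in, n_round_trips=5000):
    E_in = np.sqrt(I_in)
    E = 0.0 + 0.0j
    for _ in range(n_round_trips):
        phi_nl = kerr * abs(E) ** 2                 # SPM phase shifts the resonance
        E = np.sqrt(T_ic) * E_in + r * np.exp(1j * (delta + phi_nl)) * E
    return abs(E) ** 2 / I_in

for I_in in (0.01, 0.5, 1.0, 2.0):
    print(f"I_in={I_in:g}: enhancement ~ {steady_state_enhancement(I_in):.0f}")
# At low input intensity the buildup approaches the linear-cavity value; at higher
# intensity the intensity-dependent phase pushes the resonance away and clamps it.
```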
Results
The first property that we investigated was the intensity dependence of the pulse-energy enhancement. In a cavity with a completely linear response, maximum enhancement is achieved when the pulse spectrum is centered around the zero GDD of the cavity, the laser repetition frequency equals the local FSR of the cavity, and f_o is adjusted such that the average comb frequency matches that of the corresponding cavity modes, so that successive pulses constructively interfere at the input coupler. In order to isolate the effect of SPM on cavity enhancement, we simulated a pulse train composed of 45-fs pulses which have been chirped to 1 ps, for which self-steepening is negligible. A series of simulations were conducted in which the input-pulse energy was increased, and the steady-state energy enhancement that was achieved was recorded. The results are presented in Fig. 2 (squares). A linear analysis of the cavity shows that an enhancement of ∼ 180 is expected for a CW laser locked to a particular cavity resonance mode; however, we only see an enhancement of ∼ 170 when low-intensity femtosecond pulses are coupled to the cavity. This reduced enhancement results from the spectral filtering due to the nonuniform spacing between the cavity resonances arising from the residual third-order dispersion. As the intensity increases, we observe that the enhancement factor starts to rapidly decrease from the linear-cavity response. If we calculate the nonlinear phase shift experienced by the pulse after propagating through the AOM, we observe that when the energy enhancement has dropped by a factor of 2, the single-pass nonlinear phase shift is approximately 8 mrad. From the linear analysis of the cavity, we determine that the cavity resonance has a half-width at half-maximum of ∼ 112 kHz. Comparing this to the FSR of 100 MHz, this means that a phase shift of 2π(112 kHz/100 MHz) ∼ 7 mrad in the cavity will shift the resonance by half its linewidth. Hence, we find that the degradation of the cavity enhancement is due to the dynamic change of the effective cavity resonance positions as the intracavity pulse builds up. It is interesting to note, however, that even though the pulse-amplification factor is decreasing as the input pulse energy increases, the total intracavity energy (as can be inferred from the nonlinear phase shift) continues to increase, at least over the range of input-pulse energies that we studied. In order to obtain higher energy enhancement at high intensities, we next considered the effect of coupling a pulse train with its comb structure slightly detuned from the linear cavity resonances. Since the nonlinearities effectively shift the resonance frequencies to lower values, we downshifted f_o of the pulse train by approximately one cavity resonance half-width (−125 kHz). The results are shown as triangles in Fig. 2. As expected, we find that it is possible to obtain increased amplification at higher intensities if the pulse train is slightly detuned from the linear cavity resonances. However, there is a delicate balance in order to see improved enhancement; if too large a detuning is used, the intracavity pulse never reaches a high enough intensity to add a nonlinear shift to the resonance. In addition, the enhancement never achieves the value predicted from a linear analysis of the cavity, because the nonlinear frequency chirp imparted on the pulse always introduces some degree of destructive interference at the input coupler which cannot be compensated.
We also simulated the situation in which the underlying frequency comb of the input pulse train shifts. This occurs, for example, in experiments where the laser is scanned through the cavity resonance, when feedback is applied to the laser to keep the laser and enhancement cavity locked, or when random noise perturbs the laser. As an example, we modeled the limiting case in which a single shift in the frequency comb occurs, for example from a random perturbation to the laser cavity. An intracavity pulse associated with a specific f_o is built up to steady state. We then simulate the perturbation by turning off the original driving pulse train, and a new pulse train with a shifted f_o becomes incident on the cavity. Inside the cavity, we follow the dynamics of two pulses: the original pulse, which decays within the cavity lifetime, and the second pulse, which grows as it is driven by the shifted pulse train. Just as an intense intracavity pulse can shift the cavity resonances it experiences, it can also shift the cavity resonance that a second weaker pulse experiences through cross-phase modulation (XPM). Hence, because of XPM, a highly detuned pulse train can efficiently couple to the cavity. As a specific example, we considered a chirped 1-ps input pulse with a peak intensity of 4.5 × 10^7 W/cm^2 coupled to the cavity. If a pulse train with a detuning δf_o = −550 kHz relative to the cavity resonance is directly coupled to the cavity, an enhancement of ∼ 9 is observed (dotted line in Fig. 3). However, if first a pulse train of δf_o = −300 kHz is coupled to the cavity and then, after reaching steady state, a second pulse train of δf_o = −550 kHz is coupled to the cavity, the highly detuned pulse reaches a maximum enhancement of ∼ 75. On a time scale much longer than the cavity lifetime, though, the pulse gradually decays to the value achieved by directly coupling the pulse train. This effect of the dynamic "memory" of the cavity resonance is analogous to experiments with CW lasers in which a cavity can "self-lock" to the laser frequency due to thermal changes in the intracavity medium [21]. However, because the long decay time in our model is due to an electronic nonlinearity, these transient-behavior features can occur on a much faster time scale and can play an important role in the presence of noise mechanisms. Thus, we have identified three relevant time scales when performing experiments on nonlinear cavities. The shortest time scale is governed by the natural lifetime of the cavity and represents the time for an intracavity pulse to decay if it is no longer driven by an incident pulse train. The intermediate time scale is dominated by the transient behavior of the nonlinear cavity. As can be seen in Fig. 3, this can be considerably longer than the lifetime of the cavity.
For a linearly-responding cavity, a change of the frequency comb requires a time on the order of the cavity lifetime to build up to steady state. In a nonlinear cavity, though, the long transient behavior severely limits how quickly the steady-state behavior (e.g., the cavity linewidth) can be measured. The third relevant time scale corresponds to the time in which the feedback system can make corrections to the laser cavity in order to keep the laser optimally coupled to the nonlinear cavity. Typically, piezo-actuated mirrors have a bandwidth of several tens of kilohertz. For a repetition frequency of 100 MHz, this implies that the feedback system cannot react to changes until ∼ 10^4 pulses are incident on the cavity. The results presented so far have been obtained from chirped input pulses such that a higher enhancement factor is achieved compared to their transform-limited equivalents. However, the higher energies obtained by using chirped input pulses have come at the cost of increased spectral distortion at the output. Even though the center wavelength of the pulse is at the zero GDD of the cavity, there is still a strong net third-order dispersion term. In combination with SPM, very complicated spectral features develop. In Fig. 4, we present numerically calculated spectra of different pulse profiles with the same initial spectral intensity distributions: a 45-fs transform-limited pulse with a peak intensity of 10^9 W/cm^2, the same pulse chirped to 1 ps by a normal dispersion material (+chirp), the same pulse chirped to 1 ps by an anomalous dispersion material (−chirp), and a 45-fs transform-limited pulse with the same peak intensity as the 1-ps chirped pulses. Additionally, we present an experimental spectrum which resulted from input conditions similar to (c). As can be seen, the simulations reproduce the complicated fringe pattern observed in the experiment quite well.
As expected, when using pulses of equal input energy, the highest intracavity pulse energies are obtained with chirped input pulses. This indicates that if the enhancement cavity is used as an amplifier, as done in Ref. 17, then it is advantageous to stretch the input pulse and compress it after it is dumped from the cavity. However, when intracavity peak intensities are important, using transform-limited pulses leads to the highest intensities when compared to a chirped pulse of equal input energy. This is because the increased enhancement of the chirped pulse cannot compensate for its significantly reduced input peak intensity required to achieve the high amplification.
Conclusions
In conclusion, we have presented a detailed study of nonlinear dynamics inside a passive, femtosecond enhancement cavity where peak pulse energies can reach interestingly high levels. We have found that the pulse-energy enhancement is limited by a dynamic shifting of the cavity resonances brought about by the nonlinear phase shift imparted onto the intracavity pulse. We have identified that the transient behavior of the buildup dynamics can persist for times much longer than the natural cavity lifetime when nonlinear elements are introduced into the enhancement cavity, and that the complicated spectral features that were experimentally observed can be explained by the interaction of SPM and GDD.
Fig. 1. (a) Schematic of the enhancement cavity being modeled. The cavity is composed of a 0.9% input coupler (IC), two negative-dispersion mirrors (NGDD), a high reflector (HR), and two curved mirrors (CM) to focus the beam inside the nonlinear medium χ^(3). (b) Numerically calculated free spectral range of the cavity versus wavelength.
Fig. 2. Cavity pulse-energy enhancement as a function of input intensity and the associated nonlinear phase shift. Increased amplification can occur at higher input peak intensities if the comb is detuned from the linear cavity resonances.
Fig. 3. Cavity buildup dynamics. A detuned comb (Comb 2) can temporarily grow to higher amplifications if another pulse (Comb 1) is already present in the cavity, when compared to the absence of the first pulse (dotted line).
Fig. 4. Effect of chirp on the output spectrum. All simulated pulses start with the same input spectrum (a). Two transform-limited pulses are shown in which the peak intensity (d) and the pulse energy (e) are the same as the chirped pulses (b) and (c). The fringe pattern in the output spectrum from experimental observations (f) shows excellent qualitative agreement with the equivalent simulated conditions (c). The observed energy enhancement is also indicated in each panel. | 4,690 | 2005-03-07T00:00:00.000 | [
"Physics"
] |
A Method Based on Curvature and Hierarchical Strategy for Dynamic Point Cloud Compression in Augmented and Virtual Reality System
As a kind of information-intensive 3D representation, the point cloud is rapidly developing in immersive applications, which has also sparked new attention to point cloud compression. The most popular dynamic methods ignore the characteristics of point clouds and use an exhaustive neighborhood search, which seriously impacts the encoder's runtime. Therefore, we propose an improved compression method for dynamic point clouds, based on curvature estimation and a hierarchical strategy, to meet the demands of real-world scenarios. This method includes an initial segmentation derived from the similarity between normals, a curvature-based hierarchical refining process for iterating, and image generation and video compression technology based on de-redundancy without performance loss. The curvature-based hierarchical refining module divides the voxelized point cloud into high-curvature points and low-curvature points and optimizes the initial clusters hierarchically. The experimental results show that our method achieves improved compression performance and a faster runtime than traditional video-based dynamic point cloud compression.
Introduction
Since the twenty-first century, the development of three-dimensional (3D) sensing technology has set off a wave of innovation in Augmented and Virtual Reality (AR/VR) production. In 2007, Google introduced Street View, which allowed users to view and navigate the virtual world, and now there is a complete version that is commercially available [1]. In 2014, Samsung released the Samsung Gear VR for Galaxy smartphones to provide a completely untethered, easy-to-use experience [2]. In 2020, the coronavirus disease catalyzed AR retail and VR conferences. Meanwhile, the use of 3D point clouds to represent real-world scenarios in an immersive fashion for AR/VR systems has become increasingly popular in recent decades. Apple brought the point cloud to mobile devices in 2020 and successfully created a more realistic augmented reality experience [3]. In 2021, Zhongxing Telecom Equipment built the AR Point Cloud Digital Twin Platform and said that the platform had formed a scaled deployment and demonstration effect [4]. Moreover, point clouds have achieved significant success in many areas, e.g., visual communication [5], auto-navigation [6], and immersive systems [7]. Usually, a point cloud comprises its geometric coordinates and various attributes such as color, normal, temperature, and depth. It always comes with a large amount of data, which leads to a heavy transmission burden. The main contributions of this paper are as follows:
1. An innovative curvature-based hierarchical dynamic PCC method is proposed to reduce the time complexity of the refinement operation and even the overall framework;
2. The decision-making methods for the voxel size and the number of iterations are proposed to obtain a good trade-off between compression quality and compression time.
The remainder of this paper is organized as follows.
Following this introduction, we describe the curvature-based hierarchical dynamic PCC method in detail in Section 2. In Section 3, we select eight sequences (Redandblack, Soldier, Longdress, Loot, Basketball_player, Dancer, Ricardo, and Phil) and present the comparison results between the proposed techniques and the classic video-based algorithm. The experiments show that the proposed scheme can reduce the overall runtime by 33.63% on average, with a clear Rate-Distortion (R.D.) performance benefit and an improved subjective effect. Finally, we conclude the paper in Section 4.
The Curvature-Based Hierarchical Dynamic Point Cloud Compression Method
In the past decades, the video-based approach has been proven to be a low-cost solution for promoting point cloud applications. This paper first utilizes the normal similarity between real points and pre-oriented elemental planes to segment the point cloud frame. Next, the curvature characteristic is used to hierarchically optimize the clusters of the first step. Then, these refined clusters are packaged into images, such as a geometric image and a color image. Finally, all images can be compressed with an existing versatile video codec. Since the latter two steps follow the current literature, the proposed method can be summarized in the following three main steps: initial segmentation, curvature-based hierarchical refinement, and post-processing (including image generation and video-based coding).
Initial Segmentation
The premise of using 2D video coders is that the point cloud model is mapped to a 2D plane in a simple but efficient way. Projecting the model onto the six planes of its bounding box is given priority.
More precisely, we first calculate the normal vectors of all points in the point cloud. Then, the point cloud is divided into six basic categories using the six planes of the predefined unit cube, which can be represented by the normal vectors (1, 0, 0), (0, 1, 0), (0, 0, 1), (−1, 0, 0), (0, −1, 0), (0, 0, −1), as shown in Figure 1. The segmentation criterion is the similarity of the normal vectors between the real point and the orientation plane, i.e., maximizing the dot product of the point's normal vector and the plane's normal vector. The corresponding pseudo-code is given in Algorithm 1; a simplified sketch of this assignment is also given below. Segmentation aims to find patches that are time-coherent, have low distortion, and are convenient for dimensionality reduction. Maximizing temporal coherence and minimizing distance and angle distortion helps the video encoder to fully use the spatiotemporal correlation of the point cloud geometry and attribute information. However, the previous work does not guarantee a good reconstruction effect due to auto-occlusions. To avoid this, we then refine the clustering results.
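The following sketch mirrors the dot-product rule just described: each point is assigned to the axis-aligned projection plane whose normal maximizes the dot product with the point's normal. It assumes the normals have already been estimated and is not the paper's Algorithm 1 itself.

```python
# Simplified sketch of the initial segmentation rule: assign each point to the
# plane whose normal maximizes the dot product with the point's (unit) normal.
import numpy as np

ORIENTATIONS = np.array([[ 1, 0, 0], [0,  1, 0], [0, 0,  1],
                         [-1, 0, 0], [0, -1, 0], [0, 0, -1]], dtype=float)

def initial_segmentation(normals):
    """normals: (N, 3) unit normal vectors -> (N,) plane index in [0, 5]."""
    scores = normals @ ORIENTATIONS.T          # (N, 6) dot products
    return np.argmax(scores, axis=1)

# toy usage with random unit normals
rng = np.random.default_rng(0)
normals = rng.normal(size=(1000, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
labels = initial_segmentation(normals)
print(np.bincount(labels, minlength=6))        # points per projection plane
```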
Curvature-Based Hierarchical Refinement
The inputs of the refinement module are the clustered geometric coordinates and attributes, and we aim to obtain clusters that meet the requirements of image or video compression technology, such as smooth borders and less occlusion. Patches with soft edges are highly effective in the succeeding geometric and attribute filling parts, which inspired us to consider adjacent points in the later refining process. Less occlusion means avoiding, as far as possible, the projection of different points onto the same location by controlling the projection direction. More collisions cause more information loss, so the normal vector of the projection plane is also deemed a considerable factor.
Figure 2 provides an overview of the curvature-based hierarchical refinement, which is a friendlier refining solution to reduce computational complexity and runtime and does not alter the bitstream syntax and semantics or the decoder behavior. The main steps cover the partition of the geometric coordinate space, the neighborhood information search, and clustering based on scores. It should be noted that the neighborhood information search comprises the calculation of curvature and the hierarchical search; the final score is derived from the normal vector score, the voxel-smooth score, and the smooth score.
Partitions of Geometric Coordinate Space
Even if only ten neighbors are searched for each point, updating the clusters for a whole cloud containing hundreds of thousands of points is laborious. As a result, it has been suggested that the refining procedure be simplified by adding a voxel-based constraint to the neighborhood search [22]. Inspired by this, we first use uniform voxels to divide the geometric space, then perform a neighborhood search on the voxelized point cloud instead of the entire model. The first traversed point in each voxel is selected to identify the voxel, instead of the geometric center point, because the following calculations are performed on integers. Specifically, the coordinates of the identifying point and of the contained points are stored for each voxel. In the searched nearest-neighboring filled voxels, all interior points that meet the distance limit are regarded as the neighborhood information of the current voxel.
The larger the voxel size, the fewer points need to be considered in each neighborhood search and the fewer identification points are explored, which means that the difference between the searched neighbors and the actual situation is more significant, as shown in Figure 3. A simplified sketch of this voxel partition is given below.
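The sketch below buckets points into uniform voxels and records, for each filled voxel, the first traversed point as its identification point, as described above. The voxel size is a free parameter; the data used here are random placeholders.

```python
# Simplified sketch of the uniform voxel partition: points are bucketed by integer
# voxel index, and the first point seen in each voxel serves as its identification point.
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size):
    """points: (N, 3) array -> dict voxel_index -> {'id_point': idx, 'members': [idx, ...]}"""
    voxels = defaultdict(lambda: {"id_point": None, "members": []})
    keys = np.floor(points / voxel_size).astype(int)
    for idx, key in enumerate(map(tuple, keys)):
        cell = voxels[key]
        if cell["id_point"] is None:
            cell["id_point"] = idx          # first traversed point identifies the voxel
        cell["members"].append(idx)
    return voxels

# toy usage
pts = np.random.default_rng(1).uniform(0, 100, size=(5000, 3))
grid = voxelize(pts, voxel_size=8.0)
print(len(grid), "filled voxels for", len(pts), "points")
```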
Neighborhood Search for Each Filled Voxel
The above partition idea provides convenient conditions for the neighborhood search. However, redundant calculations may occur if all regions are considered equivalent and a fixed search radius is used, as shown in Figure 4. If different clusters are marked with diverse colors, it can be found that there are plenty of points belonging to the same group in the chest and abdomen of the Loot point cloud. Where the clustering index is updated iteratively, the renewal results obtained with various search radii are consistent, because all neighbors have identical indexes. Accordingly, it is reasonable to use a smaller search radius for some local regions.
Considering that the update of the clustering index is related to the neighbors and inseparable from the normal vectors, the curvatures originating from the normal vectors of local surfaces are selected to determine the search radius. Points of low curvature, located on a virtually flat surface, need only a few neighbors to reflect the index-change trend. By comparison, points with high curvature, whose normals differ markedly from those of their neighbors, call for a more comprehensive neighborhood search to prevent the normals from unduly influencing the final result, as shown in Figure 5. Within a cluster, a small-scale neighborhood search increases the number of patches for high-curvature regions but does not affect low-curvature parts. At the cluster boundary, by contrast, a small-scale neighborhood search results in patches with sharp edges for high-curvature areas but has little impact on low-curvature areas. Therefore, the search for nearest-neighboring filled voxels in this paper is performed based on curvature grading to reduce the amount of calculation: low-curvature zones implement a small-scale neighborhood search, while high-curvature zones conduct a large-scale neighborhood search.
Figure 5. The clustering results obtained by searching with various radii in a specific area (a1 is a low-curvature area, while a2 is a high-curvature area).
We apply the local surface fitting method to calculate curvatures. Firstly, principal component analysis is used to estimate normal vectors. Then, a minimum spanning tree is used to orient the normal vectors. The normal vector of i P is used as the vertical axis to establish a new local coordinate system. Finally, a quadric surface fitting is performed on the local coordinate system, and its fitted surface parameters are used to estimate the curvature at i P, as where 1 K and 2 K are the two principal curvatures of i P, respectively.
The predictions of curvatures are based on the identification points instead of the entire cloud model, as the number of voxels is far less than the number of actual points. In Figure 1, the curvatures histogram of the identification points shows that only a few individuals had a relatively high curvature as most areas had excellent flatness. Conse- Figure 5. The clustering results obtained by searching with various radii in a specific area (a1 is a low-curvature area, while a2 is a high-curvature area).
We apply the local surface fitting method to calculate curvatures. Firstly, principal component analysis is used to estimate normal vectors. Then, a minimum spanning tree is used to orient the normal vectors. The normal vector of P_i is used as the vertical axis to establish a new local coordinate system. Finally, a quadric surface is fitted in this local coordinate system, and the fitted surface parameters are used to estimate the curvature at P_i from its two principal curvatures K_1 and K_2. The curvatures are predicted for the identification points instead of the entire cloud model, as the number of voxels is far less than the number of actual points. In Figure 1, the curvature histogram of the identification points shows that only a few points have relatively high curvature, since most areas are nearly flat. Consequently, all identification points are classified into low-curvature points and high-curvature points according to the low-curvature determination ratio, as shown in Figure 6. The neighborhood information can then be estimated hierarchically to reduce complexity.
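As an illustration only, the following is a minimal sketch of this curvature-grading pipeline, not the authors' implementation: the neighborhood size k, the use of the mean-curvature magnitude as the grading value, the omission of the MST-based normal orientation step, and all function names are assumptions made for the example.

```python
# Minimal sketch: PCA normals, quadric fitting, and curvature grading of
# voxelized identification points. `points` is an (N, 3) float array.
import numpy as np
from scipy.spatial import cKDTree


def estimate_normals(points, k=16):
    """PCA normal estimation: the direction of the smallest singular value of
    the centered neighborhood approximates the surface normal."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbh = points[nbrs] - points[nbrs].mean(axis=0)
        _, _, vt = np.linalg.svd(nbh, full_matrices=False)
        normals[i] = vt[-1]                 # MST orientation step omitted here
    return normals


def estimate_curvature(points, normals, k=16):
    """Fit z = a*x^2 + b*x*y + c*y^2 in a local frame whose z-axis is the
    normal of P_i, then combine the two principal curvatures (assumed here
    to be summarized by the mean-curvature magnitude)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    curvature = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        n = normals[i]
        u = np.cross(n, [1.0, 0.0, 0.0])
        if np.linalg.norm(u) < 1e-6:        # normal nearly parallel to x-axis
            u = np.cross(n, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(n, u)
        local = (points[nbrs] - points[i]) @ np.stack([u, v, n]).T
        x, y, z = local[:, 0], local[:, 1], local[:, 2]
        A = np.stack([x * x, x * y, y * y], axis=1)
        (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
        k1, k2 = np.linalg.eigvalsh([[2 * a, b], [b, 2 * c]])
        curvature[i] = 0.5 * abs(k1 + k2)   # mean-curvature magnitude
    return curvature


def grade_by_curvature(curvature, low_curvature_ratio=0.92):
    """Split points into low- and high-curvature classes by quantile ratio."""
    threshold = np.quantile(curvature, low_curvature_ratio)
    return curvature <= threshold           # True = low-curvature point
```

In use, the boolean mask returned by `grade_by_curvature` would decide which points receive the small search radius and which receive the large one.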
Clustering Based on Final Score
For points with different curvature grades, different search radii are used. All identification points are traversed, and the nearest neighboring voxels of each identification point are stored in a vector and used to compose the final score.
The normal vector score, which refers to the influence of the normal vectors on the clustering index, must be considered first to obtain a patch with less occlusion. The six projection planes mentioned above are also used to estimate the normal vector score in the refining process, according to scoreNormal[i][p] = normal[i] · orientation[p], where normal[i] is the normal vector of the i-th point and orientation[p] is the normal vector of the p-th projection plane. The projection results for different planes are shown in Figure 7. The greater the normal vector score, the fewer points are projected to the same position and the fewer collisions occur. Therefore, the normal vector scores must be calculated in each iteration.
The neighbors of each point also affect its clustering index so as to avoid uneven boundaries; this influence of adjacent points on the final clustering index is captured by the smooth score. If the number of neighbors of P_i is numNeighbors_i, the amount of computation for the smooth scores is Σ_{i=1}^{N} numNeighbors_i, where N is the number of identification points. The evaluation of the smooth scores is undoubtedly demanding, but it can be simplified by accumulating voxel-smooth scores thanks to the partition of the geometric coordinate space. The voxel-smooth score of each filled voxel with respect to each projection plane is computed first, by counting the number of points in the voxel that are clustered to that projection plane during the refining process. The smooth score can then be defined as scoreSmooth[v][p] = Σ_j voxScoreSmooth[nnFilledVoxels[v_j]][p], where v is the index of the voxel containing the i-th point, p is the projection plane index, nnFilledVoxels[v_j] is the j-th neighboring voxel, and voxScoreSmooth is the set of voxel-smooth scores of all the adjacent voxels. Hence, the smooth score is identical for all points inside a voxel. The normal vector score and the smooth score are combined into the final clustering index through a weighted linear combination, score[i][p] = scoreNormal[i][p] + λ · scoreSmooth[v][p] (Equation (6)), where λ is the influence coefficient of the smooth score on the final score, and its value is specified in [30]. After each point is clustered to the projection plane with the highest final score calculated by Equation (6), one cluster update is completed.
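The sketch below illustrates how one such cluster update could be organized; it is not the reference implementation, and the array layout, the neighbor lists, the function name, and the value of λ are assumptions made for the example.

```python
# Illustrative sketch of one refinement pass combining the normal vector
# score and the voxel-accumulated smooth score.
# Assumptions: `normals` (N, 3); `orientations` (P, 3), one row per plane;
# `point_voxel[i]` is the voxel index of point i; `voxel_neighbors[v]` lists
# the nearest neighboring filled voxels of voxel v; `cluster[i]` is the
# current plane index of point i; `lam` is an illustrative weight.
import numpy as np


def refine_once(normals, orientations, point_voxel, voxel_neighbors,
                cluster, num_voxels, lam=3.0):
    n_points, n_planes = len(normals), len(orientations)

    # Normal vector score: inner product of point normal and plane orientation.
    score_normal = normals @ orientations.T                   # (N, P)

    # Voxel-smooth score: per voxel, count the points currently clustered to
    # each plane (identical for every point inside the same voxel).
    vox_score = np.zeros((num_voxels, n_planes))
    np.add.at(vox_score, (point_voxel, cluster), 1.0)

    # Smooth score: accumulate the voxel-smooth scores over the neighboring
    # filled voxels of the voxel that contains each point.
    smooth = np.zeros((n_points, n_planes))
    for i in range(n_points):
        for v in voxel_neighbors[point_voxel[i]]:
            smooth[i] += vox_score[v]

    # Weighted linear combination and cluster update (cf. Equation (6)).
    final = score_normal + lam * smooth
    return final.argmax(axis=1)                                # new cluster
```

Because the smooth score is shared by all points of a voxel, the inner accumulation runs over voxels rather than raw points, which is where the complexity saving of the voxel partition comes from.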
In the refining process, the total number of loops is proportional to maxNumIters × numPlanes × Σ_{i=1}^{N} numNeighbors_i, where maxNumIters is the maximum number of iterations and numPlanes is the number of projection planes in the refinement. A small-scale neighborhood search is used for most points in the proposed method, so numNeighbors_i is lower for most points than in the video-based method. Therefore, the total number of loops diminishes, which successfully reduces the computational complexity.
In summary, the pseudo-code for refinement is Algorithm 2:
Post-Processing: Image Generation and Video-Based Coding
Consistent with classical video-based methods, the connected component extraction algorithm is applied to extract the patches obtained by the curvature-based hierarchical refinement, and the connected components are then mapped to a 2D grid. The mapping process needs to minimize the unused 2D area, and each grid cell belongs to only one patch. The geometric information of the point cloud is stored in the grid to generate the corresponding 2D geometric image. Similarly, the attribute image can also be easily obtained. To better handle the case of multiple points being projected to the same pixel, a connected component can be projected onto more than one image.
Finally, the generated images are stored as video frames and compressed using the video codec according to the configurations provided as parameters. Details are available in [21,30].
Test Materials and Conditions
We carried out many tests using dynamic point clouds captured by RGBD cameras. The four sequences Redandblack, Soldier, Longdress, and Loot in the MPEG 8i dataset [31] are complete point clouds with a voxel size close to 1.75 mm and a texture-map resolution of 1024 × 1024; the two sequences basketball_player and dancer in the Owlii dataset [32] are complete point clouds with a texture-map resolution of 2048 × 2048; Ricardo and Phil, two further sequences in the Microsoft Voxelized Upper Bodies dataset [33], are frontal upper-body point clouds with a voxel size close to 0.75 mm and a texture-map resolution of 1024 × 1024, as shown in Figure 8. A point cloud is a set of points (x, y, z) constrained to lie on a regular 3D grid; in other words, it can be regarded as an integer lattice. The geometric coordinates may be interpreted as the address of a volumetric element, or voxel. The attributes of a voxel are the red, green, and blue components of the surface color. Note that each sequence uses 32 frames for experimentation and comparison.
The most popular scheme, V-PCC, is selected as the comparison baseline to analyze the advantages and disadvantages of the proposed method. For a better evaluation of the R.D. quality, the Bjontegaard Delta-Rate (BD-rate) and Bjontegaard Delta Peak Signal-to-Noise Ratio (BD-PSNR) metrics [34] are calculated, which make it possible to compare different compression solutions over several rate-distortion points. The PSNR, which reports the distortion values, is calculated as PSNR_color = 10 log10((p_color)^2 / MSE) and PSNR_geometry = 10 log10(3p^2 / MSE), where p and p_color are the peak constant values for the geometric and color distortions of each reference point cloud, respectively, and MSE is the mean squared error.
Based on this, the BD-rate is defined as the average difference between the area integral of the lower curve over the integration interval and that of the upper curve over the same interval, BD-rate = 10^Δ − 1 with Δ = [1/(D_H − D_L)] ∫_{D_L}^{D_H} (r_2 − r_1) dD, where r = a + bD + cD^2 + dD^3, r = log(R), R is the bitrate; a, b, c, and d are fitting parameters; D is the PSNR; D_H and D_L are the high and low ends of the interval, respectively; and r_2 and r_1 are the two fitted R.D. curves. A negative BD-rate indicates that the encoding performance of the optimized algorithm has improved. On the other hand, the BD-PSNR expresses the gain in objective quality at the same rate, given as BD-PSNR = [1/(r_H − r_L)] ∫_{r_L}^{r_H} (D_2(r) − D_1(r)) dr, where D = a + br + cr^2 + dr^3, and r_H, r_L, D_2(r), and D_1(r) are the highest logarithm of the bitrate, the lowest one, the original curve, and the compared curve, respectively. The larger the BD-PSNR, the better the proposed algorithm. Furthermore, to preserve fairness, the experiments strictly follow the common test conditions for dynamic PCC provided by MPEG [24].

Table 1 provides the BD-rate, BD-PSNR, and runtime savings of V-PCC with a voxel size of four compared with a voxel size of two. As the voxel size increases, the encoder's runtime is reduced by 53.73% on average, but the geometric and color quality suffers a severe loss: the D1 bitrate increases by an average of 2.78%, while the Y bitrate rises by an average of 3.32%. MPEG explains that a large voxel size is more suitable for real-time applications because high-precision reconstruction is not required in that case. However, we aim to improve the real-time capability of compression schemes while ensuring high quality. Therefore, we focus on reducing complexity while retaining quality, based on a voxel size of two. Figure 9 describes the impact of the maximum number of iterations on the geometric performance, color performance, and runtime. The results show that although the number of iterations increased eight-fold, D1-PSNR increased by less than 0.1%. In terms of color, the compression result after ten iterations was not the worst, and the outcome after 90 iterations was not the best. Meanwhile, the time cost steadily increased. In summary, increasing the number of iterations has little effect on geometric performance, yields unstable attribute optimization, and costs time. Consequently, we suggest directly lessening the value given in [24], which is 10.
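For reference, the Bjontegaard metrics above can be evaluated with a cubic fit in the log-rate domain roughly as sketched below; the variable names and the choice of numpy's polynomial utilities are assumptions of this example rather than part of the paper.

```python
# Hedged sketch of BD-rate and BD-PSNR computation (Bjontegaard metrics).
# R1, D1: bitrates and PSNRs of the reference codec; R2, D2: of the test
# codec. Four rate-distortion points per codec is the typical input.
import numpy as np


def bd_rate(R1, D1, R2, D2):
    """Average bitrate difference (%) of curve 2 relative to curve 1."""
    r1, r2 = np.log10(R1), np.log10(R2)
    p1 = np.polyfit(D1, r1, 3)                 # r = a + bD + cD^2 + dD^3
    p2 = np.polyfit(D2, r2, 3)
    d_lo, d_hi = max(min(D1), min(D2)), min(max(D1), max(D2))
    int1 = np.polyval(np.polyint(p1), d_hi) - np.polyval(np.polyint(p1), d_lo)
    int2 = np.polyval(np.polyint(p2), d_hi) - np.polyval(np.polyint(p2), d_lo)
    avg_diff = (int2 - int1) / (d_hi - d_lo)   # average log-rate difference
    return (10 ** avg_diff - 1) * 100          # negative = bitrate saving


def bd_psnr(R1, D1, R2, D2):
    """Average PSNR difference (dB) of curve 2 relative to curve 1."""
    r1, r2 = np.log10(R1), np.log10(R2)
    p1 = np.polyfit(r1, D1, 3)                 # D = a + br + cr^2 + dr^3
    p2 = np.polyfit(r2, D2, 3)
    r_lo, r_hi = max(min(r1), min(r2)), min(max(r1), max(r2))
    int1 = np.polyval(np.polyint(p1), r_hi) - np.polyval(np.polyint(p1), r_lo)
    int2 = np.polyval(np.polyint(p2), r_hi) - np.polyval(np.polyint(p2), r_lo)
    return (int2 - int1) / (r_hi - r_lo)       # positive = quality gain
```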
The Low-Curvature Ratio and Search Radius
According to the common test condition, the search radius for high-curvature points is set to 96. The other radius suitable for low-curvature points needs to be further analyzed, as shown in Figures 10 and 11. We used two types of point clouds for experiments.
The R.D. performance improves when the radius decreases slightly but declines sharply with a further decrease. Even when the radius is less than 25, both the geometry and the color information show a huge loss. Regarding time consumption, when the radius is greater than 36, the saving rate of the encoder's runtime is no more than 30%. Hence, radii equal to 25 or 36 are substituted into the analysis of the low-curvature ratio, as shown in Figure 10. The radius of 25 is outstanding in time but poor in quality. On the contrary, no matter what the low-curvature ratio is, the compression result of the encoder with a radius of 36 is satisfactory. Considering that the ultimate goal is to reduce the runtime, the low-curvature ratio is set to 0.92 and the search radius for low curvature is set to 36.
R.D. Performance Evaluation
All point clouds show a similar trend in their R.D. curves. Figure 12 provides a detailed performance description of Redandblack on geometry and color. The curvature-based hierarchical method performs like V-PCC at a low bpp, but is less costly and of better quality at a high bpp. It is undeniable that the proposed method is valid for the dynamic condition.
From Tables 2 and 3, our method is much better than V-PCC with a voxel size of four in both geometric and color performance. However, there is also an average 50.52% increase in runtime due to the reduction of the voxel size, which increases the complexity of the neighborhood search and of the determination of the final score. This is negligible in most applications that do not have extreme real-time constraints. At the same time, the comparison with a voxel size of two shows that the proposed approach clearly improves performance and efficiency, saving an average of 33.63% of the total time. After the tradeoff between real-time performance and accuracy, our method achieves a clear quality improvement and, in most cases, shortens the encoder's runtime. Besides, the visual effects of our method, compared with V-PCC, are demonstrated in Figures 13-15. Clearly, the point clouds compressed with a voxel size of four are generally unsmooth and have apparent cracks. The point clouds compressed with a voxel size of two outperform those with a voxel size of four, but some cracks remain. The point clouds produced by our approach are closest to the original point clouds. Therefore, the method proposed in this paper achieves better visual effects than V-PCC, consistent with the results obtained from the analysis of R.D. performance described earlier.
Conclusions
In this paper, we proposed an improved dynamic PCC method based on curvature estimation and a hierarchical strategy to reduce the runtime of the video-based compression scheme and obtain an apparent quality gain. Firstly, the proposed method segments the original data into six primary clusters using normal similarity. Secondly, we suggested a curvature-based hierarchical refining approach to optimize the clusters. Finally, image generation technology and a video codec were used to map the point cloud to 2D images and compress them.
The specific flow of the curvature-based hierarchical method begins with generating voxelized identification points by partitioning the geometric coordinate space. Next, the identification points are classified into low-curvature points and high-curvature points. Then, the neighboring voxels and final scores are estimated hierarchically. Last, each point converges to the cluster associated with the highest score to obtain patches with smoother boundaries and fewer repeated points.
The experimental results show that the proposed scheme saves 33.63% of the compression time on average, with clear R.D. performance benefits and an improved subjective effect, and is suitable for most AR/VR applications. However, the curvature-based hierarchical method uses only the characteristics of the geometric space. In future work, we will consider using the properties of the attribute space to improve compression quality.
"Computer Science"
] |
An ECM-ISC based Study on Learners’ Continuance Intention toward E-learning
E-learning has been developing rapidly in recent years; however, as student numbers and the market scale grow fast, there is also a growing concern over the issue of high dropout rates in e-learning. A high dropout rate not only harms both education institutions and students, but also jeopardises the development of e-learning systems. Understanding the behavioural mechanism of students' continuance learning in online programmes would be helpful for reducing the dropout rate. In order to explain students' continuance intention toward e-learning, this study combines theories from the fields of information management and pedagogy. By adding two constructs, namely academic integration and social integration, from the theories of dropout as antecedent variables, an improved ECM-ISC model for e-learners' continued learning intention is put forward. Results from structural equation modelling (SEM) demonstrate a stronger explanatory power of the new model. Based on the results of the empirical analyses, corresponding suggestions are proposed at the end of this paper.
INTRODUCTION
With the continuous development in communications and information technologies, the e-learning industry has been expanding due to its unique characteristic of being unconstrained by time or geographical limits. According to the Education Factbook published in 2012 by GSV Asset Management, an American education advisory group, the scale of the online learning industry reached US$90.9 billion in 2012 and was forecast to grow at a compound annual growth rate of 25% in the next five years, reaching US$150 billion in 2017. Existing statistics have shown that the proportion of American higher education students enrolling in at least one online course increased from 9.6% in 2003 to 33.5% in 2013, and the total number of learners is estimated at 710 million [1]. While e-learning has been flourishing, the issue of low student retention rates has become increasingly prominent. Data from a survey conducted by Duke University showed that the dropout rate of the Massive Open Online Course (MOOC) was up to 90% [2], and the dropout rate of the Open University, a well-known online education institution in the UK, was up to 78% [3]. The report on the development of online education in the United States stated that the high dropout rate is a primary obstacle for the future growth of online education. Retaining online learners is more difficult than retaining learners in traditional face-to-face education [1]. High dropout rates not only increase the operation costs of education institutions, but also waste the time and money that the learners have previously invested. Widespread dropout will harm the future development of the online learning market. Therefore, understanding the mechanisms determining students' decisions to continue or discontinue their studies is crucial in taking the necessary steps to reduce dropout rates.
E-learners can be seen as consumers of educational products from education institutions. A few studies have adopted a consumer psychology approach and used students' satisfaction towards online learning to explain students' continuance intention toward e-learning. Other studies have also applied models from the field of information management, such as the Technology Acceptance Model (TAM) and the Expectation Confirmation Model for Information Systems Continuance (ECM-ISC), to examine students' continuance intention toward e-learning from the perspective of students' adoption or continuation of e-learning related information systems (IS). Another major line of research uses dropout theories or dropout models in pedagogy to analyse students' dropout behaviour. Given that students of e-learning are different from typical consumers in the market, while the learning information system is only the core part of e-learning, using a single perspective to explain students' continuance intention is inevitably biased. This study attempts to explain students' continuance intention toward e-learning by treating students' satisfaction as a core factor and ECM-ISC as the theoretical basis, while also introducing two important constructs, namely academic integration and social integration, from pedagogical dropout theories into the analytical model. Subsequent sections of this article are organized as follows: (1) review of relevant existing theories and research, (2) construction of a conceptual model of continuance intention among e-learners based on the analyses of existing empirical data, (3) explanation of questionnaire design and data collection, (4) testing of the conceptual model using structural equation modelling (SEM), (5) summary and suggestions.
A. Dropout Theory
In pedagogy, dropout describes the behaviour where learners discontinue learning. The issue of dropout has been extensively studied by pedagogical scholars. Tinto [4] argues that the congruence of goal commitment and institutional commitment determines learners' dropout behaviour. When the two variables are incongruent, dropouts might occur. Goal commitment and institutional commitment change along students' learning process, and these changes are influenced by academic integration and social integration. Academic integration involves learners' academic performance, and social integration involves the interaction between a learner and the learning environment. Based on the concepts above, Tinto proposed a theoretical model of student dropout (Figure 1). Based on Tinto's theory, Kember elaborated that dropout/continuance decisions of students could be understood as the result of cost-benefit analyses. That is, when the costs are greater than the benefits of continued learning, the student would drop out; otherwise, the student would continue learning [5]. Rovai pointed out, in an integrated model for explaining continuance decisions in distance learning, that pre-admission personal attributes (i.e. age, ethnicity, and gender), as well as post-admission internal factors (i.e. academic integration and social integration) and external factors (i.e. economic situation, number of working hours, family support, and life crises), would influence students' decisions to continue learning [6]. Therefore, pedagogical research treats academic integration and social integration as important factors to explain the decisions by students to continue (or discontinue) learning. At the same time, there is an emphasis on the concept that students' dropout decisions result from some form of congruence matching or comparison.
B. Theories on the Continued Use of Information Systems
Adoption of information technology and IS continuance are topics that have interested scholars in the field of information management. Building upon the Theory of Reasoned Action (TRA), Davis proposed a model of information technology acceptance [7]. The model suggests that perceived usefulness and perceived ease of use are two critical factors affecting users' adoption of information technology, and it has often been used to explain users' behaviours in the early stages of information technology usage. In contrast, the Expectation Confirmation Model for Information Systems Continuance (ECM-ISC) proposed by Bhattacherjee focused on understanding users' continuance behaviour during the process of IS usage (Figure 2) [8]. The ECM-ISC model was proposed based on the expectation-confirmation theory and a paradigm of 'expectation - confirmation - satisfaction - intention'. As suggested by this model, a user's initial expectation and degree of confirmation after use will influence their level of satisfaction, while a user's continuance intention depends on their satisfaction towards the IS and its perceived usefulness. E-learners use IS, such as learning management systems, to perform learning activities. Hence, to a certain extent, students' continued learning behaviour is equivalent to their IS continuance behaviour. Following this logic, some scholars have applied ECM-ISC in research on continued learning behaviour in e-learning. Alraimi used ECM-ISC as the foundation of research in order to further examine factors increasing individual learners' continuance intention in MOOCs [9]. Empirical analyses showed that aside from the correlation between perceived usefulness and satisfaction, all other original paths in the model were supported. In addition, Chow expanded the existing ECM-ISC by adding four factors of post-adoption expectation (learning process, teacher interaction, peer interaction, and course design) between expectation-confirmation and satisfaction. The results showed that the model had a stronger explanatory power on e-learners' continuance behaviour [10].
III. RESEARCH HYPOTHESES AND CONCEPTUAL MODEL
To investigate the factors influencing students' e-learning continuance and the correlations among the factors, this study introduced pedagogical theories to expand the existing ECM-ISC model. According to the ECM-ISC theory, users' IS continuance is similar to consumers' repeat consumption in the business world, in that both are determined by the level of satisfaction [11,12]. If expectation is confirmed in the early stage of IS use, then the user will be more satisfied and the level of perceived usefulness will also increase. A user's perceived usefulness of an IS will affect their level of satisfaction and IS continuance intention. The five hypotheses in ECM-ISC have been supported by a large number of empirical studies, whose topics have not been limited to e-learning, but have also involved the issues of IS continuance in other areas [13,14]. The current study proposes the following five hypotheses according to ECM-ISC: H1: The higher the student's level of expectation-confirmation toward e-learning, the higher their level of perceived usefulness towards e-learning.
H2: The higher the student's level of expectation-confirmation toward e-learning, the higher their level of satisfaction toward e-learning.
H3: The higher the student's level of perceived usefulness toward e-learning, the stronger their continuance intention toward e-learning.
H4: The higher the student's level of perceived usefulness toward e-learning, the higher their level of satisfaction toward e-learning.
H5: The higher the student's level of satisfaction toward e-learning, the stronger their continuance intention toward e-learning.
According to Tinto's dropout theory, academic integration was defined as students' grade performance and intellectual development in the learning process. Grade performance is related to how students fulfil the explicit standards of the education institutions, while intellectual development is related to whether students approve the rules within the academic system. Poor grade performance and difficulty in gaining intellectual development will cause students to feel that they do not fit into the learning environment. Thus, the student would not see the value of learning, often resulting in dropout [4]. As a large proportion of e-learners are part-time learners (with regular jobs), we believe that academic integration should take into account how online learning improves students' vocational skills. Students enroll in online learning with the intention to acquire knowledge and improve their skills. Hence, grade performance and intellectual development would affect a student's level of expectation-confirmation. The above analyses led to the following two hypotheses: H6: The higher an e-learner's level of academic integration, the higher their level of perceived usefulness.
H7: The higher an e-learner's level of academic integration, the higher their level of expectation-confirmation.
In Tinto's dropout model, social integration was defined as how a student interacts with the learning-related environment during the learning process. Multiple factors, such as education institutions, teachers, and students, form an academic social system. Likewise, since working students exist within a larger social system, the interactions between their living and work environments and their participation in learning should be included in social integration. Jung et al. found in their study that promoting positive interactions between online learners and their learning environment would raise the students' satisfaction level [15]. Baker noted that the relationship between teachers and students would affect how satisfied students are with the school [16]. Based on the above analyses, we propose the following hypothesis: H8: The higher an e-learner's level of social integration, the higher their level of satisfaction toward e-learning.
By combining the eight research hypotheses proposed above, we propose a conceptual model of learners' continuance intention toward e-learning, which is illustrated in Figure 3.
A. Questionnaire Design and Scale Development
The questionnaire used in this study comprises three parts: personal background information, variable measurement scales, and open questions. Personal background information was used to collect the participants' basic information, such as age and gender. To ensure the reliability of the data and protect the participants' privacy, sensitive information was not gathered in this part of the questionnaire. In the section of open questions, participants were asked about their reasons for continuing (or discontinuing) online learning. Open questions were added to provide additional qualitative analyses alongside the quantitative analyses.
The measurement scales of the variables were mainly based on existing studies in the areas of IS continuance and student dropout, and were modified according to the research context and characteristics of online learning. A few of the scales were developed by our research group according to the research topic. The measurement scales for academic integration and social integration were established based on Tinto's [4] and Kember's [5] definitions and descriptions of the two concepts, while also referring to Pascarella's [17] measurement tool. As many online learners were working students, variables measuring the learning characteristics of working students were added to measure the two concepts precisely. The scale on perceived usefulness was created with reference to Davis' [7] and Bhattacherjee's [8] measurement tools. Bhattacherjee's scale on expectation-confirmation was adopted, as well as part of the scale on satisfaction level [8]. The scale on continued learning intention was based on Bhattacherjee's scale, but the reverse-coded item was removed. The instruments were developed using a five-point Likert scale, specifically 'Strongly Disagree', 'Disagree', 'Neither Agree nor Disagree', 'Agree', and 'Strongly Agree', scored from 1 to 5 to indicate the degree of agreement.
B. Sample Collection
The study participants were online learners of the Open University of Sichuan. The Open University of Sichuan is the largest online institution in southwestern China, which has approximately 190,000 students currently enrolled. The questionnaires were distributed online. Aside from the third section for open questions, which was optional, all other questions were compulsory. To improve the response rate, the link to completing the questionnaire was placed at an extremely visible position on the online learning platform. Students had to go through user authentication before filling in the questionnaire, thus improving the reliability of the source of information. Incentives were given to encourage respondents in completing the study; respondents were told that upon the completion of the questionnaire, there was a possibility that they could win some free mobile call credit. This method increased the willingness to participate in the study and the quality of the collected data.
Among the 1,458 collected samples, questionnaires that were completed in a very short amount of time or had sequential repetitive answers were excluded; the final number of valid questionnaires was 1,347. In terms of the demographics of the respondents: for gender distribution, 50.2% (N=676) were male and 49.8% (N=671) were female; for age, 7.2% (N=97) were aged under 20 years old, 47.4% (N=638) were aged between 21 and 30 years old, 33.9% (N=457) were aged between 31 and 40 years old, 11.1% (N=150) were aged between 41 and 50 years old, and 0.4% (N=5) were aged 50 years old or above. The age and gender distributions were consistent with the reported demographics of other published research in the field of online education studies, hence the sample in this study is representative.
A. Reliability Analyses
Reliability analysis is used to determine the level of reliability of the scales, which includes internal and external reliability. Internal reliability analysis examines whether all items designed to measure the same construct express descriptions with the same meaning. External reliability analysis measures whether a questionnaire yields stable results when it is answered by different people at different times. This study used Cronbach's α to assess the internal reliability of the scales, and α > 0.7 is considered an acceptable level of internal consistency [18]. Results of the reliability analyses using SPSS 18.0 are shown in Table 1. The values of α ranged from 0.746 to 0.934 for individual variables, thus fulfilling the α > 0.7 requirement and indicating that the scales of the questionnaire had relatively high internal reliability. Removal of any of the items did not increase the corresponding α of a variable; therefore, no elimination was needed for any of the measured items in the scale. In terms of external reliability, as the majority of the scales used in the present study were based on existing literature, there is relatively high stability. Furthermore, it is difficult and costly to conduct multiple questionnaires among the same study sample pool, thus it was assumed that the scales had a good level of external reliability and no further tests were conducted.
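For reference, a minimal sketch of how Cronbach's α can be computed for one scale from item-level responses is given below; the data layout (respondents in rows, items in columns) and the variable names are assumptions of this example, not part of the original SPSS workflow.

```python
# Hedged sketch: Cronbach's alpha for a single scale.
# `items` is an (n_respondents, n_items) array of Likert responses (1-5).
import numpy as np


def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```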
B. Validity Analyses
Validity refers to the accuracy of the scales, consisting of content validity, criterion validity, and structural validity. Content validity and criterion validity involve the internal logic of a scale. As most of the scales used in this study were based on existing research and were designed after discussions by researchers and managers in the field, it was assumed that the scales had good content and criterion validity. In the following section, the scales were tested on their convergent validity and discriminant validity.
Convergent validity reflects the correlation of different question items within the same concept. Generally, when the standardised factor loading is greater than 0.6, composite reliability is greater than 0.7, and average variance extracted (AVE) is greater than 0.5, it is believed that the internal convergent validity of the model has reached the desired level [18][19][20]. Analyses and calculations on the initial testing of the measurement model were completed using AMOS 17, and results relevant to convergent validity are presented in Table 3. All 24 measured items had standardised factor loadings above 0.6, suggesting each measured item had a strong explanatory power for its corresponding latent variable.
The composite reliability scores of the six latent variables were also above 0.7 (the suggested cut-off value), indicating each group of measured items had high internal consistency. AVE reached the level of 0.5, thus representing the characteristics of the latent variable. In summary, the overall measurement model had a satisfactory convergent validity.
Evaluation of discriminant validity tests whether a measured item correlates only with its corresponding construct, rather than with other constructs. The square root of the AVE of each latent variable was compared with the correlation coefficients between that latent variable and the other latent variables. If the former is larger, the discriminant validity is better. The test results are shown in Table 2. Since the square roots of the AVE values are larger than the corresponding inter-construct correlation coefficients, the discriminant validity is good.
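As a small illustration (not the authors' AMOS workflow), composite reliability, AVE, and the discriminant validity comparison can be computed from standardized factor loadings and construct correlations roughly as follows; the input arrays and function names are hypothetical.

```python
# Hedged sketch: convergent and discriminant validity checks from
# standardized loadings. `loadings[c]` lists the standardized loadings of
# the items measuring construct c; `corr` is the construct correlation matrix.
import numpy as np


def composite_reliability(lams):
    lams = np.asarray(lams, dtype=float)
    errors = 1.0 - lams ** 2                    # item error variances
    return lams.sum() ** 2 / (lams.sum() ** 2 + errors.sum())


def average_variance_extracted(lams):
    lams = np.asarray(lams, dtype=float)
    return np.mean(lams ** 2)


def discriminant_validity_ok(loadings, corr):
    """True if every construct's sqrt(AVE) exceeds its correlations with all
    other constructs (the comparison described in the text)."""
    sqrt_ave = np.array([np.sqrt(average_variance_extracted(l))
                         for l in loadings])
    corr = np.asarray(corr, dtype=float)
    off_diag = corr - np.diag(np.diag(corr))    # zero out the diagonal
    return all(sqrt_ave[i] > off_diag[i].max() for i in range(len(sqrt_ave)))
```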
C. Structural Equation Modelling Analyses
Fit index analyses were conducted using multiple criteria, in which six indices from the absolute fit indices, normed fit indices, and parsimonious fit indices were adopted to evaluate the fit of the model. The criteria and actual results of the selected indices are presented in Table 3. All fit indices reached the required cut-off level, indicating a satisfactory fit of the model. In structural equation modelling, standardised path coefficients refer to the correlations between variables. When the T value of a path is greater than 1.96, the two variables on the path are considered to be correlated. The analyses of the summarised conceptual model are shown in Table 4. All eight hypotheses are confirmed, among which H4 was statistically significant at the level of p<0.05 and the remaining seven hypotheses were statistically significant at the level of p<0.001. Figure 4 demonstrates the path analysis results of the conceptual model, where the path coefficients indicate the correlations between the latent variables and R2 represents to what extent a dependent variable can be explained by its corresponding independent variables. Among the four dependent variables in the model, the R2 of both perceived usefulness and expectation-confirmation was greater than 0.5, and the R2 of both satisfaction and continued learning intention was greater than 0.8, thus suggesting a relatively strong power of this model for explaining e-learners' continued learning intention.
A. Research Conclusions
Using the theory of information systems continuance as the foundation, this study introduced the two key concepts of academic integration and social integration from dropout theory to investigate e-learners' continued learning intention. The following conclusions can be drawn from the results of empirical analyses.
(1) All original pathways from ECM-ISC model were supported by results of the empirical analyses, suggesting that ECM-ISC is applicable in explaining students' continued learning intention in e-learning. E-learning is based on information technology, and students' attitude towards learning technology will represent their attitude towards learning itself.
(2) As antecedent variables, academic integration and social integration can satisfactorily explain e-learners' perceived usefulness, expectation-confirmation, and satisfaction with e-learning. As the sole antecedent variable, academic integration can explain up to 54% of expectation-confirmation and 64% of perceived usefulness. The explained variance of satisfaction is up to 83%, and that of continued learning intention is 88%. The results suggest that the improved model, by introducing academic integration and social integration, has a relatively strong explanatory power for e-learners' continued learning intention.
B. Suggestions
(1) Students' satisfaction with e-learning greatly determines their persistence in completing online learning. Empirical data from this study support the idea that satisfaction affects e-learners' continued learning behaviour, which is consistent with some previous studies [21,22]. In order to increase e-learners' satisfaction with an online learning programme, it is necessary to understand their needs and provide them with timely learning support.
(2) It is of great significance to pay attention to students' progress in social and academic integration, and to offer support services in a timely manner to those who struggle in these two aspects. Considering that e-learners can feel isolated more easily, adding interactive learning and exchange modules is an effective means to reduce students' feeling of isolation. In addition, students with poor academic performance should be identified early and given the necessary guidance to improve their academic integration.
"Computer Science"
] |
Distributing epistemic and practical risks: a comparative study of communicating earthquake damages
This paper argues that the value of openness to epistemic plurality and the value of social responsiveness are essential for epistemic agents such as scientists who are expected to carry out non-epistemic missions. My chief philosophical claim is that the two values should play a joint role in their communication about earthquake-related damages when their knowledge claims are advisory. That said, I try to defend a minimal normative account of science in the context of communication. I show that these epistemic agents when acting as communicators may encounter various epistemic and practical uncertainties in making their knowledge claims. Using four vignettes, I show that the value of openness to epistemic plurality and the value of social responsiveness may best serve their epistemic and practical purposes across different contexts by reducing their epistemic and practical risks associated with the knowledge claims they communicated. The former may reduce the risks of prematurely excluding epistemic alternatives and is conducive to two types of epistemic plurality; the latter is supposed to reduce the risks of making self-defeating advisory claims and harmful wishful speaking by minimizing the values in tension that can be embedded in the social roles the epistemic agents play.
story if it happens in Tokyo, Beijing, or San Francisco as various, direct or indirect socioeconomic damages may be expected to occur in these populated areas (Radtke & Weller, 2021). In such cases, earthquakes become of general interest to scientists, policymakers, and society at large. This paper seeks to analyze and evaluate advisory knowledge claims about earthquake-related damages made by historical epistemic agents who were expected to carry out their respective non-epistemic missions, and relevant socioeconomic consequences resulting from their claims. I aim to provide epistemic agents of this kind, such as seismologists and civil engineers, as well as potentially wider audiences, such as public health experts, economists, and climate scientists, with some normative guidance in communicating their knowledge in the public space.
In order to defend my minimal normative account of science in the context of communication, I argue that the value of openness to epistemic plurality (OEP hereafter) and the value of social responsiveness (SR hereafter) should have been best suited for reducing their epistemic and practical risks when these epistemic agents "communicated" their knowledge for policy or action. In this paper, the term "risk" means uncertain harmful consequences. Risks can be reduced in the sense that relevant epistemically and practically harmful consequences would not result from inappropriate speech acts. Harms, however, could still result from other factors such as false understanding, ineffective action, and insufficient vigilance, resources, and time for preparation, which are not the objects of study in this paper.
In other words, my proposal is to tackle normative issues in epistemic agents' speech acts potentially bringing about epistemically and socially harmful consequences. Moreover, my more ambitious goal is to add substance to the ethics of science communication, where communication norms such as openness and honesty have been denied their fundamental importance (John, 2018). In this paper, I use the term "communication" in a rather narrow sense, meaning speech acts or ways of speaking. In short, OEP encourages epistemic agents to avoid the premature exclusion of epistemic alternatives, which is furthermore conducive to two types of epistemic plurality (discussed in Sect. 3); for SR, I advance Kitcher's account of the socially responsible scientist with an additional duty to prevent oneself from making self-defeating knowledge claims (discussed in Sect. 4).
My proposal to defend the minimal normative account of science in the context of communication comes up in the light of practical difficulties based on various epistemic and non-epistemic values in tension. Below I discuss three types of value-based uncertainty when epistemic agents have to (1) manage the information involved, (2) estimate losses, and (3) choose a timeframe of interest to focus on. They are three main channels where the knowledge claims involved can result in conflicting advice. This typology aims to conceptually characterize the claims made by the historical epistemic agents in Sect. 2, and it prepares the ground for my analyses in Sects. 3 and 4. The value tensions in this typology are not exhaustive but enumerative. The listed tensions are used to show that values are not generally conducive to reducing epistemic and practical uncertainties because all of them could be valid evaluative benchmarks by themselves in specific contexts. These values in tension could, therefore, sustain epistemic plurality in scientific research as well as multiple socioeconomic and political demands on the use of the earthquake sciences. Moreover, this highlighted typology may cast doubt, again, on the conventional wisdom in the philosophy of science literature that values may help solve the problem of underdetermination and the problem of inductive risk (Brown, 2013). However, although my proposal may not directly resolve the value tensions, the two emphasized values about science communication should demonstrate their potential to reduce communication-induced, epistemically and socially harmful consequences.
Managing information
The first value-based uncertainty appears as the epistemic agent needs to manage relevant information in forming advice. The two examples below show that values in tension may result in conflicting advice in managing information in order to achieve practical goals (Pielke, 2007;Sarewitz, 2004;Rayner, 2012).
For instance, the U.S. government sought to secure a permanent nuclear waste repository. Policymakers started with their policy to be consistent with the recommendation based on nuclear experts' models reassuring the stability of the geological conditions of the Yucca Mountain nuclear waste repository (Shrader-Frechette, 2014). The reevaluations of the chosen location since the 1980s had been performed to justify the planned policy. The tension between the values of consistency and accuracy appeared between different expert groups as geological accuracy was pursued by drilling. These reevaluations by geologists turned out to reveal the nuclear experts' ignorance of the actual geological conditions, which were inconsistent with the models. Information about the seismic risk of the chosen location unsettled the originally planned policy, which had assumed its practical manageability and safety. The government's goal of securing a permanent nuclear waste repository was then practically unachieved (ISSO, 2012). In this case, the conflicting pieces of information produced by the experts involved furthermore increased the government's practical uncertainty.
The other example shows the tension between the values of accuracy and simplicity arising in mapping seismicity to minimize earthquake damages. The global mapping of seismicity produced by the international seismological community in the early 20th century largely helped narrow down the scientific focus on specific seismic areas on the Earth (Agnew, 2002;Bolt, 2003;Westermann, 2011). This achievement demanded accurate observations and records through seismological instrumentation and archival research, which was conducive to the development of mechanistic explanations of earthquakes and the improvement of earthquake-resistant designs. Nevertheless, endlessly accurate descriptions of past earthquakes did not exactly solve the problem of reducing earthquake damages. A proper solution was expected to have a sort of predictive power with simple patterns. To attempt a forecast (while some aimed at a prediction) of a catastrophic earthquake at a specific point in time, researchers had to actively reduce accuracy in establishing some simple relationships by focusing on chosen parameters such as that between main shocks and aftershocks (Omori's law) and that between magnitude and occurrence (Gutenberg-Richter law). As the different demands of timeframes can be a limiting factor for these mentioned pursuits (Shaw, 2022), without decreasing accuracy, these modelers could not achieve any "predictive" goal (Cartwright, 1983;Hitchcock & Sober, 2004;Forsyth, 2011;Bokulich, 2013). However, "predicting" an earthquake, i.e., anticipating its magnitude on a specific spatiotemporal range in quantitative terms, still belongs to future science (Agnew, 2002;Oreskes, 2015;Hough, 2016;Stark & Freedman, 2016;Mulargia et al., 2017). In short, in order to approach reasonable advice on socioeconomic impacts of earthquakes with different demands of time, seismological research requires such values in tension as described.
Estimating losses
The second value-based uncertainty appears as the epistemic agent makes an advisory claim in view of various socioeconomic losses, which should be understood as a more general consideration of inductive risk (Douglas, 2000). However, inductive methods such as statistical and probabilistic inferences in the classical toxicological case study provide few indications regarding a preferable default hypothesis to be tested, because the consequences of incorrectly accepting a false positive or a false negative in seismological research seem to be equally undesirable. Moreover, it appears to be difficult to agree on a "threshold" for acceptance in non-experimental settings, as the dominant "risk assessment" of earthquakes in probabilistic terms has been criticized as epistemically inappropriate and socially harmful (Mulargia et al., 2017). The reason for such criticism is that quantitative expressions of uncertain harmful consequences often give misleading messages of epistemic certainty and practical manageability (discussed in Sect. 4).
In making a "predictive claim" about earthquakes, the epistemic agent, ideally, has to minimize socioeconomic losses by maximizing true positives and true negatives in her claims which remains practically unavailable (ISSO, 2012;Mulargia et al., 2017;Stark & Freedman, 2016). In doing so, she has to be cautious about harmful consequences from overestimating due to mistakenly accepted false positives, such as fear, long-term economic depression, or overinvestment in unaffordable earthquake-resistant designs. She also has to be cautious about harmful consequences from underestimating due to mistakenly accepted false negatives such as immediate casualties and damages resulting from insufficient preparation and lack of vigilance. To arrive at appropriate advice, weighing between these two types of errors in the light of various harmful consequences may thus reflect values in tension (de Melo-Martín & Intemann 2016).
Choosing timeframes
The last value-based uncertainty appears as the epistemic agent must choose a timeframe for the catastrophic scenario under discussion. Such a choice may involve various values in tension. In addition, epistemic imminence complicates the choice, because there is no reliable reason to rule out that a catastrophic earthquake will occur in the immediate future (Hough, 2016; Oreskes, 2015). For instance, it may be possible to assume that a catastrophic earthquake will happen in a few years, a few decades, or even a few centuries. Scientific specifications for choosing such a timeframe require statistical approaches to historical earthquakes and seismic mechanisms. However, many regions hit by a catastrophic earthquake had not even been mapped as high-risk before the actual occurrence of the earthquake (Mulargia et al., 2017).
Think, for example, about the investment in the seawall prior to the Fukushima Daiichi nuclear disaster. The designers of the nuclear power plant had to decide on the height of the seawall, which was key to protecting the reactors from tsunamis. Although they had recognized the possibility of an earthquake of this magnitude, a higher seawall would demand higher construction costs, which could appear unnecessary given that the likelihood of a tsunami was very low. A catastrophic earthquake may have been perceived as occurring in a very distant future, so its risk was mistakenly ignored. This underinvestment due to the chosen timeframe for a catastrophic scenario thus led to a seawall lower than necessary to protect the reactors.
Conversely, similar considerations about choosing such a timeframe may influence the costs of earthquake-resistant designs. To increase the safety of a city, one might find it desirable to tighten building regulations, so that the higher costs of earthquake-resistant designs are accepted for the sake of improving long-term urban planning and resilience. Such a step, however, may increase house prices in the short term, which many people cannot afford. The latter might thus be unjustly excluded from the consideration of safer urban planning. It can be difficult to give general advice on improving urban earthquake resistance in view of these values in tension.
So far, Sect. 1 has characterized three types of value-based uncertainty that arise in the processes of making advisory claims: managing information, estimating losses, and choosing timeframes. The challenge is thus how individual epistemic agents, such as scientists who are expected to carry out non-epistemic missions, communicate their knowledge in ways that reduce their epistemic and practical risks in view of the values involved, which can result in conflicting advice. Note that such a challenge does not amount to finding ways to eliminate conflicting advice or values in tension or conflict.
The rest of this paper is organized as follows: Sect. 2 presents four vignettes to contextualize this typology of value-based uncertainty. The analyses of their associated epistemic and practical risks are presented in Sects. 3 and 4, which constitute my minimal normative account of science in the context of communication. Accordingly, Sect. 3 centers on OEP, and Sect. 4 on SR. The article concludes with Sect. 5.
Four vignettes of advising in response to earthquake damages
Section 2 contextualizes the typology characterized in Sect. 1, using the advisory claims made by epistemic agents in response to earthquake damages. The four vignettes include the Church authorities in the 1755 Lisbon earthquake, the Yatoi architects in the 1891 Nobi earthquake, the commissioned scientists as well as "mass scientists" in the 1975 Haicheng and 1976 Tangshan earthquakes, and the commissioned scientists in the 2009 L'Aquila earthquake. These four vignettes provide a historically and geographically wide range of practical contexts, where epistemic agents made their advisory claims given sufficiently different non-epistemic missions.
The shared feature of these epistemic agents across contexts is that they were expected to carry out non-epistemic missions. This led them to make specific advisory claims about earthquake damages while committing to particular values. Their knowledge claims, however, resulted in social harms in relevant contexts. Thus, their credibility as epistemic agents was challenged (discussed in Sect. 4).
I suggest that these epistemic agents should have committed themselves to the minimal set of communication norms, that is, OEP and SR, in public, to reduce their communication-induced risks to science and society.
The wrath of God, the ruin of churches, and the 1755 Lisbon earthquake
The first vignette is about the church authorities' false reassurances regarding earthquake damages. During the 18th century, natural philosophy and natural history, and, more importantly, the church authorities still dominated the epistemic authority about nature in society (Rudwick, 2014). The common explanation of earthquakes was that they resulted from the wrath of God. In contrast, Kant's reflection after the 1755 Lisbon earthquake and the development of his naturalistic account marked an interesting move in understanding such events (Reinhardt & Oldroyd, 1982, 1983). Along with Buffon's idea, Kant thought that these earthquakes could be explained in purely mechanistic and chemical terms, such as inflammable materials exploding in underground cavities (which is wrong by the lights of today's seismology). Here, Kant took up the traditional task of naturalists collecting bits and pieces of nature with accuracy and explained the existence of earthquakes in a simple account without appealing to God. As for his practical purposes, based on his scientific account, Kant advised his readers to avoid earthquake damages by not living around underground cavities.
By contrast, the situation after the Lisbon earthquake became painful for intellectuals who favored deistic philosophy, such as Voltaire, because obviously "there is evil in the world" (Reinhardt & Oldroyd, 1983). They found that a cruel nature could not justify the benevolence of God. As this ground-shaking event occurred, unfortunately, on All Saints' Day in 1755, numerous pious priests, nuns, and believers were buried in the ruins of churches, while many of the impious or disbelievers survived (Reinhardt & Oldroyd, 1983; Musson, 2012). The result was not the one the church authorities had propagated, namely that only disbelievers were to be punished by God. The Lisbon earthquake revealed that pious believers had apparently been punished by God for no reason. The epistemic authority of the churches was thus heavily challenged when people saw the cruelty of a nature created by an omnibenevolent God. Voltaire, for instance, relentlessly satirized and frequently referred to this catastrophic event as revealing the depravity of the churches.
Yatoi architects and the Meiji architectural programs before the 1891 Nobi earthquake
The second vignette is about the misleading advice of yatoi architects on improving the urban earthquake resistance. Employed by the newly organized Meiji government in Japan, the first generation of European intellectuals, including scientists and engineers, were called yatoi (雇), meaning "the employed." Their missions were straightforwardly connected to the Japanese national policy of Westernization. Here, we focus on two of them from Britain who were responsible for various government-sponsored civil engineering and architectural projects, and later transformed the development of modern seismology.
The yatoi were hired by the government and worked at the newly founded College of Technology (工部大学校, today's Faculty of Engineering at the University of Tokyo) from the 1880s. One of their most significant missions was to modernize Japanese buildings so as to build up a stronger Japanese empire (Clancey, 2006). Japanese buildings of this period were characterized by the practices of a traditional craftsmanship called daiku (大工), whose work could be identified neither with engineering nor with architecture in the eyes of the British yatoi. For example, drawing on Euclidean geometry, architects of the time often criticized the traditional heavy roof as an "irrational and redundant" design for a building in an earthquake nation, one that should be removed as soon as possible. The yatoi could offer an alternative based on a simpler model. Most of the yatoi were stunned by the fact that the main body of Japanese construction was made of wood, which they characterized as "frail" and "temporary." They wondered how buildings made of such materials could resist imminent catastrophic earthquakes. The call for an urgent transformation of Japanese construction followed.
By contrast, mid-19th-century British architecture was characterized by geometrical and physical principles and by "robust" and "permanent" materials: architectural design, brick, and stone. In line with this tradition, the founder of Japanese modern architecture and professor Josiah Conder (1852-1920) argued that it was essential to transform wooden Japanese construction on the British model. He believed that this was the essential move to modernize Japan and manage catastrophic earthquakes as well as urban fires. This epistemic commitment led Conder to ignore documents that went against his theory, such as those by the designer Christopher Dresser (1834-1904) on thousand-year-old Buddhist pagodas. The oldest known one, Horyuji (法隆寺) in Nara, was built in the 7th century. These ancient buildings had been made entirely of "frail and temporary" materials and maintained by the "irrational and redundant" practices of daiku.
As an intellectual opponent of Conder and a co-founder of the Seismological Society of Japan (日本地震学会) in 1880 (Clancey, 2006; Davison, 1937), the British professor of geology John Milne (1850-1913) cast serious doubt on the applicability of Conder's project of bringing British architecture to Japan. This was due to his own experience of recording the moderate Tokyo-Yokohama earthquake in 1880 in light of his European seismology. That seismology had been developed by the Irish civil engineer Robert Mallet in recording earthquakes in Europe and South America. His method was to study the patterns of cracks imprinted on shaken building structures made of stone and bricks. From these observed patterns Mallet could retrodict the epicenters and depths of earthquakes on mechanical grounds (Mallet, 1846; Gillin, 2020).
The difficulty that faced Milne's research after the Tokyo-Yokohama earthquake was that Mallet's method did not apply to wooden buildings at all. He had a difficult time recording patterns of cracks or serious damage on Japanese wooden structures. Flexibility, rather than the mentioned frailness and temporality, characterized the Japanese constructions hit by this earthquake. Eventually, he found a proper "seismograph," which was composed of a cluster of European buildings around Yokohama. However, it remained a problem for him to locate the epicenter by tracing the effects of the earthquake on wooden structures in areas without clustered European buildings.
On the same occasion, the epistemic tension between the architect and the geologist loomed large in an interesting way in their scientific interpretations of the hundreds of ruined European-style buildings of the modern Ginza, which had been commissioned by the Meiji government in the 1870s. Until 1880, the modern Ginza was the model Japanese city completely composed of European-style buildings. It demonstrated a template for a nationwide project of modernizing Japanese buildings to follow. The Tokyo-Yokohama earthquake in 1880 did not change Conder's belief in modernizing Japan on the British model. He could argue that the ruins of the model city were not so much a failure of British architectural design as the result of the inability of the Japanese construction workers, who could not follow the architectural plans and aptly use stone and bricks.
By contrast, Milne saw these systematically built columns of European buildings as an array of seismometers, because the ruins did demonstrate some specific patterns, such as the direction of the collapsed chimneys, which later became a research focus of his famous student Omori Fusakichi (大森 房吉). In addition, this experience led Milne to gather engineers and physicists at the College of Technology who were also interested in seismicity and in advancing its instrumentation. This seismological community formed a new approach that was remarkably distinct from Mallet's observational seismology. With the seismographs, Milne and his colleagues and students opened up instrument-based seismology, going beyond the previous limitation of recording simple patterns only on building structures. They rearranged seismology under the brand of geophysics, to be studied in terms of physics rather than as a set of engineering problems. In his first seismology textbook, published in 1886, Milne praised the flexibility of Japanese wooden construction and criticized the rigidity of Victorian architecture using masonry and bricks. He supposed that the problem of the upending of European-style buildings was due to wrong earthquake-resistant design, because his architect colleagues did not understand the mechanics of earthquakes.
This intellectual conflict between Conder and Milne and their peers lasted until 1891, when the catastrophic Nobi earthquake occurred unexpectedly. The original Meiji architectural programs built around the model Ginza were terminated, and the ruins of the European-style buildings again drew attention to the disadvantages of over-Westernized Japanese buildings. The government, intellectuals, and society started a reevaluation of traditional Japanese wooden construction. For building up a seismically resistant Japanese Empire in the long term, this apparently robust and permanent British architecture had not even worked in the short term.
Chinese mass seismologists and earthquake prediction in the 1970s
The third vignette is about commissioned scientists' mistaken beliefs and strategies in earthquake prediction. The Cold War politics shaped seismology and disaster policy in Maoist China in dramatic fashion, in particular, the science of predicting earthquakes by learning from the people for the people. This form of cooperation between science and policy was characterized as being in contrast to Weberian, technocratic, value-free science.
The Communist party's enthusiasm for predicting earthquakes appeared during the Cultural Revolution (1966-1976), an unusual period with more than ten large earthquakes killing a quarter of a million people in China Proper (Fan, 2012). As a result, the 1975 Haicheng earthquake and the 1976 Tangshan earthquake, occurring within a very short time of each other, tested this enthusiasm for earthquake prediction: the first appeared, by luck, to vindicate it, and the second dashed it.
Traditionally, a large earthquake in China Proper would be rendered as a sign of political turmoil or an end to an emperor or dynasty (Agnew, 2002;Fan, 2012). This epistemic-political system assumes that Tian (the Heavens or nature) brings about a catastrophe because the emperor's virtues are not strong enough to justify the continuation of his reign. The Communist party could feel this pressure when a large earthquake occurred. The solution the leaders came up with was to actively educate people that this belief was merely superstitious, and that, more importantly, earthquakes "could be predicted" in the spirit of the Engels-Maoist philosophy.
Mao believed that the goal of science is to "conquer nature, to free oneself from nature" (人定勝天), which parallels Engels' "labor creates humanity." This reflected optimism about the human capacity to intervene in the course of nature. Accordingly, Mao emphasized the utilitarian value of science while dismissing the value of theoretical science such as Einstein's theory of relativity. As long as earthquakes could be predicted, their socioeconomic impacts could be prevented. Seismological research was in this sense of utilitarian value and was supported by the Communist government. The Chinese geologist Li Siguang (李四光, 1889-1971) and the geophysicist Weng Wenbo (翁文波, 1912-1994), who endorsed this political commitment, helped initiate and implement the projects of predicting earthquakes at a time when the Chinese scientific community had become relatively isolated from the West since the beginning of the Cold War and from the Soviet Union since the late 1950s.
During this period of isolation, Chinese seismology adopted an approach distinct from that of other countries (Fan, 2012). First, the Chinese scientists mapped the earthquakes recorded in historical documents spanning thousands of years, which their foreign counterparts lacked. This helped in locating the places with a higher likelihood of earthquake occurrence and in linking various precursory phenomena with large earthquakes. Second, instead of focusing on monitoring seismic activity with sensitive instruments, the Chinese scientists first focused on macroscopic phenomena that laypeople could observe, say, abnormal animal behavior. The scientists believed that these phenomena could be used to accurately predict earthquakes on a short-term basis, which the international scientific community deemed unlikely. Third, since the vast country lacked sufficient seismological stations and trained staff, laypeople were advised to learn and do science by themselves to contribute to this nationwide defense policy. In other words, there was a demand from above to blur the distinction between scientific experts and laypeople for the epistemic and political goal of predicting earthquakes.
Government officials and teachers at all levels were enthusiastic about teaching people and students the science of predicting earthquakes by paying attention to "precursory phenomena": changes in biological behavior, water and sea levels, tides, ground deformation, climate, and weather, as well as earthquake sounds and light. This required assistance from specialists in a wide range of scientific disciplines besides geophysics (Fan, 2012, 2017, 2018). According to this view, earthquakes were seen as a collection of experiences of complex changes in the environment rather than as single geophysical events. These efforts led to a successful evacuation in 1975. To this day, this is the only recognized case of a successful earthquake evacuation policy based on "earthquake prediction," and it had only a shaky scientific basis (Fan, 2012, 2017, 2018; Musson, 2012; Jordan et al., 2011; Hough, 2016). A couple of weeks before the Haicheng earthquake, there were reported cases of snakes behaving strangely, crawling out of their hibernation holes in the freezing-cold winter. A large portion of the population was evacuated a couple of hours before the main shock. People who had not been willing to follow the Party's lead were blamed for not believing in the Party and were thus considered to have deserved their losses. The following summer, however, the Tangshan earthquake came without sufficient clues and time for an evacuation measure and caused hundreds of thousands of victims. The hope of predicting earthquakes for disaster policymaking was thus relentlessly dashed shortly after an ephemeral glory of "conquering nature."
Regretful scientist-officials after the 2009 L'Aquila earthquake
The last vignette is about commissioned scientists' false reassurances of no danger. The lawsuits against the scientists after the L'Aquila earthquake constitute a nice example to rethink how scientists should communicate their knowledge under great uncertainty. Relevant debates on the relationships between scientific expertise and policymaking, and between the roles of scientists and policymakers remain until today (Mitchell, 2004;Pievani, 2012;Peters, 2021;Feldbacher-Escamilla, 2019).
In the trial in 2012, nearly three years after the L'Aquila earthquake, six scientist-officials in the governmental commission were sentenced to jail for failing to provide adequate information about the situation of the earthquake, which led to 309 deaths among other losses (Oreskes, 2015). The accusation was not the failure to predict the earthquake itself, but that their advice misled the public (Feldbacher-Escamilla, 2019).
Regarding a series of swarms that occurred in early 2009 before the mainshock, these scientist-officials claimed before the public that there was no evidence that such swarms can predict a large earthquake. They correctly indicated that any prediction of this kind was scientifically unwarranted. In the meantime, by contrast, a predictive claim had been made by a retired technician who had worked at the Gran Sasso National Laboratory. This technician in astrophysics, Giampaolo Giuliani, had long observed changes in the underground radon concentration; drawing on his study, which had not been peer reviewed, together with the mentioned swarms, he alarmed the public about a coming earthquake. Before long, Giuliani was officially forbidden from making claims of this kind in public.
Among the scientist-officials, the professor of volcanology Franco Barberi and the professor of hydraulics and then vice-director of the Department of Civil Protection Bernardo De Bernardinis openly criticized Giuliani's predictive claim on television, and further reassured the public that the series of swarms would pose "no danger." De Bernardinis even claimed that these swarms helped lessen the seismic potential by dissipating the accumulated seismic energy in this region.
Unfortunately, their reassurances turned out to be wrong. The mainshock came with serious damages, and their advisory and scientific claims had thus been false. The earthquake further damaged the credibility of the scientist-officials, who had not acknowledged the potential falsity of their hypothesis and had misled the public into believing that there would be no danger. To some degree, these scientist-officials even undermined their own claim that any prediction of this kind was scientifically unwarranted. Their claim of "no danger" should itself have been seen as a predictive claim about these earthquake damages, as it suggested a probable development of the status quo (discussed in Sect. 4).
At least, in this case, the situation could have been better if the alarm raised by Giuliani had not been officially banned from public fora (discussed in Sect. 3). Without the false reassurances made by the scientist-officials, some deaths could have been avoided even if the victims had merely followed the folk seismology and earthquake mitigation measures they inherited from their ancestors (Oreskes, 2015). Table 1 provides a summary of the four vignettes and makes it easier for the reader to compare the claims made by the above epistemic agents and their consequences. In the subsequent two sections, I use a quasi-inductive, counterfactual argument [2] for each emphasized value across contexts: had OEP and SR not been absent in the processes of making advisory claims in the four vignettes, the epistemic and practical risks resulting from communicating knowledge for action in the relevant contexts could have been reduced.

[2] The reader may, however, object that the justification for my proposal, which relies on various counterfactual statements, is merely speculative and defeasible, because advice based on the results of a counterfactual analysis may not always hold in complicated situations. Many factors other than such values could have made a difference in such an analysis. Values are, after all, not necessitating in the way causes are. I agree and think this doubt is constructive for approaching an acceptable and desirable ethics of science communication. My proposal may work as a starting point and should be examined extensively. In this regard, I suppose my proposal should be used to critically reflect on itself. In an attempt to show the inappropriateness of my proposal, my opponents might show that the commitment to the two suggested values does create more epistemic and practical risks resulting from speech acts, at least in some cases. One might start with a statement like "if OEP and SR were not present in such and such cases, the situations in these cases could have been better." Using a similar argument, John (2018), for instance, indicates the inappropriateness of values such as openness and honesty in some cases. If such cases were identified, my proposal would be shown to conflict with my set goal, and so to fail to exhibit my own social responsiveness (see also Sect. 4). Pending such empirical scrutiny, I am not only open to alternative normative suggestions or alternative ethics of science communication (see also Sect. 3) that can better promote my goal, but also curious about alternative justifications that use no counterfactual statements. Note that such attempts are mostly based on the speculative assumption that some proper ethics of science communication is worth pursuing.
[Table 1, recoverable fragment. Haicheng/Tangshan row: the international scientific community's claim was that earthquake prediction is highly unreliable; the prediction of the Haicheng earthquake nevertheless succeeded, although the quality of data collection from laypeople was unreliable in contrast to the instrument-based seismology of the international scientific community; an evacuation policy based on "earthquake prediction" was successfully implemented once but not a second time, and led to a decrease in social vigilance regarding earthquakes without sufficient precursory phenomena; the credibility of the government and of the scientists' ideal of earthquake prediction was challenged, and the costs of many false alarms were not calculated. L'Aquila row: the scientist-officials' claim was that the swarms could dissipate seismic energy and posed no danger; the retired technician's claim was that the swarms might indicate a coming earthquake, and folk seismology advised people to stay outdoors when frequent trembles appear; the scientist-officials' claim was falsified by a real earthquake, while the unrefereed claim and the folk seismology, though scientifically unreliable, could have been harm-reducing in indicating the possibility of a coming earthquake; the credibility of the scientist-officials was challenged, and their reassurances misled the citizens' risk-averse behavior and were an indirect reason for their deaths.]

The value of openness to epistemic plurality

In this paper, OEP is characterized as a value of communicating knowledge claims. It encourages epistemic agents to make a normative commitment by avoiding the premature exclusion of epistemic alternatives. It is, moreover, conducive to two types of epistemic plurality, which are of epistemic value and positively contribute to improving advisory claims. Why did some historical epistemic alternatives eventually appear to be more successful? Normally because they survived empirical scrutiny while their competitors did not. To unfairly stifle such alternatives "prematurely" is to pass a false verdict on them before such empirical scrutiny. Comparing the above four vignettes, I show that OEP should be, first, a communication norm for avoiding such a verdict, so that some epistemic and practical possibilities are not prematurely killed off. It is then conducive to enlarging a comprehensive knowledge basis for dealing with earthquake damages. Without this commitment, individual epistemic agents run the risk of distancing themselves from reaching these goals.
As can be seen, individual epistemic agents could make their advisory claims by committing themselves to particular values, which could be, to various degrees, in tension with other value commitments in the relevant contexts described in Sect. 1. The false reassurance of the church authorities was consistent with their belief that such events were God's punishment of disbelievers. The yatoi architects gave their advice on transforming Japanese buildings without critically examining their mistaken assumption that British architecture should be directly adopted in Japan, although their advice was consistent with their professional training in Britain. And the Italian commissioned scientists' simplistic reassurances of no danger misled the public, although scientists had long known that the real situation could be more complicated. The epistemic agents in these three cases exhibited epistemic resistance to considering more information. By contrast, the Chinese commissioned scientists appeared to switch on all channels of information. They advised "mass scientists" to look for a very wide range of phenomena with accuracy in order to increase the relevant information for predicting earthquakes.
The advisory claims, however, turned out to be epistemically and practically harmful because the epistemic agents were not open to epistemic plurality when communicating their advisory claims about earthquake-related damages to their audiences. Eventually, the exclusive advisory claims made by the church authorities, yatoi architects, and Italian commissioned scientists resulted in understating the earthquake damages their audiences could experience by implicitly excluding short-term damages. In contrast, the strategies of identifying precursory phenomena proposed by the Chinese commissioned scientists led to many false warnings, while the missed catastrophic one was seriously underestimated (discussed below). They gave a false impression that such damages could be managed on all timeframes of interest simply by looking at "mass observation" of all kinds of precursory phenomena. Table 1 provides a comparison. The epistemic agents making these advisory claims were generally expected to carry out various non-epistemic missions, responding in particular to potential earthquake-related damages. However, they failed this misplaced expectation, and their advisory claims did not survive empirical scrutiny.
Their non-epistemic missions could have been better achieved if, first of all, they had not made their advisory claims to their audiences while prematurely excluding alternative knowledge claims. Worse, dismissing the epistemic alternatives created blind spots and mistakenly led their audiences not to consider other options for action that could have had mitigating effects on the socioeconomic impacts of the earthquakes. In hindsight, their situations could have turned out better if OEP had been appreciated by those making advisory claims. OEP, as a shared value commitment, sheds light on the blind spots that individual epistemic agents can create, so that a more comprehensive knowledge basis can be established for producing more options for action.
Here one might start to worry that the shortcomings of "epistemic plurality," such as inappropriate dissent, would thereby be let in, as the endless climate denialism in American society illustrates (Oreskes, 2004; Oreskes & Conway, 2010; de Melo-Martín & Intemann, 2014; Leuschner, 2018; Kourany & Carrier, 2020). Stressing epistemic plurality too much in deliberating advice may delay necessary urgent action or sustain inaction by including conflicting knowledge.
However, "epistemic plurality" is not the value I seek to argue for in this paper. I do not suggest that individual epistemic agents should commit themselves to closely examining their opponents' claims before coming up with their advice. Such examination takes time, energy, and resources which can be well in tension with the demand for urgent advice (Shaw, 2022). Instead, I argue that the value of "openness to epistemic plurality" as a communication norm is essential for these epistemic agents to "avoid the premature exclusion of epistemic alternatives." In our four vignettes, this "avoidance of exclusion" itself would have already had a mitigating effect on the harmful consequences resulting from their mistaken advisory claims. For instance, if the warning from Giuliani had not been banned by the civil protection committee from the public fora, it could have kept members of society vigilant so they could have somehow avoided damages by autonomously considering other possible scenarios and taking their own protective actions. But that does not suggest the committee should have included whatever Giuliani said in their official announcement, including his empirically unscrutinized knowledge claims. They should just have avoided making false reassurances and creating blind spots that mistakenly narrowed down the public attention. This distinction between "exclusion" and "inclusion" is important because my emphasis on "avoidance of premature exclusion" aims at reducing the potential harms resulting from their advisory claims, instead of aiming at increasing inconsistency in their advice. Thus far, my characterization of OEP may be more relevant to time-sensitive actions such as communicating advisory claims of imminent earthquake damages.
In the four vignettes, the unforced alternative plurality of knowledge claims present at a given point should not have been actively excluded, as the epistemic alternatives could have constituted a harm-reducing factor against the mistaken advice. For instance, Table 1 shows that the insufficient vigilance regarding the Tangshan earthquake could have resulted from the commitment to observing precursory phenomena, whereas the international seismological community did not assume that all great earthquakes would come with such phenomena. Another example is the yatoi architects' dismissal of the earthquake resilience of the Japanese daiku's works, which was exactly the reason why cracks were not recorded on the traditional buildings. My suggested commitment to OEP seems to be a relatively acceptable solution when the time for critically examining opponents' knowledge claims is limited and urgent advice is expected. However, the time for examining opponents' knowledge claims is not always limited or insufficient. Alternative plurality on the community level, as a result of committing to OEP, can be empirically scrutinized on a long-term basis. Therefore, OEP can be compatible with and conducive to increasing epistemic plurality in science for establishing its objectivity (Longino, 1990; Lacey, 2005) by incorporating previously missing empirical scrutiny, which usually requires much time. The commitment to OEP does not exclude the possibility that epistemic agents can at some point reasonably exclude some alternatives, openly criticize inappropriate dissent, and avoid malicious epistemic plurality like climate denialism or vaccine hesitancy. If the epistemic community can constantly compare various knowledge claims, it is better positioned to understand its matters of concern. The reason is that the reliability of knowledge claims is buttressed when the community sustains its ability to retain reliable knowledge claims and sift out unreliable ones through experience and interaction with nature. As the relevant knowledge basis increases its reliability, epistemic agents may know which advisory claims are safe to make and which are not. Thus, epistemic agents who commit themselves to OEP in communicating their knowledge can still benefit from alternative plurality and improve their advisory claims.
The other kind of epistemic plurality becomes clearer when epistemic agents appreciate an appropriate division of cognitive labor throughout the development of knowledge (Bokulich, 2013). It can be seen that the alternative knowledge claims in Table 1 could outperform the dominant views in their contexts. Even the alternative in the fourth vignette might seem controversial or potentially false, but the radon hypothesis has yet to be rejected conclusively. This gives epistemic agents an additional reason to consider committing to OEP not only when communicating their knowledge in public but also within their epistemic community. If these epistemic alternatives were unfairly dismissed and failed to be communicated within the epistemic community in the long run, the community could run the risk of failing to develop a correct understanding of nature. For instance, one might still hold a misplaced confidence in the earthquake resistance of 19th-century Victorian architecture or in the predictability of earthquakes. Such dismissal or failed communication of epistemic alternatives is undesirable for improving advisory claims. Distributing the risk of misunderstanding nature by dividing a family of research questions among various epistemic alternatives has been a good strategy for approaching successful science on the community level (Wray, 2011; Bokulich, 2013). Additionally, the commitment to OEP encourages epistemic agents to avoid the premature exclusion of their opponents' claims.
As a result, additive plurality is the plurality of these historical epistemic alternatives that add up to our successful understanding of nature. They can result from various disciplines, research programs, traditions, paradigms, communities, and so on. The wide range of disciplines bearing on earthquake damages, such as the listed natural history, architecture, civil engineering, urban planning, geology, and seismology, constitutes a representative case of the making of our contemporary understanding of seismic risk assessment. As the additive plurality of those successful knowledge claims adds up to a comprehensive knowledge basis, the failures of the knowledge claims demarcate where the limits of science may lie. This complementary feature should be generally desirable for demarcating reasonable scientific advisory claims from unreasonable ones.
In sum, to avoid prematurely giving a false verdict of advisory claims and mistakenly rejecting knowledge claims or options for action, I suggest that epistemic agents should commit themselves to the value of openness to epistemic plurality which avoids epistemic and practical risks resulting from communicating their knowledge. This value should be conducive to and compatible with two types of epistemic plurality on the community level: alternative plurality and additive plurality. The former reduces the risk of being wrong about nature on a short-term basis as epistemic divides appear. The latter enriches the knowledge basis for being right about nature in the long run as various disciplines develop and interact with one another to increase their reliability and credibility. Both of them are supposed to positively contribute to deliberating reasonable scientific advice.
The value of social responsiveness
Social responsiveness may mean different things to different people, as one might think of the difference between what a socially responsible scientist and a socially responsible policymaker would do. The difference depends largely on what society expects of these social roles, which can change considerably across contexts. However, this does not mean that it is impossible to characterize the respective social responsiveness more generally. For instance, given that political systems and social value commitments can vary strongly across cultures, policymakers should still actively consider a sufficiently wide range of social values in their contexts to deliberate on policy consequences informed by science. This could prevent them from dodging their responsibilities by inappropriately relying on scientific imports (Yu, 2022). In this paper, my discussion of SR aims at adding substance to Philip Kitcher's characterization of what a socially responsible scientist should do (Kitcher, 2001; Kourany, 2010). Socially responsible scientists should contribute their knowledge to good policies or should generally do science in order to enhance the social good, human flourishing, and so on. Socially beneficial consequences brought about by scientists exhibit their SR. I agree. In this section, I look into various processes of "making advisory claims" as a special form of action taken by epistemic agents who were expected to carry out non-epistemic missions. I show that the harmful consequences resulting from their advisory claims did not exhibit their SR, and argue that epistemic agents should be socially responsive by committing themselves to avoiding self-defeating claims.
Why do epistemic agents occasionally make self-defeating claims? For example, why do some claim to know something that they actually do not know? Why do some claim that there is nothing dangerous to come while in fact danger exists? They do not necessarily intend to bring about the harmful consequences associated with such self-defeating claims; they might just be telling white lies. Instead, I suggest that this happens because the social roles individual epistemic agents play require them to make claims in certain ways, which might lead to wishful speaking (John, 2019). Moreover, it is not necessarily a problem in itself if individual epistemic agents play multiple social roles, for instance, those of scientist and official. Problems arise only when the multiple ways of communicating involved lead to self-defeating advisory claims that are counterproductive to their respective missions. On Kitcher's account, such advisory claims exhibit social irresponsibility. The agents failed their own goals because they made their advisory claims by dismissing other knowledge claims that could otherwise have advanced those goals.
I suggest that, aside from committing themselves to OEP, epistemic agents have an additional duty to keep transparent the tensions between their knowledge claims and the advisory claims associated with their non-epistemic missions. This is because SR demands that epistemic agents avoid making self-defeating claims. Such tensions might be irresolvable, so that epistemic agents sometimes have to distance themselves from certain non-epistemic missions, or simply keep the epistemic and non-epistemic social roles they play separate.
For instance, the church authorities failed to offer a consistent epistemic account of why pious believers were punished by God's wrath with earthquakes for no reason; the yatoi architects failed to explain why British architecture seemed vulnerable to earthquakes rather than robust, whereas they claimed to know how to avoid earthquake damages. They would have to modify their architectural theory based on a better understanding of the mechanics of earthquakes with higher accuracy, developed by instrument-based seismologists. In both cases, the advisory claims made by the church authorities and yatoi architects were self-defeating and resulted in underestimated harms on a short-term basis.
The latter two vignettes show that some epistemic agents could find it hard to live up to SR because they could not avoid making self-defeating claims. When the Chinese government and commissioned scientists actively blurred the distinction between earthquake specialists and lay observers because of a shortage of trained personnel, laypeople could not advance the scientific tasks. They were not able to explain why not all macroscopic precursory phenomena indicated a large earthquake, and why not all large earthquakes had sufficiently observable precursory phenomena. This task, however, had usually not been relegated to laypeople. When the government, commissioned scientists, and "mass scientists" mistakenly assumed the predictability of earthquakes, macroscopic precursory phenomena easily led to crying wolf. Conversely, insufficient preparation and vigilance could result when such phenomena did not appear. The socioeconomic costs of relying on these problematic estimations were never actually calculated, which contradicted the idea of their planned economy. Moreover, laypeople were presumably not able to calculate them, if this was possible at all. Worse, the mixed quality of data collection made the mass science project more difficult to put into practice and to develop further, because the "mass scientists" were not adequately paid and were not able to apply scientific methods rigorously. Forgetting to record, missing data, and filling in missing data from memory or guesswork were quite common. Having specialists verify the mixed-quality data from "mass scientists" imposed an additional large cost.
This blurring of the responsibilities of experts and non-experts demanded by the government complicated the task of earthquake prediction and conflicted with the very governmental goal for which the policy of turning everyone into a "mass scientist" had been adopted. Although the mass seismology project in China could be meaningful in terms of science education or science communication, which is of course socially valuable, the division between experts and non-experts seems to remain practically necessary for useful advice, for we saw that non-experts could complicate the implementation of national policies or even nullify the national goal. The social roles of lay observer and of scientist should thus be kept apart.
In contrast, the L'Aquila case exemplifies the common arrangement in which scientists or experts are appointed as government officials, which is not alien to contemporary politics. They are expected to serve various governmental missions. We can furthermore see such cases in intergovernmental organizations such as the WHO and the IPCC, in which scientists are appointed to make advisory claims to achieve intergovernmental missions, such as Tedros Adhanom Ghebreyesus in the WHO and Rajendra K. Pachauri in the IPCC.
Sometimes, the combined social role of scientist-official can become difficult to play because the value commitments of scientists and those of officials can be in tension, making advisory claims self-defeating. For example, what would it mean when a scientist-official claimed that "the swarms indicated no danger" or that "the swarms helped lessen the seismic potential by dissipating the accumulated seismic energy in this region"? Are these pure knowledge claims or advisory claims? Since scientists have long known that roughly half of large earthquakes had foreshocks, it is scientifically possible that a series of swarms indicates a coming large earthquake. Yet they also know that most swarms are just series of small earthquakes that pose no serious danger (Hough, 2016). Thus, as scientists, the L'Aquila scientist-officials should have known that both scenarios were possible. They, however, chose to emphasize only one in their public announcements. This choice was partial in light of their full expertise.
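To see in rough quantitative terms why both scenarios can be acknowledged at once, consider a toy Bayesian update. Every number below is a hypothetical placeholder chosen for illustration only; none is drawn from the L'Aquila record or from the literature cited here.

```python
# Toy Bayesian update: how much does an ongoing swarm raise the probability of
# an imminent large earthquake? All numbers are hypothetical placeholders.
p_mainshock = 1e-4          # assumed baseline probability of a large event in the window
p_swarm_given_main = 0.5    # "roughly half of large earthquakes have foreshocks"
p_swarm_given_none = 0.02   # swarms are common and usually lead to nothing

posterior = (p_swarm_given_main * p_mainshock) / (
    p_swarm_given_main * p_mainshock + p_swarm_given_none * (1 - p_mainshock)
)
print(f"posterior probability of a mainshock: {posterior:.4f}")
# The swarm raises the probability well above the baseline, yet it stays small:
# "no reliable prediction is possible" and "the hazard is elevated" can both hold.
```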
Moreover, they actively excluded the warning of a coming earthquake issued by the heretical science. This exclusion led them to fail their political mission of civil protection. That is to say, their advisory claims came into conflict with their political missions, while their knowledge, which could have been communicated more comprehensively, was compromised in the process of communicating it. The upshot is that, by making their advisory claims self-defeating, the scientist-officials fulfilled neither their role as scientists nor their role as officials.
Oreskes (2015) draws an interesting analogy between the estimation of this earthquake and climate change communication. She claims that earth scientists generally tend to underestimate risks, a claim which has been criticized as mistakenly interpreting IPCC climate scientists' model projections against actual observations (Pielke, 2019). However, her reason for carrying this analogy over to earthquake damage communication seems to be missing, for I find no basis for the claim that the scientist-officials either over- or underestimated the situation before the actual earthquake, because they never aimed to "predict the unpredictable" (Hough, 2016). The L'Aquila scientists just said something that conflicted with their claimed epistemic aim. In Oreskes' characterization of that episode, the scientist-officials actively avoided making a predictive claim and rejected that possibility from the beginning, whereas climate scientists generally do make some projections. There is an apparent contrast between rejecting and accepting predictive claims.
My diagnosis of the problem with the scientists' claim of no danger is that they made a self-defeating knowledge claim in conflict with what they knew. They did not, however, give such unreflective advice intentionally on the basis of their knowledge, but on wishful thinking, which resulted in wishful speaking. The problem with Oreskes' (2015) evaluation of the scientists' false reassurances is different. She seems to locate these scientists' understatement in their own acknowledgment of their inability to predict. A serious epistemic commitment to making no predictive claims, however, leads to neither overstatement nor understatement of future scenarios. Oreskes' reappraisal of the L'Aquila case thus seems self-defeating if scientists make such a commitment as epistemic agents. I suppose this self-defeating reappraisal results from her hope to justify her claim that earth scientists tend to understate catastrophes, "so they should not." If this is indeed her unstated normative assumption, then she might run the risk of falling into the problematic ideologically driven research she argues against (Shaw, 2021). Her claim turns out to be unfounded and leads to a skew towards a particular error type. Either over- or understatement, in general, can be intertwined with specific ideologies and value judgments, while a predictive claim free of either one is practically unavailable (see also Sect. 1.2). It is more important and realistic to make the hidden ideologies and value judgments explicit if such predictive claims play a significant role in policymaking (which did not happen in the L'Aquila trial; Oreskes, 2015), rather than giving groundless normative guidance to scientists.
Here, I propose two ways to avoid self-defeating advisory claims using the L'Aquila scientist-officials as my example.
The first solution is to split the social role of scientist-official. Scientists should make knowledge claims that are as comprehensive as their expertise allows; officials should make clear the relevant socioeconomic considerations they have in mind. The one-sided emphasis on avoiding an overestimation of the socioeconomic damages from a future earthquake was unsatisfactory, because the costs of underestimating such damages can be just as high. There is no fundamental reason for preferring one error over the other. The relatively comprehensive scientific expertise and the socioeconomic considerations should therefore be communicated to members of society, who can then decide what best suits their individual situations and which risks are acceptable (Mulargia et al., 2017; Parker & Lusk, 2019). The scientists would thus avoid making self-defeating claims because they are not expected to carry out certain non-epistemic missions such as civil protection. Deciding which scenarios are dangerous to society is not part of their job. The officials, instead, have to take up the task of judging the scenarios deliberated by scientists and deciding whether they are dangerous (Betz, 2013).
The second solution applies when the first is not possible, that is, when the officials must be scientists anyway. The scientists can still avoid making their advisory claims self-defeating by keeping it transparent whether their claims are pure knowledge claims or advisory claims. For individual scientists, this means they have an additional duty to keep their epistemic and non-epistemic goals separate in order to hold their claims accountable. Their claims can then be evaluated on the epistemic and non-epistemic values involved, respectively. For instance, if they unavoidably had to claim "no danger" as a pure knowledge claim, they should add some complementary information such as "no danger means that in most cases small earthquakes are not followed by a large earthquake, although we know that a large portion of large earthquakes has some small foreshocks." As an advisory claim, they should make clear that "no danger means that we suggest you just stay at home, because we do not want to scare you unnecessarily and bring about unnecessary socioeconomic costs; but it is, of course, possible that we have unfortunately underestimated the risk, and you would be better off staying on open ground for two months to avoid such a potentially catastrophic earthquake." A similar struggle arose when many Western countries were reluctant to suggest the simple measure of face-masking in public in the early phase of the COVID-19 pandemic (Chen et al., 2021). When emphasizing that there was no scientific evidence regarding the efficacy of face-masking against infection, did Dr. Tedros Adhanom Ghebreyesus of the WHO suggest "not to wear a face mask" because he was making a pure knowledge claim as a scientist considering the relevant scientific grounds? Or was he rather making an advisory claim as an international official careful not to expose the shortage of the global face mask supply? Did face-masking really require robust evidence of high scientific rigor, given that most medical doctors have used masks for a long time without scientifically robust evidence (Zagury-Orly, 2020; Howard et al., 2021)? The two ambivalent roles he played led him to give self-defeating advisory claims that delayed urgent, necessary action.
These two examples show that, sometimes, it could be an overloaded task if these epistemic agents were expected to carry out various non-epistemic missions. I suggest that, if overloaded, they should either split their social roles or keep their epistemic and non-epistemic goals transparent in order to keep the consequences resulting from their claims in check.
In this section, I have attempted to add substance to Kitcher's characterization of what a socially responsible scientist should do by clarifying what the historical epistemic agents should have done. His chief argument assumes that scientific knowledge can be translated into policymaking and that society should listen to scientists for the sake of advancing the social good and human flourishing (2001). I agree and would like to add some provisos. I argue that this translation, carried out without considering its epistemic and practical risks, can lead to more harm than good. Moreover, whether it is always good to listen to or trust scientists should be extensively (re)evaluated against the consequences brought about by scientists' claims (Oreskes & Macedo, 2019). The dead and the injured in the ruins after the L'Aquila earthquake may well have listened to and trusted the scientists. Thus, socially responsible epistemic agents should actively engage in communicating with society about situations in which they find it difficult both to share their knowledge qua epistemic agents and to carry out their non-epistemic missions qua advisors or officials. They must tell society what harms they may bring about if they fail in the roles they are expected to play.
Concluding remarks
In this paper, I use four vignettes to contextualize three main channels through which knowledge claims can result in conflicting advice due to epistemic and non-epistemic value commitments in tension. I stress that the value of openness to epistemic plurality and the value of social responsiveness should be constitutive of a proper ethics of science communication for making advisory claims in public. Epistemic agents should commit themselves to these two values to avoid communication-induced risks. I argue that the value of openness to epistemic plurality is best suited to reduce the risks of prematurely or mistakenly excluding epistemic alternatives and possible courses of action, which are beneficial for science and society. This "avoidance of exclusion" would already have had a harm-reducing effect on the mistaken claims in the vignettes. This value is, furthermore, conducive to alternative plurality and additive plurality in science. On the community level, the former may minimize short-term epistemic and practical risks and the latter may increase long-term reliability and credibility. Moreover, I sympathize with Kitcher's account of socially responsible science and argue that we should include the commitment to avoid making self-defeating advisory claims and harmful wishful speaking. This is achieved by minimizing the values in tension embedded in the social roles individual epistemic agents play.
"Philosophy"
] |
Analysis of binding affinity of sugars to concanavalin A by surface plasmon resonance sensor BIACORE
The observed dissociation constant $k_d$ of the lectin concanavalin A (ConA) from the glycoprotein asialofetuin (ASF) changed with the concentration of inhibitory sugars. The reciprocal of $k_d$ showed a linear relationship with the reciprocal of the sugar concentration. This regression line was found to be theoretically valid for the analysis of the kinetics of the interaction of sugars with ConA under conditions where the binding constant $K_i$ of the sugar for ConA is more than about 333.
Introduction
We previously proposed a quasi-equilibrium equation for the interaction between sugars and concanavalin A (ConA), which has been associated with immobilized glycoproteins such as asialofetuin (ASF) on an optical biosensor chip [3]. Equation (1) represents the theoretical correlation of sugar binding to ConA.
In this equation, kd represents the observed dissociation constant of the ASF-ConA complex in the presence or absence of inhibitory sugars. The kinetic constants k+ and k− are the intrinsic association and dissociation constants of the complex, respectively. The constant ktr represents the mass-transport rate constant of free ConA from the stationary phase to the mobile phase in the flow-cell of the biosensor chip. R0 and Rt are the biosensor responses correlated with the amount of ConA bound to the immobilized ASF at time 0 and time t, respectively. At equilibrium, the inhibitory sugar S, whose free concentration is denoted [S], binds to ConA according to the association constant Ki. We found that kd changes with [S] according to Eq. (2).
Equation (2) holds under the condition [S] ≫ 1/Ki. The parameters κ and ν represent the affinity of the sugar for ConA and the intrinsic association of ConA with ASF in the absence of sugars, respectively, as shown by Eqs (3) and (4).
All the sugars should yield the same ν, independent of the inhibitory sugar. In fact, the ν values obtained with sugars having higher κ values were the same, whereas those obtained with sugars having lower κ values were not, being greater than those with higher κ values. These results could be due to the fact that sugars with higher κ values bind to ConA under the condition [S] ≫ 1/Ki, whereas the binding of those with lower κ values does not satisfy this condition. In the present study, we examined further the binding of various sugars to ConA in order to understand the theoretical background of Eqs (1)-(4).
Experimental
Chemicals and reagents were of the highest grade commercially available. The interaction of sugars with ConA was determined with an optical biosensor as described previously [3].
Results and discussion
The effects of D-glucose (Glc), D-glucosamine (GlcNH2), N-acetyl-D-glucosamine (GlcNAc), D-mannose (Man), D-mannosamine (ManNH2), N-acetyl-D-mannosamine (ManNAc) and D-galactose (Gal) on the dissociation of the ASF-ConA complex were examined. As shown in Fig. 1, Gal showed no appreciable effect on the ASF-ConA complex. In contrast, the other sugars increased the dissociation of the ASF-ConA complex in a dose-dependent manner, the effect of Man being the largest. We then analyzed these effects according to Eq. (2), as shown in Fig. 2, and the parameters ν and κ are summarized in Table 1. We found that the values of ν and κ for Man, Glc, and GlcNAc agreed with the values reported previously [1][2][3].
In Table 1, the obtained ν values were not all the same, increasing with the κ values. This could be due to the error caused by the fact that the logistic term of the theoretical Eq. (1) is not taken into consideration in Eq. (2). This error appears significant at higher κ values. To determine the limits of Eq. (2), we examined how the parameter ν changes with κ for various Ki. We simulated the theoretical curves according to Eq. (1). It was found that the ν values were almost the same in the region Ki > 330. Therefore, it is concluded that Eq. (2) is applicable for Ki > 330; that is, Eq. (2) can be used in practice for sugars with κ > 0.01.
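For readers wishing to reproduce this kind of analysis, the sketch below illustrates a double-reciprocal fit of the type described above: 1/kd is regressed against 1/[S] and the intercept, slope and r² of the regression line are reported. It is only a minimal illustration in Python; the numerical data are made up, and the exact mapping of intercept and slope onto ν and κ should be taken from Eqs (2)-(4) of the original work rather than from this sketch.

import numpy as np

def double_reciprocal_fit(sugar_conc, kd_obs):
    """Fit 1/kd against 1/[S] by ordinary least squares.

    sugar_conc : array of inhibitory sugar concentrations [S]
    kd_obs     : array of observed dissociation constants kd at each [S]
    Returns (intercept, slope, r_squared) of the regression line.
    """
    x = 1.0 / np.asarray(sugar_conc, dtype=float)
    y = 1.0 / np.asarray(kd_obs, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    y_fit = intercept + slope * x
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    return intercept, slope, r_squared

# Illustrative (made-up) data for a single sugar:
conc = np.array([1e-3, 2e-3, 5e-3, 1e-2])        # mol/L
kd   = np.array([2.1e-3, 3.9e-3, 8.8e-3, 1.6e-2])
intercept, slope, r2 = double_reciprocal_fit(conc, kd)
print(intercept, slope, r2)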
Conclusion
We previously proposed that Eq. (2) is applicable to the interaction between sugars and ConA associated with immobilized glycoproteins. In this study, we examined further the binding of various sugars to ConA in order to understand the theoretical background of Eq. (2). It was found that this equation can be used for quantitative analysis of the binding of sugars to ConA with Ki more than about 333.
Fig. 2. Double reciprocal plots for the effects of inhibitory sugars on the ConA-ASF complex. The results shown in Fig. 1 were plotted according to Eq. (2). Sugars are as in Fig. 1.
Table 1
Experimental parameters ν and κ determined for various sugars according to Eq. (2). a The intercept and slope were obtained by linear regression analysis of the 1/[S] versus 1/kd plots. b The reciprocal of the slope according to Eq. (…). c Square of the correlation coefficient.
"Biology",
"Chemistry"
] |
In Silico Engineering of Proteins That Recognize Small Molecules
Protein engineering is a rapidly evolving discipline, with much research currently focused on understanding the catalysis, specificity, stability, and folding of proteins, structure-function relationships and protein design principles. Engineered proteins are now successfully being used in biotechnology and in therapeutic settings. Advances in technology and genetic and chemical techniques have enabled a competitive edge in developing improved protein and peptide based therapeutics with reduced immunogenicity, improved safety and greater effectiveness.
Introduction
The ability of proteins to recognize other molecules in a highly selective and specific manner and to create supramolecular complexes has many biological implications. For example, interactions between receptors and ligands, antigens and antibodies, DNA and proteins, or lectins and sugars are involved in many biologically important processes, including transcription of genetic information, enzyme catalysis, transmission of nervous and hormonal signals, and host recognition by microbes. Therefore, characterizing the structure and energy profile of such supramolecular complexes appears to be a key factor in understanding biological function. This may have, in many cases, direct pharmacological consequences. The function of many proteins is driven by reversible binding to small molecules, with either an activating or an inhibitory effect on the protein's activity. Under these circumstances, it is clear that, in any drug design endeavor where the goal is to find or build a small molecule that can regulate the function of a protein, it is absolutely essential to understand the stability and behavior of protein-ligand complexes.
However, there is also another kind of approach to determining protein recognition ability and selectivity mechanisms. It is called protein engineering, and it is based on altering the affinity/selectivity of a protein by substituting some amino-acid residues with others in order to identify the most important residues and their specific contribution to the binding activity. Protein engineering is useful not only in the characterization of a protein's binding abilities, but also has applications in bioanalysis and biotechnology. For example, a protein may be engineered (i.e., modified by substituting amino acid residues) to bind specific carbohydrates on the cell surface, and subsequently be used as a marker for diseases characterized by such glycosylation. Another pharmacologically relevant event that can benefit from protein engineering is pathogen/host recognition. In this case, protein engineering may be employed, for instance, to mimic bacterial mutations that lead to multidrug resistance, to understand their mode of action and to develop new antibacterial drugs. This is certainly a timely issue, as infectious diseases are a leading cause of death worldwide, and they are often connected with drug resistance. A similar situation occurs in the case of viruses, where the high rate of mutation turns the protein of interest into a continuously moving target, making it tedious to develop drugs or vaccines, e.g., for HIV or influenza viruses.
Protein engineering is typically performed in vitro, with in vivo consequences and applications. In some cases, it may be very efficient to perform computer modeling and simulations before starting wet laboratory experiments. In such cases we are talking about in silico protein engineering, and the goal is to design appropriate mutations in a much faster and cheaper way. In this chapter, we will cover the majority of in silico approaches used for protein engineering. The chapter describes not only the procedures involved in the in silico engineering process itself, but also what kind of information is needed before the in silico process can be started. The chapter is composed of several sections. The first describes methods for 3D structure prediction, a necessary step for any in silico engineering, but not involved in the engineering itself. We further describe various approaches for in silico mutagenesis. Afterwards we introduce a number of techniques which enable the prediction of the preferred orientation of the ligand in the binding pocket, as well as the calculation of the binding free energy, again a technique not directly included in protein engineering itself, but necessary to perform it. Some successful examples of in silico protein engineering are also given. The whole process is schematically shown in the flowchart in Fig. 1.
3D-Structure as the key prerequisite
A number of proteins that are involved in the cell recognition machinery bind small molecules. In this case, we call these proteins receptors, and the small molecules ligands. The 3D structure of a receptor is the starting point in the in silico protein engineering process. Current experimental methods for protein structure determination are very well established. If the experimental structure is not available, computational approaches are used to model the 3D structure of the receptor.
Experimental 3D-structure
The 3D protein structure can be obtained by X-ray crystallography or by NMR spectroscopy. Both methods allow the atomic coordinates to be refined against experimental structural restraints and constraints. The final 3D model is obtained when the refinement statistics reach relevant global minimum values. The quality of a structure from X-ray crystallography or NMR spectroscopy is defined by the experimental data, but the quality of the refined model depends on the interpretation of the data through the personal view of the scientist. In most cases, this freedom in model interpretation is the main source of uncertainty in the results obtained by refinement approaches.
X-ray crystallography
The first protein structure determined by X-ray crystallography was solved in the late 1950s. Since that success, over 60 thousand X-ray crystal structures of proteins, nucleic acids and other biological macromolecules have been determined. X-ray crystallography is used to determine the arrangement of atoms in a crystal lattice. The procedure for obtaining a 3D structure comprises four key steps (see Fig. 2). The crystal under investigation is placed in the path of an X-ray beam, which, upon collision with the crystal, is diffracted in a specific pattern based on the structure of the lattice. The diffraction pattern is used to compute the electron density map of the crystal, from which the mean positions of atoms in the crystal can be determined, together with other information. The resulting electron density map is an average electron density of all the molecules within the crystal. Structure refinement refers to the process by which structural models are fit to the information gained from the electron density map. During structure refinement, automated tools for chain tracing, side-chain building, ligand building and water detection are used. The structure refinement continues until the correlation between the diffraction data and the model reaches a global minimum (Giacovazzo, 2002).
The atomic positions and their respective B-factors (Debye-Waller factors) can be refined to fit the observed diffraction data. The B-factor, also termed the temperature factor, describes the degree to which the electron density is spread out, accounting for thermal motions and reflecting the fluctuation of atoms about their average positions. Thus, for proteins, the B-factor allows for the identification of areas of large mobility, such as disordered loops, but it can also be a marker of errors in the process of model building (Yuan et al., 2005). The relative agreement of the structure with the experimental data is measured by the R-factor and the "free" R-factor (R-free). R-free is analogous to the R-factor but is calculated from a subset (~5%) of reflections that were not included in the structure refinement. The value of R-free is monitored during the whole refinement process, and it prevents over-refinement and over-interpretation of the data (Brunger, 1992).
Fig. 2. The four main steps in solving a protein structure by X-ray crystallography: (1) crystallize the protein, (2) collect the diffraction data, (3) calculate the electron density map, (4) refine and validate the model of the protein structure.
A number of factors contribute to the final quality of an X-ray structure. The first factor relates to the crystal characteristics and its diffraction properties, and is evaluated in terms of resolution. Here, the term resolution refers to the level of detail that can be inferred from the electron density map. For proteins, resolutions of less than 2.5 Å are considered meaningful, though the goal is to obtain resolutions of under 1.5 Å, where individual atoms can be clearly pinpointed in the electron density map. Most errors result from highly disordered areas in the electron density maps, such as flexible loops of proteins. The electron density of atoms with high residual disorder is smeared in the electron density map and is no longer detectable. Atoms that give weak scattering (i.e., diffraction of the X-ray beams), such as hydrogen, are normally invisible. Single atoms of protein side chains can be detected multiple times in an electron density map because of multiple conformations of the respective residues (di Luccio & Koehl, 2011).
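As an aside, the agreement statistics mentioned above can be illustrated with a short sketch. The standard crystallographic residual R = Σ| |Fobs| - |Fcalc| | / Σ|Fobs| is computed for the working set and for a roughly 5% held-out "free" set; the structure-factor amplitudes below are randomly generated stand-ins, not real diffraction data.

import numpy as np

def r_factor(f_obs, f_calc):
    """Crystallographic residual: R = sum(| |Fobs| - |Fcalc| |) / sum(|Fobs|)."""
    f_obs = np.abs(np.asarray(f_obs, dtype=float))
    f_calc = np.abs(np.asarray(f_calc, dtype=float))
    return np.sum(np.abs(f_obs - f_calc)) / np.sum(f_obs)

rng = np.random.default_rng(0)
n_reflections = 10_000
f_obs = rng.uniform(10.0, 100.0, n_reflections)         # illustrative amplitudes
f_calc = f_obs * rng.normal(1.0, 0.05, n_reflections)   # "model" amplitudes with ~5% noise

# Hold out ~5% of reflections as the "free" set, never used in refinement.
free_mask = rng.random(n_reflections) < 0.05
r_work = r_factor(f_obs[~free_mask], f_calc[~free_mask])
r_free = r_factor(f_obs[free_mask], f_calc[free_mask])
print(f"R-work = {r_work:.3f}, R-free = {r_free:.3f}")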
NMR spectroscopy
NMR spectroscopy is often the only way to obtain high resolution information on protein dynamics as well as on the protein structure in a solvent. NMR spectroscopy uses the magnetic properties of nuclei that possess a spin. To facilitate NMR experiments, it is necessary to isotopically label the protein with 13C and 15N (for 1H there is no need to label the protein because this isotope has a natural abundance of 99.9%). The procedure is schematically pictured in Fig. 3.
Fig. 3. Solving a protein structure by NMR in solution requires: 1) knowing the amino acid sequence, 2) measuring the multidimensional spectra, 3) calculating distances from NOE and J-coupling effects and 4) refining and validating the 3D structure of the protein.
The molecule of interest is placed in a strong magnetic field, and each of these nuclei is characterized by a unique resonance frequency, depending on the electron density of the local chemical environment (chemical shifts), but also on the combination of the local magnetic field and the external field. In the case of proteins, the number of nuclei involved can be large, therefore multidimensional experiments (2D, but also 3D and 4D experiments) are usually performed. The most important method for protein structure determination utilizes NOE (Nuclear Overhauser effect) experiments to measure the distances between pairs of atoms within the molecule that are not connected via chemical bonds (through-space coupling effects). Other NMR experiments are performed in order to measure the distances between pairs of atoms that are connected through chemical bonds (J-coupling). The goal is to assign the observed chemical shifts from the multidimensional spectra to their specific atoms (nuclei) in the protein. All the values are then quantified and translated into angle and distance restraints. Most of these restraints correspond to ranges of possible values rather than precise constraints. These restraints are subsequently used to generate the 3D structure of the molecule by solving a distance geometry problem (Wüthrich, 1990, 2003). The structure determination of macromolecules by NMR spectroscopy shares similarities with X-ray crystallography in terms of possible sources of errors. The errors in an NMR structure can result from an improper experiment setup, as well as from human misinterpretation of the experimental data (Saccenti & Rosato, 2008). Molecular modeling techniques are used to generate a set of models for the protein structure that satisfy the obtained experimental restraints, as well as standard stereochemistry. Analogously to X-ray methods, the quality of NMR measurements affects the quality of the structures. The value of the root mean square (RMS) difference between each model and a "mean" structure defines the precision of a set of models for a protein structure. The quality of each model is evaluated by the number of experimental restraint violations in the final model.
Homology modeling
Despite significant progress in X-ray crystallography and NMR spectroscopy, the structures of many biotechnologically and therapeutically relevant proteins remain undetermined for various reasons. In such a case, homology modeling can be used to obtain their 3D structure. Homology modeling is a purely computational procedure that consists of building a protein model using a structural template, normally coming from proteins with a known structure. The procedure is composed of four key steps, as seen in Fig. 4.
Fig. 4. Homology modeling consists of: 1) identification of the template, 2) alignment of the target sequence with the template sequence, 3) building the target protein backbone, loops and side chains and 4) refining and evaluating the final model.
Template selection and sequence alignment
An initial step in comparative modeling is to check whether there is any protein in the current PDB database with a sequence similar to that of the protein of interest. If so, the structure of this protein will be used as a template. The search for the template proceeds using a sequence comparison algorithm that is able to identify global sequence similarity (i.e., the degree to which the sequence of amino acids is conserved in the protein under investigation compared to the template protein). Homology modeling of a target protein sharing over 30% sequence identity with its template is expected to generate structural models whose accuracy is close to that of an experimental structure; however, Roessler and coworkers showed that even proteins sharing 40% sequence identity can display different folds (Roessler, 2008).
The sequence of the protein with unknown structure is aligned against the sequence of the template protein, meaning that the sequences are arranged in such a way that the regions which contain the same amino acids in both proteins are superimposed. Then the Cα coordinates of the aligned residues from the template are copied over to the target protein in order to form the skeletal backbone (Nayeem et al., 2006). Commonly used alignment techniques are standard pairwise sequence alignment, where only two sequences are compared at a time, and multiple sequence alignment, where more sequences are compared at a time and which is generally used when the target and template sequences belong to the same family. There are complex sequence alignment algorithms that optimize a score based on a substitution matrix and gap penalties; a minimal example is sketched below. Most errors are caused by the sequence alignment technique. Errors appear frequently in the loop regions between secondary structures, as well as in regions where the sequence similarity is low. Structural alignment techniques are also available, which attempt to find areas of structural similarity between proteins. Recent techniques aim to use as much information as possible while performing the sequence alignment (amino-acid variation profiles, secondary structure knowledge, structural alignment data of known homologs) (Nayeem et al., 2006; Zhang, 2002).
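The sketch below is a minimal, self-contained illustration of the kind of pairwise alignment algorithm referred to above (Needleman-Wunsch style dynamic programming). It uses a flat match/mismatch score and a linear gap penalty instead of a real substitution matrix such as BLOSUM, so it is meant only to show how a score based on substitutions and gap penalties is optimized, not to replace the cited tools.

def needleman_wunsch(seq_a, seq_b, match=2, mismatch=-1, gap=-2):
    """Minimal global alignment with a flat substitution score and linear gap penalty."""
    n, m = len(seq_a), len(seq_b)
    # score[i][j] = best score of aligning seq_a[:i] with seq_b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if seq_a[i - 1] == seq_b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,   # substitution
                              score[i - 1][j] + gap,     # gap in seq_b
                              score[i][j - 1] + gap)     # gap in seq_a
    # Trace back to recover one optimal alignment.
    align_a, align_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch) if i > 0 and j > 0 else None
        if i > 0 and j > 0 and score[i][j] == diag:
            align_a.append(seq_a[i - 1]); align_b.append(seq_b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            align_a.append(seq_a[i - 1]); align_b.append("-"); i -= 1
        else:
            align_a.append("-"); align_b.append(seq_b[j - 1]); j -= 1
    return "".join(reversed(align_a)), "".join(reversed(align_b)), score[n][m]

a, b, s = needleman_wunsch("HEAGAWGHEE", "PAWHEAE")
print(a); print(b); print("score:", s)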
Loop building
Loops participate in many biological events and contribute to functional aspects such as the formation of enzyme active sites or ligand-receptor recognition. The flexible nature of loops causes problems in the prediction of their conformation. Databases of loop conformations or modeling by ab initio methods are used in order to determine the proper structure of loops. In the database approach, a library of protein fragments is scanned for fragments whose length matches the length of the modelled loop (for short loops) (di Luccio & Koehl, 2011; Zhang, 2002). The ab initio loop prediction approach relies on a conformational search guided by various scoring functions and is used for longer loops (Olson et al., 2008; van Vlijmen et al., 1997).
The side-chain positioning problem
Most of the side-chain positioning methods are based on rotamer libraries with discrete side-chain conformations. Rotamer libraries contain a list of all the preferred conformations of the side-chains of all twenty amino acids, along with their corresponding dihedral angles (Lovell, 2000). Side chain prediction techniques choose the best rotamer for each residue of the protein based on a score that includes both geometric and energetic constraints (combinatorial problem). The combinatorial problem is solved by heuristic techniques such as mean field theory, derivatives of the dead-end elimination theorem or Monte Carlo techniques (Vasquez, 1996).
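To make the combinatorial nature of the problem concrete, the toy sketch below enumerates all combinations of a few hypothetical rotamers and picks the combination with the lowest total of self and pairwise energies. Real side-chain packing tools use much larger rotamer libraries and the heuristic searches mentioned above rather than brute-force enumeration; all residue names and energies here are invented.

from itertools import product

# Toy rotamer optimization: each residue has a few candidate rotamers with a
# "self" energy (internal strain / backbone fit) and pairwise clash energies
# with rotamers of neighbouring residues.  All numbers are illustrative.
self_energy = {
    "Res1": [0.0, 0.4, 1.1],   # three rotamers for residue 1
    "Res2": [0.2, 0.0],        # two rotamers for residue 2
    "Res3": [0.0, 0.9, 0.3],
}
# pair_energy[(res_i, rot_i, res_j, rot_j)] -> clash penalty (missing pairs = 0)
pair_energy = {
    ("Res1", 0, "Res2", 0): 2.5,
    ("Res2", 1, "Res3", 2): 1.0,
}

def total_energy(assignment):
    names = list(assignment)
    e = sum(self_energy[r][assignment[r]] for r in names)
    for i, ri in enumerate(names):
        for rj in names[i + 1:]:
            e += pair_energy.get((ri, assignment[ri], rj, assignment[rj]), 0.0)
    return e

residues = list(self_energy)
best = min(
    (dict(zip(residues, combo)) for combo in product(*(range(len(self_energy[r])) for r in residues))),
    key=total_energy,
)
print(best, total_energy(best))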
Refinement and validation of the final model
When determining the structure of a protein by homology modeling, the last step is refining the model. However, it has been shown that refining a structural model by energy minimization only (i.e., without experimental constraints) often leads to structures that differ from those obtained by X-ray crystallography. To avoid such problems, several approaches can be applied, including evolutionarily derived distance constraints (Misura et al., 2006), the combination of molecular dynamics and statistical potentials (Zhu et al., 2008), adding a differentiable smooth statistical potential (Summa & Levitt, 2007) or considering solvent effects (Chopra et al., 2008).
For the model validation step, scoring functions are used. These are functions based on statistical potentials, local side-chain and backbone interactions, residue environments, packing estimates, solvation energy, hydrogen bonding, and geometric properties. The validation of models can also come from experiments, and experimental constraints/restraints obtained later can be used to improve the accuracy of the respective models (di Luccio & Koehl, 2011).
Generally, the quality of the homology model depends on the quality of the sequence alignment and of the template structure. The presence of alignment gaps (commonly called indels) in the target but not in the template complicates the model building process. In addition, it is very hard to deal with gaps in the template structure (e.g., caused by the poor resolution of an X-ray structure). At 70% sequence identity between the model and the template, the root mean square deviation (RMSD) between the coordinates of the corresponding Cα atoms is typically ~1-2 Å. The RMSD can rise to 2-4 Å at 25% sequence identity. The errors are significantly higher in the loop regions, because of the increased flexibility in these areas, both in the target and in the template. Errors in side chain packing and positioning increase with decreasing amino acid sequence identity, and are also caused by the fact that most side chains can exist in several conformations. These errors may be significant, and they imply that homology models must be used carefully. Nevertheless, homology models can be useful in reaching qualitative conclusions about the biochemistry of the query sequence (conserved residues can stabilize the fold, participate in binding small molecules or play a role in the interaction with another protein or nucleic acid) (di Luccio & Koehl, 2011). The state of the art in homology modeling is assessed in a biannual large-scale experiment known as the Critical Assessment of Techniques for Protein Structure Prediction, or CASP. A particularly interesting example is provided by the application of homology modeling to virtual screening for GPCR (G-protein coupled receptor) antagonists (Evers & Klabunde, 2005).
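The Cα RMSD values quoted above are computed after optimal superposition of the two coordinate sets. A minimal sketch of such a calculation, using the Kabsch algorithm via a singular value decomposition, is shown below; the coordinates in the usage check are synthetic, not taken from any real model/template pair.

import numpy as np

def kabsch_rmsd(coords_model, coords_template):
    """RMSD between two matched sets of C-alpha coordinates after optimal superposition."""
    P = np.array(coords_model, dtype=float)
    Q = np.array(coords_template, dtype=float)
    P -= P.mean(axis=0)                      # centre both coordinate sets
    Q -= Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)        # Kabsch algorithm
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid an improper rotation (reflection)
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    P_rot = P @ R.T
    return float(np.sqrt(np.mean(np.sum((P_rot - Q) ** 2, axis=1))))

# Illustrative check: a rotated and translated copy of the same backbone gives RMSD ~ 0.
rng = np.random.default_rng(1)
template = rng.normal(size=(50, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
model = template @ Rz.T + np.array([5.0, -3.0, 2.0])
print(kabsch_rmsd(model, template))   # ~0.0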
Online portals, such as the Protein Structure Initiative (PSI) model portal (http://www.sbkb.org) or the Swiss-Model Repository (http://swissmodel.expasy.org), bring to the community a large database of models. The PSI model portal (http://www.proteinmodelportal.org) currently provides 22.3 million comparative protein models for 3.8 million distinct UniProt entries, with relevant validation data.
A variety of software is currently in use for homology modeling of protein structures: GeneMine: Homology modeling in GeneMine (Lee & Irizarry, 2001) uses SegMod, a segment match modeling protocol (Levitt, 1992). The target sequence is divided into short segments. Corresponding structural fragments are taken from a structural database and then matched to the sequence. The fragments are then fitted onto the framework of the template structure. The program generates 10 independent models, from which an average model is constructed and stereochemically refined to minimize conformational repulsion.
DS MODELER: The protein homology modeling program DS MODELER (Accelrys Software Inc.) includes the software tool MODELLER (Sali & Blundell, 1993). MODELLER makes structure predictions based on distance restraints obtained from the template, from the database of crystal structures in the PDB, and from a molecular force field. Loops are generated de novo, by a process that incorporates knowledge-based potentials from known crystal structures.
ICM:
The homology modeling option in ICM (Abagyan & Batalov, 1997) is completely automated. The template is used for matching the backbone, as well as the side chain conformations for the residues that are identical to the template. Loops are inserted from conformational databases with matching loop ends. The non-identical side chains are given the most preferred rotamer, and then optimized by torsional scan and minimization.
SWISS-MODEL: SWISS-MODEL is an automated protein structure homology modeling server accessible from the ExPASy Web server (Schwede et al., 2003). The input for SWISS-MODEL is a sequence alignment and a PDB file for the template. The homology model is constructed using the ProModII program (Peitsch, 1995). Model construction includes backbone and side chain building, loop building, validation of the quality and of the packing of the model. The model coordinates are returned in PDB format.
Threading
Threading is used to model the structure of a protein when no homologs with a known 3D structure are available. Protein threading is based on the idea that there is a limited number of different folds in nature (approximately 1000), and thus a new structure has a similar structural fold to those already deposited in the PDB. The threading approach is a specialized sub-class of fold recognition. It works by comparing a target sequence against a library of potential fold templates using energy potentials and/or other similarity scoring methods. The template with the lowest energy score (or highest similarity score) is then assumed to best fit the fold of the target protein. The procedure is composed of three key steps shown in Fig. 5.
Threading improves the sequence alignment sensitivity by introducing structural information (the secondary or tertiary structure of the targets) into the alignment. For example, some amino acids are preferred in helical secondary structure, some can appear more frequently in hydrophobic environments, etc. This different behavior of amino acids produces different secondary and tertiary structures of proteins, depending on what environment they are exposed to.
The earliest threading approach was the '3D profiles' method (Luthy et al., 1992), in which the structural environment at the position of each residue of the template is classified into 18 classes, based on the position status, local secondary structure and polarity.
Frequently used threading methods are based on the Profile Hidden Markov Model method (HMM) (Durbin, 1998). All the sequences in the database are clustered into a set of families. In an HMM algorithm, the target is represented by the predicted secondary structure, while the template structures are represented with the template's secondary structure patterns. The majority of current threading methods are based on residue pairwise interaction energy methods, where, in each step of the threading procedure, the alignment score is calculated by adding up all the pairwise interaction energies between each target residue and the template residues surrounding it.
Threading methods are not always able to give a good sequence-structure alignment. The first reason is that the structural information involves many approximations. Most of the threading methods use a 'frozen' approximation, which assumes that the target residues are in the same environments as the template residues if they belong to the same structural fold. However, especially in loop regions, two homologous structures can have slightly different environments. Therefore, only conserved regions are used in threading (Madej et al., 1995).
A variety of threading software is available:
GenTHREADER is a fast and powerful protein fold recognition method (Jones, 1999a). Structural alignment profiles are used in the construction of its fold library. PSI-BLAST (Position-Specific Iterated Basic Local Alignment Search Tool; Altschul et al., 1997) profiles, bidirectional scoring and secondary structures predicted by PSIPRED (Jones, 1999b) have also been incorporated into the modified protocol. Because of these implementations, the sensitivity and the accuracy of the alignments is increased (McGuffin & Jones, 2003). New implementations for structure prediction on a genomic scale and for discriminating superfamilies from one another were added recently (Lobley, 2009).
3D-PSSM (Kelley et al., 2000) uses PSIPRED to predict the secondary structure of target proteins, and PSI-BLAST for sequence-profile alignments. The target profiles are aligned against 3D position-specific scoring matrices (PSSMs), which are generated for the templates within the fold library. For each template, PSI-BLAST is used to generate an initial 1D sequence-based PSSM, which is then further enhanced using solvation potentials, secondary structures and structural alignments, resulting in a 3D-PSSM.
Phyre2 (Kelley & Sternberg, 2009) is a major update to the original Phyre server. It is designed to predict the 3D structure of a protein from its sequence. Phyre2 uses the alignment of hidden Markov models via HHsearch (Söding, 2005) in order to significantly improve the accuracy of the alignment, as well as the rate of detection of homologous regions. For regions that are not detectable by homology, ab initio folding simulations called Poing are used (Jefferys et al, 2010).
In Silico mutagenesis of proteins
The ultimate goal of protein engineering is to design a protein with novel properties, starting from existing proteins. Protein engineering in the field of recognition has been particularly successful in changing ligand specificity and binding affinity. Consequently, we are interested in changing the structure of a macromolecule in a predetermined way, such that we can affect its recognition ability. During the last years, the availability of computational and graphical tools, which allow to display and explore the three dimensional structures of proteins, has made in silico mutagenesis easier and more feasible.
Basically, two approaches are available: mutation of a single residue or of multiple residues.
Performing in silico mutagenesis
The 3D structure of a protein molecule is generally stored as a text file which contains information about the chains, residues, atoms and atom types, atomic coordinates and their occupancy. Performing in silico protein mutagenesis basically means changing the lines of the text that encode the information about the residue being mutated, followed by a set of additional operations meant to properly integrate the mutated residue into the structure.
The mutation of one residue to another does not change anything in the backbone atoms. In addition, the protein side chains all start with the β carbon atom, which is the same in all the amino acids except glycine. Therefore, a single amino acid mutation is straightforward, since only the side chain atoms need to be changed. The most critical step is to check for steric clashes that may occur, especially when an amino acid with a short side chain is mutated into another one having a longer side chain. Moreover, the new amino acid may adopt several side chain orientations. This problem is handled using the concept of rotamers, which are defined as low energy side-chain conformations and are sampled according to their occurrence in proteins. Computational chemistry tools are able to include all the possible side chain conformations by using rotamer libraries. Several molecular modeling platforms facilitate single point mutations using the concept of rotamers.
Some commonly used software packages for performing single-point or multiple-point mutations at selected positions are described below. Swiss-Pdb Viewer: an application that allows several proteins to be analysed at the same time (Guex & Peitsch, 1997). The proteins can be superimposed in order to deduce structural alignments and compare their active sites. Swiss-Pdb Viewer allows browsing of a rotamer library for amino acid side chains. Amino acid mutations, H-bonds, angles and distances between atoms are easy to obtain.
Pymol: an open-source molecular visualization system (Schrodinger LLC). It can produce high quality 3D images of small molecules and biological macromolecules, such as proteins.
PyMol has a mutagenesis wizard to perform mutations. Several side chain orientations (rotamers) are possible. The rotamers are ordered according to their frequency of occurrence in proteins.
MODELLER: contains the routine 'mutate_model', which allows in silico side chain replacement, as well as modeling the final structure of the mutated protein. The routine introduces a single point mutation at a user-specified residue, and optimizes the mutant side chain conformation by conjugate gradient minimization and a molecular dynamics simulation (Sali & Blundell, 1993).
Triton: a graphical interface for computer aided protein engineering. It implements the methodology of in silico site-directed mutagenesis to design new protein mutants with required properties, using the external program MODELLER mentioned above. The program allows one-, two- or multiple-point amino acid substitutions to be performed in a very user-friendly and automated way. Output data can be easily visualized, written or organized as input files for any of the other computational chemistry modules that Triton interfaces. Routines to study enzyme kinetics and protein/ligand binding are available.
Alanine scanning mutagenesis
Alanine scanning mutagenesis is a method usually used to determine the contribution of a particular residue to protein function by mutating that residue into alanine. Alanine scanning involves substituting a larger group of atoms with a smaller one. Alanine is the residue of choice because it removes the side chain beyond the β carbon of the amino acid in question and, most importantly, because it does not alter the main-chain conformation (Wells, 1991). Additionally, it does not impose extreme electrostatic or steric strain on the system. Glycine would also cancel the contribution of the side chain, but could introduce conformational flexibility into the protein backbone, and therefore is not commonly used.
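Because an alanine substitution only removes atoms beyond Cβ, it can be mimicked directly on PDB-format text of the kind described in the previous section. The sketch below is a crude illustration under that assumption: it keeps only the N, CA, C, O and CB atoms of a chosen residue and renames it to ALA. The file names and residue number in the commented usage are hypothetical, and a real tool would also handle hydrogens, alternate locations and glycine targets.

BACKBONE_AND_CB = {"N", "CA", "C", "O", "CB"}

def mutate_to_alanine(pdb_lines, chain_id, res_number):
    """Crude alanine substitution on PDB-format text: keep only N, CA, C, O, CB
    of the selected residue and rename it to ALA."""
    out = []
    for line in pdb_lines:
        if line.startswith(("ATOM", "HETATM")):
            atom_name = line[12:16].strip()
            chain = line[21]
            resseq = int(line[22:26])
            if chain == chain_id and resseq == res_number:
                if atom_name not in BACKBONE_AND_CB:
                    continue                          # drop side-chain atoms beyond CB
                line = line[:17] + "ALA" + line[20:]  # rename the residue
        out.append(line)
    return out

# Usage (hypothetical file names and residue number):
# with open("wild_type.pdb") as fh:
#     mutant = mutate_to_alanine(fh.readlines(), chain_id="A", res_number=52)
# with open("mutant_A52ALA.pdb", "w") as fh:
#     fh.writelines(mutant)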
Alanine-shaving is the process of making multiple simultaneous alanine mutations and can be helpful, e.g., in investigating the cooperativity between side chains (Bogan & Thorn, 1998). Cooperativity can be detected by multiple mutation cycles (Carter, 1986), in which the free energy change caused by simultaneous mutations at selected residue positions in a protein is compared with the sum of the free energy changes associated with single mutations at each of the selected positions; a small numerical sketch is given below. This technique has also been used experimentally (Bogan & Thorn, 1998).
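The mutant-cycle comparison reduces to simple arithmetic: the coupling between two positions is the difference between the free energy change of the double mutant and the sum of the single-mutant changes. A minimal sketch with invented numbers:

def coupling_energy(ddg_double, ddg_single_a, ddg_single_b):
    """Mutant-cycle coupling term: deviation of the double mutation from additivity
    of the two single mutations.  A value near zero suggests the two side chains act
    independently; a non-zero value suggests cooperativity.  All free energies are
    in the same units (e.g. kcal/mol); the numbers below are illustrative only."""
    return ddg_double - (ddg_single_a + ddg_single_b)

print(coupling_energy(ddg_double=3.1, ddg_single_a=1.2, ddg_single_b=1.4))  # 0.5 -> coupled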
Qualitative and semi-quantitative approaches to evaluate the recognition ability of proteins
Molecular recognition can be viewed as the ability of a certain biomacromolecule to interact preferentially with a particular target molecule. A necessary prerequisite for any in silico protein engineering approach is the ability to evaluate how strong the recognition is. In biological systems, the process of recognition, governed by non-covalent interactions, results in the formation of a complex, where one biomacromolecule interacts with another biomacromolecule or a small molecule. Modern computer modeling and simulation methods, such as docking or free energy calculations, make it possible to study the molecular recognition process between two molecules in silico. Evaluation of the recognition ability of biomacromolecules is performed in two steps: (i) docking the small molecule into the biomacromolecule and (ii) analyzing the interactions and factors that determine the binding affinity.
Principles of molecular docking
Molecular docking is a widely-used computational tool for the study of molecular recognition, which aims to predict the preferred binding orientation of one molecule to another when bound together in a stable complex. Docking can be performed between two proteins, a protein and a small molecule, a protein and an oligonucleotide or between an oligonucleotide and a small molecule. We use the terms receptor and ligand to describe the role of binding partners in docking. Receptor denotes the system we are docking to (most commonly a protein), while ligand denotes the molecule being docked (drug-like compounds, peptide, carbohydrate, etc.). The docking product is commonly referred to as the complex. Inside the complex, the position of the ligand relative to the receptor is called the binding mode. The space within the receptor where binding modes are explored is commonly known as the search (or grid) space.
As already mentioned, receptor-ligand docking programs usually run in two primary parts. The first stage is searching the grid space and it leads to the generation of possible binding modes of the ligand within the predefined search space in the receptor. The second stage of docking is scoring, and it refers to the process of quantifying the binding strength of each mode of binding using a function called a scoring function. We describe each stage in the following pages.
Receptor site characterization
In the process of docking, the first issue is where to dock the ligand, i.e., how to define a search space on the receptor where the search will be performed. If the 3D structure and the binding site of the receptor is known, the search space is defined within and around this binding site. However, it can happen that the 3D structure of the receptor has not been solved, or there is no experimental evidence indicating a possible region for the ligand binding. In this case, it is recommended to do a prior identification of the binding site by using specialized tools such as PASS (Brady & Stouten 2000), Q-sitefinder (Laurie, 2005), ICM Pocketfinder (An J, 2005) etc. The ligand binding site prediction itself is a complex and tedious problem, thus we will not discuss the details here (for further reading see: Huang & Zou, 2010;Yuriev et al. 2011).
If no prior identification of the binding site is done, it is indeed possible to define the search space around the whole receptor. This approach is known as blind docking. A library of ligands is docked into the receptor in order to get an idea of the potential binding regions. The reliability of the blind docking results highly depends on the correct prediction of the binding regions, and represents a compromise between speed and accuracy.
Sampling protein and ligand conformational flexibility in docking
The main docking operations focus on the ligand. However, during the docking, several preliminary assumptions need to be made about the receptor flexibility. About 85% of proteins undergo conformational changes upon ligand binding, mainly movements in the essential binding site residues (Najmanovich et al., 2000). Therefore, performing accurate molecular docking is quite difficult, because of the many possible conformational states of both the biomacromolecule, and the ligand flexible areas. Depending on how conformational flexibility is handled during the docking, we distinguish between two classes. The rigid body docking method handles both binding partners as rigid bodies. The bond angles, bond lengths and torsion angles of the docking partners are not modified at any stage of the docking. By contrast, in flexible docking procedures, binding partners are considered as flexible molecules. This kind of procedure allows the specified atom or group of atoms to acquire the preferred position upon binding. Flexible docking is further categorized into two types: flexible ligand docking, where only the conformation of the ligand changes during the docking, and flexible receptor docking, where both the conformation of the ligand and the conformation of the receptor can change.
Sampling conformational and configurational space
The search space in which we sample the structural arrangement of two molecules without changing the conformation of either molecule is called the configurational search space; this term applies to rigid docking. In flexible docking, by contrast, we search over configurations of a system in which each of the two molecules can also adopt several conformations. The configurational and conformational search is done via a set of algorithms that sample all the desired degrees of freedom of the ligand in order to find the correct binding mode. The set of operations performed to improve a binding mode is often referred to as optimization. Optimization is a difficult problem in docking, because it requires a successful conformational search combined with effective global sampling across the entire range of possible docking orientations.
There are basically three general categories of such algorithms, based on shape matching, systematic search and stochastic search, respectively. Shape Matching is an approach based on the geometrical overlap between two molecules. The algorithm first generates a "negative image" of the binding site starting from the molecular surface of the receptor, which consists of a number of overlapping spheres of varying radii. The ligand is placed into the binding site using the surface complementarity approach, i.e., the molecular surface of the ligand has to attain maximum close surface contacts to the molecular surface of the binding site of the protein. To do this, ligand atoms are matched to the sphere centres of the negative image. The ligand can then be oriented in the binding site by performing least squares fitting of the ligand atom positions to the sphere centres. The degree of shape complementarity is measured by a certain score function. Maximizing this score function leads to the docked configuration. Note that this is not the function used in the second docking stage, though that one is also referred to as score or scoring function. Examples of docking programs which are based on this approach are DOCK (Kuntz et al., 1982), FRED (McGann et al., 2003) and MS-DOCK (Sauton et al., 2008).
Systematic search algorithms try to explore all the conformational degrees of freedom of the ligand and combine them with the search on the system with the receptor. Depending on how the search is carried out, there are three main subclasses of systematic search algorithms.
A-Systematic or pseudo-systematic search, where a huge number of poses is generated by rotating all the rotatable bonds by a given interval (in degrees). These poses are then filtered by using some geometrical and chemical constraints. The remaining poses are subjected to more accurate optimization. This hierarchical sampling method is currently used by the Glide and FRED (McGann et al., 2003) docking programs.
B-Fragmentation methods divide the ligand into small fragments (both rigid and flexible). First, a rigid core fragment is placed into the active site. Then, the more flexible fragments are sequentially linked by covalent bonds by using the "place-and-join" approach. Currently, docking programs like LUDI (Böhm, 1992), DOCK (Ewing & Kuntz, 1997), FlexX (Rarey et al., 1996) and eHiTs (Zsoldos et al., 2006) provide this methodology.
C-Database or conformational ensemble methods use an ensemble of pre-generated ligand conformations to deal with ligand flexibility, which is then combined with a search for proper receptor/ligand orientation. Databases or libraries of conformations can be generated within the docking program or separately, using other programs such as OMEGA (OpenEye Scientific, NM). FLOG (Miller et al., 1994) is a typical software using this methodology, but some other programs like MS-DOCK (Sauton et al., 2008) and Q-Dock (Brylinski & Skolnick, 2008) also offer this approach.
Random or stochastic methods are also available. They attempt to sample the space by making random changes to the receptor/ligand system. Whether a geometry change is accepted or rejected is decided using a predefined probability function. This may result in non-reproducible results, even if the docking is repeated with the same parameters. There are mainly four types of stochastic search algorithms.
A. Monte Carlo (MC) is used for a large set of optimization problems, ranging from economics and mathematics to nuclear physics or even regulating the flow of traffic. In docking, the ligand is first placed into the binding site of the receptor, and this binding mode is scored. A new geometry is generated by applying random changes to the rotatable bonds or to the position of the ligand with respect to the receptor. The new binding mode is then scored. If the score of the new binding mode is better than that of the old one, the change is accepted. Otherwise, a probability (P) of accepting the change is calculated as P ≈ exp(-ΔE/kBT). Here ΔE is the change in score, kB is Boltzmann's constant and T is the absolute temperature of the system. A random number (r) between 0 and 1 is generated, and if r < P, the change is accepted (a minimal sketch of this acceptance rule is given after the list of stochastic methods). After such an evaluation, another random change is applied to the ligand and the whole procedure is repeated until a reasonable number of orientations is obtained. AutoDock (Morris et al., 1998), ICM (Abagyan et al., 1994) and QXP (McMartin & Bohacek, 1997) are key examples of programs that use MC-based optimization procedures.
B. Genetic Algorithms (GA) are based on ideas derived from natural evolution, such as mutation, crossover, inheritance and selection. To solve the optimization problem, GAs simulate the survival of the fittest among individuals over consecutive generations. Each geometry of the ligand with respect to the protein is defined by a set of state variables called genes. Genes describe the translation, rotation and orientation of the ligand. A full set of a ligand's state variables is referred to as the genotype, whereas the phenotype is represented by the atomic coordinates. Genetic operations such as mutation, crossover, inheritance and selection are applied to the population until the fitness criterion is fulfilled.
Some of the most popular programs, like AutoDock (Morris et al., 1998), GOLD (Jones et al., 1995, 1997) and Lead Finder (Stroganov et al., 2008), include GA or hybrid approaches to find the optimal orientation of the ligand.
C. Tabu search (TS) is a meta-heuristic approach in which a local search is combined with a stored list of previously considered geometries and an acceptance criterion, which ensures that only new geometries are sampled further. A random change is only accepted if the RMSD between the new conformation and any of the previously sampled geometries is greater than a threshold. The programs PRO_LEADS (Baxter et al., 1998) and PSI-DOCK (Pei et al., 2006) are TS-based software.
D. Particle Swarm Optimization (PSO) is an evolutionary computational technique inspired by social behaviour. PSO exploits a population of individuals to probe the promising regions of the search space. The population is called a swarm and the individuals are called particles. These algorithms maintain a population of geometries by modeling swarm intelligence, a concept referring to the collective behaviour of otherwise fully independent particles. A number of particles are randomly set into motion through this space. At each iteration, they observe the fitness of themselves and their neighbours and emulate successful neighbours (those whose current position represents a better solution to the problem than theirs) by moving towards them. The major advantages of PSO, compared with GA, are its relative simplicity and quick convergence. Examples of docking programs that use swarm optimization are SODOCK (Chen et al., 2007), Tribe-PSO (Chen et al., 2006) and PSO@AutoDock (Namasivayam & Günther, 2007).
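A minimal sketch of the Metropolis acceptance rule used by the Monte Carlo search described under point A is given below. The "pose" is reduced to a single torsion-like variable and the scoring function is a toy double-well curve, so this only illustrates the accept/reject logic, not a real docking engine.

import math
import random

def metropolis_search(score, perturb, x0, n_steps=10_000, kT=1.0, seed=0):
    """Generic Metropolis Monte Carlo minimisation of a docking-style score.

    score   : function returning the score (lower = better) of a pose
    perturb : function returning a randomly modified copy of a pose
    x0      : starting pose (e.g. ligand position/orientation/torsions)
    """
    rng = random.Random(seed)
    x, e = x0, score(x0)
    best_x, best_e = x, e
    for _ in range(n_steps):
        x_new = perturb(x, rng)
        e_new = score(x_new)
        delta = e_new - e
        # Accept improvements outright; accept worse poses with P = exp(-dE/kT).
        if delta <= 0 or rng.random() < math.exp(-delta / kT):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

# Toy example: a one-dimensional "torsion angle" with two minima.
score = lambda phi: (phi ** 2 - 1.0) ** 2 + 0.3 * phi
perturb = lambda phi, rng: phi + rng.gauss(0.0, 0.2)
print(metropolis_search(score, perturb, x0=2.0, kT=0.1))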
Scoring ligand poses
Once a reasonable set of receptor/ligand geometries has been generated, ranking these modes is the second critical aspect of the docking procedure. To recognize the true binding modes from all the geometries, the binding affinity is scored using scoring functions, i.e., each binding mode is analysed by a set of equations and compared to the other binding modes. If the search algorithms predict a "correct" binding mode but the scoring function fails to rate this as a top scoring orientation, then the suggested output will be a false negative binding mode. Therefore, scoring functions should be able to distinguish between a true binding mode and all other modes explored. However, using a rigorous scoring function for several hundreds of binding modes is computationally expensive. Hence, computationally feasible empirical scoring functions are commonly used by all available docking software. Numerous scoring functions developed and evaluated so far can be grouped into three basic categories.
A. Force field based: A force field is a way to express the potential energy of the system by using a mathematical function and a set of parameters. A basic functional form of a force field encapsulates both bonded terms (between atoms that are linked by a covalent bond) and non-bonded terms (also called "non-covalent"). Non-bonded terms describe van der Waals and long-range electrostatic forces. The generic functional form used by force fields such as AMBER (Weiner & Kollman, 1981) or CHARMM (Brooks et al., 1983) can be written as (Eqs 1-3):

V(r^N) = Σ_bonds k_i (r_i − r_eq)^2 + Σ_angles k_i (θ_i − θ_eq)^2 + Σ_dihedrals (V_n/2)[1 + cos(nφ − φ_0)] + Σ_i Σ_{j>i} [A_ij/r_ij^12 − B_ij/r_ij^6 + q_i q_j/(D r_ij)]

where r^N denotes the geometry of the system, V(r^N) is its potential energy, r_i and r_eq are the actual and equilibrium bond lengths, respectively, for bond i, θ_i and θ_eq are the corresponding quantities for bond angles, φ and φ_0 are the corresponding quantities for dihedral angles, q_i and q_j are the partial charges on atoms i and j, respectively, r_ij is the distance between atoms i and j, D is the dielectric constant and the remaining symbols are force field parameters.
Force field based scoring functions calculate the binding score as a sum of individual contributions made by various interactions in the bound complex. Force field based scoring functions commonly used in docking software mainly use non-bonded and torsion terms. The binding process normally takes place in water, so the desolvation energies of the ligand and the protein are sometimes taken into account implicitly. Since hydrogen bonding is one of the dominating interactions for the majority of complexes, some of the docking software, like AutoDock (Morris et al., 2009) and G-Score (Kramer et al., 1999), include a separate term for the treatment of hydrogen bonding.
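As a small illustration of the non-bonded terms that dominate such scoring functions, the sketch below sums a 12-6 Lennard-Jones term and a Coulomb term over all receptor-ligand atom pairs. The single (A, B) parameter pair, the constant dielectric and the example coordinates and charges are illustrative simplifications; real force fields use per-atom-type parameters and more elaborate solvation treatments.

import numpy as np

def nonbonded_score(rec_xyz, rec_q, lig_xyz, lig_q, A=1e5, B=1e2, dielectric=4.0):
    """Sum of 12-6 Lennard-Jones and Coulomb terms over all receptor-ligand atom pairs."""
    rec_xyz = np.asarray(rec_xyz, dtype=float)
    lig_xyz = np.asarray(lig_xyz, dtype=float)
    rec_q = np.asarray(rec_q, dtype=float)
    lig_q = np.asarray(lig_q, dtype=float)
    # pairwise distances between receptor atoms (rows) and ligand atoms (columns)
    d = np.linalg.norm(rec_xyz[:, None, :] - lig_xyz[None, :, :], axis=-1)
    lj = A / d**12 - B / d**6
    coulomb = 332.0 * np.outer(rec_q, lig_q) / (dielectric * d)  # 332 converts e^2/Angstrom to kcal/mol
    return float(lj.sum() + coulomb.sum())

# Illustrative two-atom "receptor" and one-atom "ligand":
rec_xyz = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]]
rec_q   = [-0.5, 0.3]
lig_xyz = [[0.0, 3.5, 0.0]]
lig_q   = [0.2]
print(nonbonded_score(rec_xyz, rec_q, lig_xyz, lig_q))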
B. Empirical scoring functions:
Empirical scoring functions, as in the case of force field methods, calculate the binding score of a complex as a sum of several weighted empirical energy terms that account for various types of non-bonded interactions. However, as opposed to the force field methods, empirical scoring functions are much less systematic and general. The final score ΔG is calculated as a sum of weighted empirical energy terms, ΔG = Σ Wi · ΔGi, where ΔGi represents individual empirical energy terms, such as vdW energy, electrostatic energy, hydrogen bonding, desolvation, hydrophobicity, entropy etc., while Wi is the corresponding weight coefficient for a particular energy term, determined by linear fitting to an experimental data set. A set of X-ray receptor-ligand complexes and their corresponding experimental binding energies are usually used as training data to calculate the weight coefficients by regression analysis; a minimal fitting sketch is given below. Due to the simple nature of the equation, these methods are computationally much more efficient compared to force field based methods. However, there are also significant drawbacks. The general applicability of these functions is strongly dependent on the experimental data set used for their parametrization. It is not reliable to use such a scoring function for a data set that is structurally different from the training set. GlideScore, LigScore (Krammer et al., 2005), and X-Score (Wang et al., 2002) are examples of software using empirical scoring functions.
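The regression step can be illustrated with a few lines of code: given un-weighted energy terms for a small training set of complexes and their experimental binding free energies, the weights Wi are obtained by least squares. All the numbers and the four-term decomposition below are invented for illustration only.

import numpy as np

# Each row: un-weighted energy terms computed for one X-ray complex in the training
# set (e.g. vdW, H-bond, desolvation, rotatable-bond entropy term).  Illustrative values.
terms = np.array([
    [-32.1, -4.0, 6.2, 3.0],
    [-21.5, -2.0, 4.1, 5.0],
    [-45.8, -6.0, 9.0, 2.0],
    [-15.2, -1.0, 2.5, 7.0],
    [-38.4, -5.0, 7.7, 4.0],
])
dg_exp = np.array([-9.8, -6.1, -12.4, -4.0, -10.9])   # experimental binding free energies

# Least-squares fit of the weights W in  dG = sum_i W_i * dG_i
W, *_ = np.linalg.lstsq(terms, dg_exp, rcond=None)
print("fitted weights:", W)
print("predicted dG :", terms @ W)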
C. Knowledge-based scoring functions:
Knowledge-based scoring functions use the sum of the potential of mean force (PMF) between the protein and the ligand, using data derived from 3D structure databases. These scoring functions are based on capturing the frequency of occurrence of protein-ligand atom pairs in the structural database. It is assumed that each interaction type between a protein atom of type i and a ligand atom of type j, found at a certain distance r_ij, has an interaction free energy A(r), which is defined by an inverse Boltzmann relation (Eq. 4):

A(r) = −kB T ln[ρ(r)/ρ*(r)]

where kB is the Boltzmann constant, T is the absolute temperature, ρ(r) is the density of occurrence of the atom pair at distance r in the training set and ρ*(r) is this density in a reference state where the atomic interactions are zero.
The advantage of knowledge based scoring functions over empirical scoring functions is that there is no fitting to the experimental free energy of the complexes in the training set, whereas solvation and entropic effects are included implicitly. It should be noted that knowledge based scoring functions are used to reproduce the experimental structures rather than to predict binding energies. They can identify non-binders on their own or in combination with some other docking software during virtual screening. Since not all the possible interactions can be inferred from the crystal structure, these scoring functions may not be so robust and accurate, but they usually offer a good balance between speed and accuracy.
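The inverse Boltzmann relation of Eq. (4) can be illustrated with a short sketch that converts distance histograms for one atom-type pair into a potential of mean force. The histograms below are invented, and the handling of empty bins is a crude placeholder for the smoothing used by real knowledge-based potentials.

import numpy as np

def pmf_from_counts(pair_counts, reference_counts, kT=0.593):
    """Inverse-Boltzmann potential of mean force for one protein/ligand atom-type pair:
    A(r) = -kT * ln( rho(r) / rho*(r) ).  kT is ~0.593 kcal/mol at 298 K.
    Counts per distance bin stand in for the densities; bins with zero counts are
    set to +inf to avoid log(0)."""
    rho = np.array(pair_counts, dtype=float)
    rho_ref = np.array(reference_counts, dtype=float)
    rho /= rho.sum()
    rho_ref /= rho_ref.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        a = -kT * np.log(rho / rho_ref)
    return np.where((rho > 0) & (rho_ref > 0), a, np.inf)

# Illustrative histograms over distance bins for one atom-type pair:
observed  = np.array([0, 2, 30, 80, 55, 40, 35, 33])
reference = np.array([1, 5, 20, 40, 55, 60, 62, 64])
print(pmf_from_counts(observed, reference))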
Techniques to improve the performance of scoring functions
Consensus scoring: This is a combination of the information obtained from different scores. The approach is helpful in balancing out the error of individual scoring functions, thus improving the probability of finding an appropriate solution. Several published studies show that combining the scores from different methods performs better than considering only the individual scores. MultiScore (Terp et al., 2001) and X-Score (Wang et al., 2002) are the most popular examples using consensus scoring.
Clustering: We often find an incorrect geometry with a slightly more favorable binding score than the correct geometry. However, these incorrect geometries are found with a very low frequency (~1-2%) when multiple docking experiments are performed. Thus, RMSD-based clustering of all the docking solutions can be performed; a minimal sketch follows below. To get the correct pose, the best energy conformation from the most populated cluster should be chosen. Table 1.1 summarizes the main features, license type and source for the most popular docking programs. We further provide a more detailed description of a few selected pieces of docking software. We would like to state that these methods are not necessarily the most accurate ones, but they are definitely the most widely used and the most cited in the docking community.
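The clustering step can be sketched in a few lines: poses are visited from best to worst score and greedily assigned to the first cluster whose representative lies within an RMSD cutoff, after which the best-scored member of the most populated cluster is returned. This only loosely mirrors the AutoDock-style protocol described in the text; no superposition is applied because docked poses already share the receptor frame.

import numpy as np

def cluster_poses(poses, scores, rmsd_cutoff=2.0):
    """Greedy RMSD clustering of docked poses.

    poses  : list of (n_atoms, 3) coordinate arrays, all for the same ligand atoms
    scores : binding score per pose (lower = better)
    Returns the index of the best-scored pose in the most populated cluster.
    """
    def rmsd(a, b):
        return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

    order = np.argsort(scores)
    clusters = []                     # each cluster: list of pose indices, best first
    for idx in order:
        for members in clusters:
            if rmsd(poses[idx], poses[members[0]]) < rmsd_cutoff:
                members.append(idx)
                break
        else:
            clusters.append([idx])
    largest = max(clusters, key=len)
    return largest[0]                 # best-energy member of the most populated cluster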
Description of some commonly used docking programs
AutoDock: AutoDock3 (Morris et al., 1998) and AutoDock4 (Morris et al., 2009) are force field based docking programs which have been widely used for the automated docking of small molecules, such as peptides, enzyme inhibitors and other ligands, into macromolecules, such as proteins, enzymes and nucleic acids. AutoDock offers optimization procedures like simulated annealing, a genetic algorithm (GA) for global searching, a local search (LS) method to perform energy minimization, or a combination of both (GALS) for obtaining an accurate docked complex. The scoring function used in AutoDock is inspired by the MD programs AMBER, CHARMM or GROMOS; it includes terms for the Lennard-Jones potential, the Coulombic electrostatic potential, hydrogen bonding, a partial entropic contribution, desolvation upon binding and a hydrophobic effect. The scaling parameters for these terms were derived from a set of 30 protein-ligand complexes. The advantage of AutoDock4 over AutoDock3 is that it allows receptor flexibility, and an improved new force field is used to calculate the binding energy. The force field of AutoDock4 includes a new intramolecular term, and a full desolvation model for desolvating polar and charged atoms. AutoDock facilitates the clustering of all the docked orientations by defining a root mean square tolerance, which can also be used to find the potential binding regions. It was seen that the lowest energy structure in the most populated cluster successfully reproduces the crystal structure. AutoDock Vina (Trott & Olson, 2009): This is a new generation of docking software (referred to as Vina) from the Molecular Graphics Lab, the developer of the other versions of AutoDock. It is a user friendly, open source piece of software, capable of predicting binding modes with better accuracy, while being significantly faster than AutoDock4. It uses a combination of optimization algorithms, such as the genetic algorithm, swarm optimization and simulated annealing, to place the ligand in the binding site. The scoring function used in Vina is based more on machine learning than directly on a force field. Similarly to AutoDock4, it allows receptor flexibility.
The philosophy behind the development of Vina was to make the software easy to use, so most of the parameters used during docking are set by default, reducing the possibility of making manual mistakes. A further speed up in docking is achieved by multithreading. Thus, overall, Vina is very suitable for docking a large set of different compounds.
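For orientation only, the snippet below sketches how a Vina run is typically driven from Python through a configuration file and a subprocess call. The file names and box parameters are hypothetical, and the option names should be checked against the Vina version actually installed; this is a usage sketch, not a prescribed protocol.

```python
import subprocess
from pathlib import Path

# Hypothetical input files prepared beforehand (PDBQT format).
receptor = "receptor.pdbqt"
ligand = "ligand.pdbqt"

# Search box roughly centred on the presumed binding site (values are made up).
config = """\
receptor = {receptor}
ligand = {ligand}
center_x = 12.0
center_y = -4.5
center_z = 27.3
size_x = 20
size_y = 20
size_z = 20
exhaustiveness = 8
num_modes = 9
out = docked_poses.pdbqt
""".format(receptor=receptor, ligand=ligand)

Path("vina.conf").write_text(config)

# Run the vina executable; most parameters are left at their defaults,
# in line with Vina's design philosophy described above.
subprocess.run(["vina", "--config", "vina.conf"], check=True)
```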
DOCK:
The program package DOCK (Kuntz et al., 1982; currently version 6.4) basically works in a few subsequent steps. First, the program "sphgen" is employed in order to identify a binding site and to generate spheres within the active site. Secondly, the program "grid" is used to generate scoring grids. Then the program "DOCK" itself matches the spheres with the ligand atoms and uses the scoring grid to evaluate the ligand orientation. It constructs the ligand in the binding site step by step using the Anchor-and-Grow algorithm. Initially, the rigid anchor fragment of the ligand is placed at a selected position, and then it is gradually enlarged by adding the flexible fragments of the ligand. An additional extension to DOCK allows rescoring the docked configuration of the ligand using several secondary scoring functions.
ICM:
The Internal Coordinates Mechanics (ICM) software is a set of modules for various purposes, such as visualization, chemical drawing and editing, homology modeling, docking and virtual screening (Abagyan et al., 1994). The ICM-Docking and chemistry module performs flexible ligand docking in a grid based receptor field. The scoring function used in ICM primarily accounts for electrostatics, van der Waals, hydrogen bonds, and the hydrophobic term. ICM needs the protein structure to be converted into an ICM object before docking. It provides a simple, object based GUI which can be used for docking. The binding site can be defined by entering the binding site residues, using the graphical selection tool, or the implemented icmPocketFinder function. It generates receptor maps within the defined boundary, which are further used in the docking. It is necessary to define the initial position where sampling will begin. ICM facilitates interactive, as well as batch docking. In interactive docking, one ligand is docked at a time in the foreground, whereas batch ligand docking runs in the background and is thus ideal for large scale docking jobs and virtual screening of huge ligand libraries. ICM offers an attractive feature to visualize and browse the docking results, and scan the hit compounds.
TRITON:
We previously introduced our in house graphical tool TRITON, and mentioned its use for homology modeling and mutagenesis. Another functionality of TRITON relates to in silico engineering of protein-ligand binding properties. The program can be used as a graphical user interface for the docking software AutoDock3, AutoDock4 and Vina. It enables the user to perform common pre-docking tasks, like creating a project directory, reading structures, manipulating structures, calculating various types of charges and finally preparing input files for docking. Docking wizards make the job easy for new users, where the step by step procedure decreases the possibility of missing any of the docking parameters. TRITON also offers optimized docking parameters that can be used as starting points if the user is unsure about particular parameters of the AutoDock suite of programs, and it includes parameters for ions taken from case specific studies, so it is easy to handle ions during the docking. Another important feature of TRITON is that it facilitates interactive analysis and visualization of the docking results.
Free energy calculation
As discussed above, various docking software can successfully predict the correct binding mode of the ligand into the receptor (Taylor et al., 2002;Warren et al., 2006). However, the previously described empirical scoring functions, which are based on a single receptor/ligand structure, do not provide accurate enough predictions of the binding free energy (∆G), the key quantity characterizing the strength of the receptor/ligand interaction.
To tackle this problem, molecular dynamics (MD) or Monte Carlo (MC) based methods for free energy calculation were developed in the mid 1980s (Jorgensen & Ravimohan, 1985). These methods, formally rooted in statistical thermodynamics, are now frequently used to compute receptor/ligand binding free energy. The methods use molecular mechanics force fields and Newtonian physics to evaluate the dynamics of the system. In the case of MD, we follow the evolution of the system in time. The dynamics allows the system to accommodate various protein side chains as well as ligand conformations, and also ligand configurations with respect to the protein. Simulations are usually performed in the bound state. Here, we will discuss the methods most commonly used to evaluate the binding free energy between the receptor and the ligand, namely Free Energy Perturbation (FEP), Thermodynamic Integration (TI), and Molecular Mechanics Poisson-Boltzmann Surface Area (MM-PBSA). We will also give some notes about the combined quantum mechanics/molecular mechanics (QM/MM) techniques.
Free Energy Perturbation (FEP) and Thermodynamic Integration (TI)
The FEP and TI approaches for free energy calculation are based on statistical thermodynamics and are generally formulated not to calculate the absolute value of the free energy, but always a relative value, i.e., the free energy difference, ∆G, between two equilibrium states. This is of great importance, since for in silico mutagenesis applications we always need only relative values.
The FEP and TI free energy calculations are carried out using a thermodynamic cycle. Such a cycle, adapted for in silico mutagenesis purposes, is shown in Fig. 6. It involves mutating either the free receptor or the receptor/ligand complex (start state) into another state (end state) in which the receptor carries the mutation. The simulation can be performed in either implicit or explicit solvent. The final calculated quantity is ∆∆G. This number tells us whether the mutated protein (P_M) exhibits higher or lower affinity to the ligand L than the wild type protein (P_W).
As the start and the end state can be arbitrarily different, these calculations are sometimes referred to as computational alchemy. As the free energy is a state function, Eqs. 5 and 6 must hold.
The FEP calculations are based on Zwanzig's formula (Zwanzig, 1954) for the free energy difference ∆G between two states (see Eq. 7).
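In its standard form, consistent with the symbol definitions given below, Zwanzig's relation (Eq. 7) reads:

\[ \Delta G_{A\rightarrow B} = -\,k_B T \,\ln \left\langle \exp\!\left(-\frac{V_B - V_A}{k_B T}\right)\right\rangle_{A} \qquad \text{(7)} \]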
where k_B is the Boltzmann constant, T is the absolute temperature, ⟨ ⟩_A denotes the MD or MC ensemble average over a simulation run for state A, and V_A and V_B are the potential energies of states A and B, respectively. In general, an ensemble is a set of systems, identical in all respects apart from the instantaneous dynamics of the atoms, considered all at once, each of which represents a possible state that the real system might be in. The potential energy difference can be averaged over an ensemble generated using the start-state or end-state potential function for the forward and backward process, respectively.
The goal is to obtain convergence of the values resulting from Eq. 7 within a reasonable time. It is assumed that the relevant geometries sampled on the potential energy surface of state A have a considerable overlap with those of state B.
The transition of state A into state B may also yield high energy geometries in the complex because of steric clashes with the neighbouring atoms. To overcome this issue, the transition is done via many non-physical intermediate states that are usually constructed as a linear combination of the potentials calculated for the start and end state. The potential energy of an intermediate state between A and B is given by Eq. 8, where λ varies from 0 to 1. This state is a hypothetical mixture of states A and B: when λ = 0, V_λ = V_A, and when λ = 1, V_λ = V_B. Therefore, the transformation of state A into state B is done smoothly, by changing the value of the parameter λ in small increments, dλ. In practice, the free energy difference between states A and B is computed by summing over all the intermediate states along the λ variable (Eq. 9).
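Consistent with this description, Eqs. 8 and 9 take the standard form of a linear λ-mixing of the end-state potentials and a sum of window-wise free energy differences:

\[ V_{\lambda} = (1-\lambda)\,V_A + \lambda\,V_B \qquad \text{(8)} \]

\[ \Delta G_{A\rightarrow B} = \sum_{i} \left[ -\,k_B T \,\ln \left\langle \exp\!\left(-\frac{V_{\lambda_{i+1}} - V_{\lambda_i}}{k_B T}\right)\right\rangle_{\lambda_i} \right] \qquad \text{(9)} \]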
This approach of breaking down the transition into multiple smaller steps shares a similarity with another approach used to compute the free energy, namely Thermodynamic Integration (TI) (Kirkwood, 1935). TI is based on a different relation from statistical thermodynamics, in which the free energy difference between two states is obtained by integrating the ensemble-averaged derivative of the mixed potential function with respect to λ (Eq. 10).
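In its usual form, the TI expression (Eq. 10) is:

\[ \Delta G_{A\rightarrow B} = \int_{0}^{1} \left\langle \frac{\partial V(\lambda)}{\partial \lambda} \right\rangle_{\lambda} \, d\lambda \qquad \text{(10)} \]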
In this case, the mixed potential V(λ) is defined as the linear interpolation between the potential functions of the start and end states, and the integral is evaluated numerically over the sampled λ points.
In principle, both FEP and TI should give the same results, as the free energy is a state function.
The relative binding free energy difference ∆∆G between the wild type protein P_W and its mutant P_M can easily be calculated from Eqs. 5 and 6 (for notation see Fig. 6), where ∆G_mut^F and ∆G_mut^B are calculated using the above described FEP or TI methods for the free and bound state, respectively.
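Written out in the notation of Fig. 6, the closure of the thermodynamic cycle gives the standard identity (it follows directly from the state-function property noted above):

\[ \Delta\Delta G_{\mathrm{bind}} = \Delta G_{\mathrm{bind}}(P_M) - \Delta G_{\mathrm{bind}}(P_W) = \Delta G_{\mathrm{mut}}^{B} - \Delta G_{\mathrm{mut}}^{F} \]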
As mentioned before, with the FEP or TI approach, the free energy associated with the two unphysical paths P_W → P_M (mutation in the free state) and P_W(L) → P_M(L) (mutation in the bound state) is calculated by sampling the degrees of freedom of the free protein or the complex using molecular dynamics (MD) or Monte Carlo (MC) methods. At regular intervals, the atoms of the residue which is being mutated are replaced by atoms of the residue which is desired at that place, and the potential energy along the paths is recorded. This quantity, averaged over the complete simulation, gives the free energy change ∆G_mut. However, the convergence of the free energies is the first critical issue in the accurate calculation of the binding free energy. It requires exhaustive sampling of the system, which is much more time consuming than docking or ordinary MD simulations. Moreover, the mutation may cause steric clashes with the neighbouring atoms, which makes the sampling issue even more complicated.
Molecular mechanics poisson-boltzmann surface area (MM-PBSA)
Another approach well suited for estimating the binding free energy of molecular complexes and their mutants is the Molecular Mechanics Poisson-Boltzmann Surface Area (MM-PBSA) method (Srinivasan et al., 1998). The MM-PBSA approach was initially used to study the stability of nucleotide fragments, but also to compute the relative or absolute binding free energy of protein-ligand complexes. Later extensions (see Kollman et al., 2000; Hou et al., 2011) have enabled employing the method for free energy calculation in in silico mutagenesis approaches, which is helpful in making predictions for protein engineering. Unlike FEP and TI, MM-PBSA is an endpoint method that calculates the binding free energy without consideration of any intermediate state.
The MM-PBSA approach is used to calculate the free energy change ∆G_bind upon ligand binding according to Eq. 11. It combines molecular mechanical energies with continuum solvent approaches, and averages the free energy of each state (complex, receptor and ligand) over the sampled conformations in order to calculate the binding free energy.
The single terms are defined by Eqs. 12-14.
where X stands for the complex, receptor or ligand, and T is the absolute temperature. E_MM, G_sol and S are the gas phase molecular mechanics energy, the solvation free energy and the entropy, respectively. E_MM includes several energy terms: E_Internal for bond, angle and dihedral contributions, E_electrostatics for Coulomb interactions and E_vdW for van der Waals energies. G_sol is the sum of the electrostatic solvation energy and the non-polar contribution to the solvation free energy. The electrostatic contribution to the solvation free energy is calculated by solving either the linearized Poisson-Boltzmann (PB) or the Generalized Born (GB) equation, while the non-polar contribution is estimated from the solvent accessible surface area (Connolly, 1983). If the solvation free energies are computed from the Generalized Born (GB) model, the method is also termed MM-GBSA. The last term, TS, includes the solute entropy S, which is usually estimated by normal mode or quasi-harmonic analysis of the snapshots (Srinivasan et al., 1998).
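From the definitions just given, Eqs. 11-14 correspond to the standard MM-PBSA decomposition, written here in its usual form:

\[ \Delta G_{\mathrm{bind}} = \langle G_{\mathrm{complex}} \rangle - \langle G_{\mathrm{receptor}} \rangle - \langle G_{\mathrm{ligand}} \rangle \qquad \text{(11)} \]

\[ G_{X} = E_{\mathrm{MM}} + G_{\mathrm{sol}} - TS \qquad \text{(12)} \]

\[ E_{\mathrm{MM}} = E_{\mathrm{Internal}} + E_{\mathrm{electrostatics}} + E_{\mathrm{vdW}} \qquad \text{(13)} \]

\[ G_{\mathrm{sol}} = G_{\mathrm{PB/GB}} + G_{\mathrm{nonpolar}} \qquad \text{(14)} \]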
Ideally, this approach is based on post-processing molecular dynamics trajectories. The free energy contributions are calculated for each component of the system (protein, ligand and the complex) from snapshots taken from MD trajectories. In order to get the binding free energy of a ligand, two alternatives are used (see Fig. 7). The first is a multi trajectory approach, where we use the trajectories from three separate molecular dynamics simulations (of the complex, the receptor and the ligand). Snapshots of each component, taken from its corresponding simulation trajectory, are used to calculate the free energy terms. Note that this approach takes into account the influence of conformational changes upon binding on the final binding free energy. In the second approach, molecular dynamics simulations are run on the complex only; this reduces noise and allows errors in the simulations to cancel out. Conformational snapshots for the receptor alone and the ligand alone are extracted from the MD simulation of the complex by removing the respective binding partner from the complex. It is therefore assumed that the structure of the receptor and ligand is the same in the bound and the free state, and that no major conformational changes occur upon binding. In this approach, E_Internal cancels out between the complex, protein and ligand, which reduces the noise in the calculations.
In principle, the first approach of running three independent molecular dynamics simulations of the three species is more accurate than the single trajectory approach. In practice, though, the multi trajectory approach does not seem to be used extensively. This is understandable, since there is no practical way to converge the E_MM values for the receptor within a reasonable computational time. Hence, the usual implementation of this method is based on the second approach, where only the MD trajectory of the complex is used to compute the binding free energy.
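As a minimal sketch of the single trajectory variant, assuming the per-snapshot energy terms have already been produced by an external PB or GB tool, the binding free energy then reduces to an average of component differences over the snapshots. The entropy term is omitted here, as is common for relative comparisons, and all names and numbers below are hypothetical.

```python
import numpy as np

def mmpbsa_single_trajectory(complex_terms, receptor_terms, ligand_terms):
    """Single-trajectory MM-PBSA estimate of the binding free energy.

    Each argument is a dict of 1-D NumPy arrays (one value per MD snapshot)
    with keys 'E_elec', 'E_vdw', 'G_polar', 'G_nonpolar'.  E_Internal cancels
    exactly in the single-trajectory scheme, and the -TS term is omitted.
    """
    keys = ("E_elec", "E_vdw", "G_polar", "G_nonpolar")
    delta = {k: np.mean(complex_terms[k] - receptor_terms[k] - ligand_terms[k])
             for k in keys}
    return sum(delta.values()), delta

# Hypothetical per-snapshot energies (kcal/mol) for a three-frame toy trajectory.
complex_e  = {"E_elec": np.array([-250.0, -248.0, -252.0]),
              "E_vdw": np.array([-60.0, -58.0, -61.0]),
              "G_polar": np.array([180.0, 178.0, 182.0]),
              "G_nonpolar": np.array([-12.0, -12.2, -11.8])}
receptor_e = {k: v + 40.0 for k, v in complex_e.items()}   # made-up receptor terms
ligand_e   = {k: v * 0.1 for k, v in complex_e.items()}    # made-up ligand terms

dG, parts = mmpbsa_single_trajectory(complex_e, receptor_e, ligand_e)
print(f"estimated dG_bind = {dG:.1f} kcal/mol")
```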
A fundamental issue associated with the MM-PBSA approach is the entropy calculation. The normal mode analysis (NMA) approach is usually employed to calculate the entropy. However, this approach tends to overestimate the loss of entropy upon ligand binding. In order to get meaningful absolute binding free energies, the entropy contribution must be determined in a consistent fashion. The best approach is to compute the relative binding free energies of a series of similarly sized ligands, where the entropy contribution is expected to cancel out (Massova & Kollman, 1999). The in silico mutants of a protein are likewise not expected to show a significant change in the entropic contribution to binding.
Alanine scanning mutagenesis using MM-PBSA
In the in silico mutagenesis section we discussed the basis of the methodology of performing single point or multiple alanine mutations using computational approaches. Here we will only show how to use alanine scanning mutagenesis coupled with the MM-PBSA approach.
We distinguish between two complementary mutagenesis problems in which the binding free energy is calculated by the MM-PBSA approach. The first refers to the change in binding free energy upon an alanine mutation at any location. This can be solved using the previously described single trajectory MM-PBSA approach on two systems, the wild type and the mutant. Molecular dynamics simulations of the two systems (ligand complexed with the wild type and with the alanine mutant, respectively) are run under the same conditions, and the two trajectories are subjected to the MM-PBSA calculation described above. The change in binding free energy upon mutation is then the difference between the binding free energy of the mutant and that of the wild type. In principle, this approach is accurate and recommended because it samples the conformational changes of the system upon mutation and takes into account their effect on the change in free energy.
The second issue refers to the individual contribution of each residue to the binding. MM-PBSA was first used in this respect in a study where a single MD simulation was used to compute the individual contribution of each residue to the binding in protein-protein complexes. Snapshots of mutants are generated from a single molecular dynamics trajectory of the wild type system. Mutations are performed by removing the side chain atoms beyond the β-carbon of the amino acids under investigation (Massova & Kollman, 1999). These snapshots are then used for binding free energy calculations by the MM-PBSA approach. The approach used in alanine scanning mutagenesis is depicted in Fig. 8.
On the one hand, this is not a very accurate way to get the change in free energy upon alanine mutation. On the other hand, it is very fast, as the mutations can be performed at any location without running a molecular dynamics simulation of the mutant system. Therefore, once we have the MD trajectory of the wild type, a primary scan over all locations can be done in minimal computational time. Since the method uses the MD trajectory of the wild type to create the mutants, it is assumed that the receptor/ligand complex adopts the same geometry upon the mutation. This is a limiting factor, as mutations of residues around the ligand binding site can substantially affect the binding geometry. Nevertheless, it is expected that this approach can estimate the free energy contribution made by a particular residue compared to the wild type system (Moreira et al., 2007). This approach is mainly recommended for finding the hot spots in protein-protein interactions. Hot spots are residues which contribute about 2.0 kcal/mol to the total binding free energy of the system, and they are very important from the protein engineering point of view because they can be used as key points to alter the protein's recognition ability. Alanine scanning also gives an idea about the residues which are close to the binding region but do not contribute substantially to the binding energy. These locations in the protein can be used to make the binding stronger if a residue with favourable properties is placed there. The MM-PBSA approach is quite fast: calculations for hundreds of ligands and hundreds of mutants are feasible using high performance computing facilities. One must mention that these approaches are approximate, and the relevant predictions should be verified using FEP or TI before experimental trials.
Fig. 8. A single trajectory alanine scanning mutagenesis approach used with MM-PBSA or MM-GBSA calculations.
Hybrid Quantum Mechanics/Molecular Mechanics (QM/MM) approaches
A combination of quantum mechanics and molecular mechanics (QM/MM), accompanied by the increasing computational power of modern parallel and vector-parallel platforms, has brought a real breakthrough in the simulation of large systems. Here we describe the current QM/MM strategies used for quantifying the binding energy of complexes involved in molecular recognition.
The seminal contribution made by Warshel and Levitt in 1976 marks the beginning of the QM/MM era (Warshel & Levitt, 1976). In brief, to model large biomolecules one uses a QM method for the active region (originally the substrates and co-factors of an enzymatic reaction), and an MM method for the treatment of the surroundings (e.g., protein and solvent). The QM/MM approaches are relatively new in the field of molecular docking. A few years ago, a combined QM/MM docking approach for the investigation of protein-ligand complexes was presented for the first time, and very promising results were obtained by combining a fast docking technique with a subsequent QM/MM optimization of the docked structure (Beierlein & Clark, 2003). Later, in an attempt to develop a docking algorithm which can predict poses accurately for the cases where the conventional approach fails, QM/MM calculations were integrated in the scoring phase (Cho et al., 2005). A protein-ligand docking study of 40 complexes investigated through QM/MM based docking calculations suggests that the use of fixed charges during docking introduces non-trivial errors. Therefore, polarization of the QM region is suggested to be crucial for docking studies. It was found that including some protein atoms in the QM region, along with the ligand atoms, increases the success rate of QM/MM docking procedures (Cho & Rinaldo, 2009).
There are also examples in the literature where a QM/MM approach was used to calculate the binding free energy. For example, Gräter and coworkers evaluated the performance of a QM/MM approach combined with MM-PBSA to obtain the protein/ligand binding free energy for a set of 47 benzamidine derivatives binding to trypsin. The QM/MM-PBSA method reproduced the experimental binding energies well, with a root-mean-square (RMS) error of 1.2 kcal/mol (Gräter et al., 2005). Later, QM charge densities were used to solve the PB equation in a test case of the binding of balanol and its derivative to protein kinase A (Wang & Wong, 2007). Even though this approach is not yet used very frequently in the field of binding free energy calculation, the availability of packages (e.g. AmberTools 1.5) that facilitate such QM/MM-PBSA calculations on protein/ligand complexes, along with recent developments, is expected to make the QM/MM-PBSA method more user friendly.
Nowadays, all the statistical mechanics techniques that determine free energy differences through sampling, e.g., TI, umbrella sampling, or FEP, are being used in conjunction with semiempirical QM/MM methods (Chung et al., 2009; Tuttle, 2010). The continuous increase in computer power has played an essential role in the development of these methods. QM/MM methods are expected to be especially important in the field of molecular recognition for systems where ions are present, i.e., in the area of metalloproteins.
Case studies
We present a few studies where in silico protein engineering was successfully used to study molecular recognition. We compare our results with other recently published studies on altering the binding specificity of a receptor by using in silico mutagenesis. The citations particular to the studied systems and to the methods mentioned below are omitted here, and can be found in the respective papers.
Engineering of the PA-IIL lectin to understand its sugar preference
Lectins are proteins of non-immune origin that recognize carbohydrates with high specificity and affinity. They belong to a large family of proteins whose unifying feature is the ability to decode the information stored in the glycome. Lectins are involved in a diverse set of biological processes, such as cell-cell recognition, differentiation, signalling or the adhesion of infectious agents to host cells. Many of these functions are connected with the recognition of specific saccharide structures on the cell surface. Carbohydrate-binding proteins play a key role also in host cell recognition by pathogens, as their specific adhesion to the host cell tissue is the first stage of their infectivity. Thus, lectins from pathogens represent a primary target for anti-adhesion therapy, having a great potential in the field of drug design.
The selected study (Adam et al., 2007) is based on the in silico protein engineering of the protein PA-IIL, a lectin from the opportunistic human pathogenic bacterium Pseudomonas aeruginosa, which causes lethal complications in cystic fibrosis patients. PA-IIL is a tetrameric lectin characterized by an unusually high (micromolar) affinity to L-fucose, which is atypical in protein-carbohydrate binding. Lectins homologous to PA-IIL, later identified in other microorganisms, display high sequence and structure similarity, but strongly differ from each other in terms of sugar preference. For example, the lectin RS-IIL from Ralstonia solanacearum strongly prefers D-mannose over L-fucose. Three amino acid residues, at positions 22-23-24, were identified as the key residues that determine the relationship between structure and binding specificity for these lectins, and were named the "specificity binding loop". Given the central relevance of this loop, in silico approaches were applied to understand its precise role in the sugar binding preference. The dimeric structure of PA-IIL was used as a template for the homology modeling of three single-point mutants (S22A, S23A, and G24N, matching the amino acids in RS-IIL) of PA-IIL using our in house developed software TRITON, interfaced with MODELLER. In order to understand the role of a particular mutation with respect to sugar preference, different monosaccharides were docked into PA-IIL and its mutants using AutoDock3 and DOCK. Since PA-IIL has two Ca2+ ions in the binding site, which mediate the sugar binding, the effect of their charge on the docking energy was also evaluated. Formal charges on Ca2+ of 1.8 and 2.0 gave results in good agreement with experiment. The value of 1.8 was chosen as a compromise, because Ca2+ surrounded by several negatively charged oxygen atoms adopts a smaller charge in reality (about 1.5, see Mitchell et al., 2005).
The docking simulations produced a series of binding energies for the possible complexes of saccharides bound to PA-IIL and its mutants. The results can be seen in Table 1.
Overall, the docking results from AutoDock3 confirm that PA-IIL has a higher preference for Me-α-L-Fuc (-10.7 kcal/mol) over Me-α-D-Man (-9.23 kcal/mol), and that the sugar preference switches from Me-α-L-Fuc (-9.26 kcal/mol) to Me-α-D-Man (-10.47 kcal/mol) upon the S22A mutation. Docking into the two other mutants, S23A and G24N, also shows an order of preference similar to what is observed experimentally. Qualitatively, DOCK overestimates the binding energy in more cases than AutoDock3. Compared to experimental results, the AutoDock3 results reproduced the experimental order of saccharide preference to a large extent. The authors conclude that automated docking methods are capable of identifying preference trends, and that using in silico approaches in pre-planning in vitro mutations can therefore help to identify the best potential candidates for mutagenesis.
Double mutant avian H5N1 virus hemagglutinin
The study (Das et al., 2009) shows how the free energy perturbation approach can be used to compute the binding affinity of hemagglutinin (HA) to sialylated glycan epitopes. A typical influenza infection, caused by influenza A viruses (H1N1, H2N2, H3N2 and H5N1 subtypes), requires binding of the viral surface glycoprotein hemagglutinin (HA) to sialylated glycans present on the host cell surface in order to initiate the infection. A change in the binding specificity of the HAs from α-2,3-linked (common in avian hosts) to α-2,6-linked (common in humans) sialylated glycans is expected to facilitate transmission of the virus from birds to humans. Therefore, molecular recognition of the particular glycans, considered a key point for such infections, was inspected using mutagenesis studies.
HAs are homotrimers, with each monomer comprising two subunits. The Receptor Binding Domain (RBD) of HAs, formed essentially by three loops, requires at minimum two mutations to switch receptor specificity from avian to human. It is also known that hemagglutinin H1 changes its specificity from human to avian epitopes after two mutations (D190E and D225E). The authors were interested in finding out whether a double mutation in hemagglutinin H5 enables it to recognize the human analog, as is seen for the H1 HA subtype.
The authors used in silico approaches to interpret and predict the critical mutations responsible for HA-receptor binding. In order to achieve this, the change in the relative binding affinity of H5 HA to sialylated glycans upon mutation was calculated through free energy perturbation approaches. The change in binding energy due to a mutation is evaluated using a thermodynamic cycle (see Fig. 6), where ∆∆G_bind is calculated from the free energy change caused by the same mutation in the bound and free states, respectively. The simulation was performed over 22 λ points, where each window was simulated for 0.3 ns; a total of 66 ns of simulation was thus performed for each mutation in order to get proper sampling. The authors state that before analyzing the effect of novel mutations on the H5 HA receptor, they validated their protocol by comparing the calculated binding affinities against experimental data for other mutants.
The authors conclude that the FEP calculations are in fairly good agreement with the glycan array data, which were available for only a few H5 HA mutants. Most of the evaluated mutations resulted either in no change or in a weak binding affinity to α-2,6-linked sialylated glycans compared to α-2,3-linked ones. They identified a double mutation (V135S and A138S) in H5 HA that enhances the specificity towards α-2,6-linked sialylated glycans: ∆∆G_bind = -2.56 ± 0.73 kcal/mol for the human receptor, compared to ∆∆G_bind = 0.84 ± 1.02 kcal/mol for the avian receptor. To validate the results, the authors repeated the calculations for the same mutants on H5 HA obtained from a different isolate, which also revealed a substantial increase in the binding affinity for the human receptor. In order to understand the forces behind the recognition, they performed a free energy component analysis and found that electrostatic interactions are the driving force for the change in binding specificity upon mutation.
Thus, this study used computational approaches to provide valuable insight into the molecular recognition of glycans. This is another example where in silico protein engineering approaches were used as a complementary tool to interpret and understand molecular recognition.
Structural basis of NR2B-selective antagonist recognition
The third example (Mony et al., 2009) gives a detailed characterization of the ifenprodil binding site in the NMDA receptor (NMDAR) by both in silico and in vitro approaches. The NMDA receptor is an ionotropic glutamate receptor, which serves as the predominant molecular device for controlling synaptic plasticity and memory function. Therefore, controlled activation of the NMDA receptor is of great interest as a potential therapeutic target.
In order to stop receptor overactivation, several NMDAR competitive antagonists were developed in the 1980s. However, these compounds failed in clinical trials because of their inability to discriminate between the various NMDAR subtypes, which caused generalized inhibition. In the study we report here, the authors used the most promising NMDAR antagonist at that time, ifenprodil, and its derivatives, in order to characterize the ifenprodil binding site using both computational and experimental approaches. The ifenprodil binding site on NMDAR was mapped onto the NR2B subunit's N-terminal domain (NR2B NTD), and the authors were able to describe the structural determinants responsible for the high-affinity binding of ifenprodil to the NR2B subunit.
A homology modeled structure of the NR2B NTD was generated using the sequence-to-structure alignment functionality within the comparative modeling tool MODELLER 9.0. Ifenprodil was docked into the modeled structure using LigandFit. During the docking, the structure of the protein was kept rigid and 20 conformers of the ligand were subjected to energy minimization in the molecular modeling tool CHARMM. A 1 ns MD simulation of the minimized structures was used to generate the pharmacophore model of the system. In this case the in silico approach was used to get a clear picture of the system before extensive experimental validation was carried out by site directed mutagenesis.
Docking showed that ifenprodil adopts a unique and well defined orientation in the central crevice of the NR2B NTD. Based on the in silico model, site directed mutagenesis showed that five NR2B NTD residues (Thr76, Asp77, Asp206, Tyr231, Val262) are essential for high affinity ifenprodil binding and receptor inhibition. The proposed model of ifenprodil binding to the NR2B NTD shared some similarities with a previously proposed model, which lacked experimental validation (Marinelli et al., 2007). The authors suggest that the differences between the models could be caused by the use of different sequence alignments for the loops situated in the binding cleft. However, this study showed that a suitable combination of in silico approaches can provide a good picture of what to expect before starting any kind of experiment.
Concluding remarks
We have shown in this chapter how in silico protein engineering can be used in the field of molecular recognition. The particular steps one has to go through when using these techniques were described. They comprise 3D structure determination, in silico mutagenesis, docking as a first approximation of the binding affinity, and, finally, accurate calculation of the binding free energy.
It should be highlighted that, in many cases, in silico approaches provide information complementary to that obtained by experimental approaches. A number of such methods have been implemented and are available in specialized software packages. Therefore users can easily test the different tools and select the ones able to perform well for the particular system they are interested in. We have also provided a brief list of the most frequently used computer programs for the particular tasks described. It is probably fair to say that in silico approaches are mostly useful for the visualization and intelligent design of protein engineering projects. As computer power increases and software products become more and more sophisticated, it is highly probable that in silico protein engineering of proteins recognizing small molecules will become an even more useful tool in the future.
"Chemistry",
"Biology",
"Engineering"
] |
Interest linkage models between new farmers and small farmers: Entrepreneurial organization form perspective
Improving the interest linkage models between new farmers and small farmers is an important measure for realizing the organic connection between small farmers and modern agricultural development. Based on survey data from 572 new farmers in 16 provinces in China, this study uses an ordered probit model to empirically analyze the impact of entrepreneurial organization form on the interest linkage models between new farmers and small farmers. The results show that: (1) The choice among different entrepreneurial organization forms, namely individual operation, cooperative operation, partner operation and company operation, significantly affects the degree of interest linkage and, in turn, the linkage model adopted; partner operation and company operation significantly improve the tightness of the interest linkage between new farmers and small farmers. (2) The more stable the entrepreneurial organization form, the closer the interest linkage and the more significant the impact on the interest linkage models; this effect remains significant after accounting for potential endogeneity and in robustness tests. (3) Further analysis also reveals significant regional and group differences in the impact of entrepreneurial organization form on the interest linkage models of new farmers and small farmers: the impact is more significant in the western region than in the eastern and central regions, and government entrepreneurship support policies significantly strengthen the interest linkage. The results of this paper provide a valuable reference for exploring the path of agricultural modernization under the conditions of a "big country with small farmers".
Introduction
Agriculture is the foundation of the country. The report of the 19th National Congress of the Communist Party of China and the central government's No. 1 Document have repeatedly focused on the development of new farmers, such as family farms, farmer cooperatives, leading agricultural enterprises, and individuals who have returned to rural areas to start businesses, encouraging new farmers and small farmers to build a "community of interests". After years of policy implementation, China has explored a variety of interest linkage models between new farmers and small farmers, including new order models, shareholding cooperation models, service-driven models, and multi-level integration models [1][2][3]. Nevertheless, the current models exhibit several problems, notably loose connections and a lack of stability in contractual relationships. Therefore, it is necessary to guide the formation of a closer relationship of interests between new farmers and small farmers, and to promote the effective connection between small farmers and the development of modern agriculture [4].
According to the difference in the closeness of the interest linkage between new farmers and small farmers, the interest connection models can be divided into loose, semi-close and tight types. At present, however, due to the lack of an effective connection mechanism, the interests of new farmers and small farmers in China are mainly loosely and semi-closely connected. There are still many problems in the connection of interests between them: an interest adjustment mechanism is lacking and the connection is relatively loose; the construction of interest protection mechanisms lags behind and contractual relationships are unstable; the interest distribution mechanism is unreasonable; and there is over-regulation by the government [5][6][7]. Aiming at these problems, studies have discussed them from the two dimensions of interest competition and relationship coordination. It is suggested that the interests of both parties should be taken into account, that the awareness of social responsibility among new farmers should be enhanced, and that the voice of small farmers should be strengthened, so that stronger actors can support and strengthen weaker and smaller ones [8,9].
As research progresses, the organizational form of new farmers' entrepreneurship has become a prominent area of investigation. Studies have found that the form of entrepreneurial organization has a scale effect on the closeness of interest linkages, and that the rationality of the choice of entrepreneurial organization form is directly related to the economic benefits obtained by new farmers [10,11]. Different entrepreneurial organization forms have unique advantages in different agricultural fields and production processes, and an appropriate entrepreneurial organization form can promote an effective interest connection between new farmers and small farmers [12][13][14]. However, existing studies have not further explored the impact of entrepreneurial organization form on the interest linkage models of new farmers and small farmers, nor have they compared the impacts of different entrepreneurial organization forms, which affects the formulation and implementation of policies.
Under the national conditions of a "big country with small farmers", there is a compelling need to comprehensively examine the influence of entrepreneurial organization form on the mode of interest connection between new farmers and small farmers. Carrying out this study is conducive to promoting the organic connection between small farmers and modern agricultural development, promoting the establishment of a long-term cooperation mechanism for agricultural management entities, and improving China's agricultural modernization system. In addition, it is also conducive to gradually breaking down the dual structure of rich and poor, narrowing the income gap, continuously improving the income level of low-income small farmers, and more actively promoting common prosperity. At present, the interest linkage tends to be stable, but the linkage mechanism still needs to be improved. Analyzing the issue from the perspective of new farmers' entrepreneurial organization forms has far-reaching significance for building a more effective interest linkage mechanism, realizing the organic connection between small farmers and modern agriculture, and promoting the high-quality development of agriculture and rural areas [15][16][17][18][19].
Therefore, based on sample survey data from 572 new farmers in 16 provinces in China, this study theoretically and empirically analyzes the impact of entrepreneurial organization form on the interest linkage models between new farmers and small farmers, and explores the regional and group differences in this influence. The research can provide a theoretical explanation for the innovation of new farmers' entrepreneurial organization forms, and a reference for the choice of interest connection models. It is of great practical significance for improving the closeness of the interest connection between new farmers and small farmers, stabilizing production cooperation relationships, and promoting the construction of a system of interest connection models between new farmers and small farmers.
Interest linkage model comparison between new farmers and small farmers
New farmers are operators and managers engaged in modern agricultural production. Guiding new farmers and small farmers to establish a stable and effective interest linkage mechanism is crucial to increasing small farmers' income [20,21]. The key to building a stable and effective interest linkage model is to reduce transaction costs. Generally speaking, the higher the transaction costs, the more stable the interest linkage model needs to be in order to maintain the cooperative relationship between new farmers and small farmers [22]. Studies have found that information asymmetry, bounded rationality, asset specificity, opportunism, etc. all affect the transaction costs of the interest linkage of new farmers and small farmers, thereby affecting the stability and effectiveness of the interest linkage model [23][24][25]. In addition, the entrepreneurial organization form of new farmers is also an important factor that affects the pattern of interest linkage between them.
According to the three dimensions of new farmers' entrepreneurial years and scale, employment and cooperative relationships, and local government entrepreneurship support policies, the interest linkage models of new farmers and small farmers can be divided into three different models, that is, the loose type, the semi-close type and the tight type. Table 1 shows the differences, advantages and disadvantages of the different models.
The choice of new farmer's entrepreneurial organization form and interest linkage models
Entrepreneurial organization form is a framework formed by the division of labor and collaboration, which reflects the production and cooperation relationships among organization members [26]. According to the way of starting a business, the entrepreneurial organization forms of new farmers can be divided into four types: individual operation, cooperative operation, partner operation and company operation. Different entrepreneurial organization forms have different impacts on the stability of the interest linkage between new farmers and small farmers, which in turn affects new farmers' choice among the interest linkage models [27]. Fig 1 shows the relationship between the new farmers' entrepreneurial organization form and the interest linkage of new farmers and small farmers.
Individual operation form. Individual operation is a relatively simple form of entrepreneurial organization. New farmers engage in agricultural production and self-management on an individual or family basis, the production cooperation period between new farmers and small farmers is relatively short, and their interests are not closely connected, which makes this form suitable for small-scale agricultural production. In this form, cooperation between new farmers and small farmers is mostly realized through limited market transactions, lacking a stable and effective foundation, so the interest linkage is relatively loose.
Cooperative operation form. Cooperative operation is a form of entrepreneurial organization in which new farmers mainly provide production management services and information consulting services. Under cooperative operation, new farmers and small farmers sign service contracts to clarify their respective rights and obligations, and build a group whose size is based on transaction frequency. It is easy to form a long-term and stable cooperative relationship, and interests are relatively closely linked [28,29].
Partner operation form. Partner operation is an organizational model in which new farmers and small farmers work together and share benefits. Under partner operation, new farmers and small farmers are co-owners of the organization, their cooperation is relatively stable, usually based on long-term and stable cooperation agreements, and their interests are closely linked [30].
Company operation form. Company operation is a cooperation model with a high degree of marketization. Under company operation, new farmers and small farmers have a stable employment relationship or a long-term cooperative relationship. The two sides have relatively clear agreements on the division of rights and responsibilities, risks and benefits, and their interests are more closely linked [31,32].
Data sources
The empirical data for this study come from a sample survey of new farmers in 16 provinces in China conducted in 2021, covering Zhejiang, Anhui, Gansu, Guangdong, Guangxi, Hainan, Hebei, Henan, Hubei, Hunan, Jiangsu, Jiangxi, Shanghai, Tianjin, Chongqing and Sichuan. All the surveyed new farmers are adult household heads. The survey adopted a stratified random sampling method. In each province, two agricultural counties were selected; within each county, two townships with a primary focus on agriculture were chosen, and approximately 10 representative new farmers were selected from each township. To ensure the representativeness of the sample, new farmers were primarily selected by random sampling from lists registered with the County Agricultural Department or the Industrial and Commercial Bureau, including family farms, cooperatives, and agricultural entrepreneurs. A total of 32 counties, 64 towns, and 585 new farmers were surveyed. After excluding samples with missing key information, 572 valid samples were obtained. The survey was conducted in the form of questionnaires. During the survey, the staff of the local township agricultural station first contacted the new farmers, and the researchers then conducted the questionnaire survey. The main contents of the survey include basic information on the new farmers (e.g. personal and family information), their entrepreneurial status (e.g. years of entrepreneurship, entrepreneurial scale, main business), the status of cooperation between new farmers and small farmers (e.g. cooperation method, purpose and benefits), as well as relevant local government support policies for the interest linkage of new farmers and small farmers. In addition, this study also collects data on regional socio-economic development from the Agricultural Bureau and Statistics Bureau of the localities where the new farmers are based.
According to the previous analysis, this study divides the interest linkage models of new farmers and small farmers into three types: loose, semi-close and tight. Table 2 shows the cross-tabulation of the new farmers' entrepreneurial organization forms and interest linkage models. From the perspective of the interest linkage models, the interest linkage between new farmers and small farmers is still at a relatively low level, with the loose type accounting for the largest proportion.
Variable description and statistics
Explained variable. According to the classification of the interest linkage models of new farmers and small farmers, combined with the actual research situation, the loose, semi-close and tight types are assigned the values 1, 2 and 3, respectively. New farmers differ significantly in their choice of interest linkage model, and the proportion choosing the loose type is relatively high.
Explanatory variables. These are the four types of entrepreneurial organization form. In this study, individual operation, cooperative operation, partner operation and company operation are each coded as 1 when chosen and 0 otherwise. Different entrepreneurial organization forms reflect different production and organizational relationships, and the proportions of new farmers choosing the different entrepreneurial organization forms are relatively close.
Control variables. Referring to existing studies [1,21,33], entrepreneurial support, entrepreneurial experience, entrepreneurial purpose, the characteristics of the entrepreneur, and production characteristics are the five main factors that affect entrepreneurial behavior among new farmers and are likely to influence the interest linkage models between new farmers and small farmers. Based on the survey data, this study controls for variables such as the effect of entrepreneurial support policies, the difficulty of obtaining entrepreneurial support, the entrepreneurial period, etc. along these five dimensions. Additionally, provincial dummy variables are controlled for in the empirical analysis.
Variable descriptions and descriptive statistics are shown in Table 3.
Empirical model construction
In order to quantitatively analyze the impact of entrepreneurial organization form on the interest linkage models of new farmers and small farmers, this study chooses the ordered probit model for empirical analysis. The explained variable Y_i is the interest linkage model, an ordered value in {1, 2, 3}, where 1 = loose type, 2 = semi-close type and 3 = tight type; Y*_i is the latent variable underlying the interest linkage model of new farmers and small farmers. X_i is the vector of factors influencing the interest linkage models, which mainly includes the four key explanatory variables of individual operation, cooperative operation, partner operation and company operation, as well as other factors related to entrepreneurial support, entrepreneurial experience, entrepreneurial purpose, entrepreneurial characteristics and production characteristics. In the estimated equation, F(·) is the cumulative distribution function of the standard normal distribution, β is the parameter vector to be estimated, ε_i is a random error term following the standard normal distribution, and μ_j (j = 1, 2) are the thresholds.
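Written out explicitly, the specification just described takes the standard ordered probit form:

\[ Y_i^{*} = X_i \beta + \varepsilon_i, \qquad \varepsilon_i \sim N(0,1) \]

\[ Y_i = \begin{cases} 1 \ (\text{loose}), & Y_i^{*} \le \mu_1 \\ 2 \ (\text{semi-close}), & \mu_1 < Y_i^{*} \le \mu_2 \\ 3 \ (\text{tight}), & Y_i^{*} > \mu_2 \end{cases} \]

\[ P(Y_i \le j \mid X_i) = F(\mu_j - X_i \beta), \qquad j = 1, 2 \]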
Benchmark regression results
The empirical estimation results are shown in Table 4. In Table 4, Model 1, Model 2, Model 3 and Model 4 are the estimation results for the different entrepreneurial organization forms. From the estimates, all four entrepreneurial organization forms, individual operation, cooperative operation, partner operation and company operation, have a significant impact on the interest linkage models. The influence coefficients are -0.399, -0.205, 0.414 and 0.446, respectively. It can be seen that as the stability of the entrepreneurial organization form increases, the influence coefficient turns from negative to positive, and the interests of new farmers and small farmers become more and more closely linked.
In particular, the regression coefficients of individual operation and cooperative operation on the interest linkage models between new farmers and small farmers are negative, indicating that new farmers are more likely to opt for the loose interest linkage model under these two entrepreneurial organization forms. Under the loose interest linkage model, the cooperative relationship between new farmers and small farmers is relatively free, which minimizes the cost of cooperation. In contrast, the coefficients of partner operation and company operation are positive, indicating that under these two entrepreneurial organization forms new farmers are more inclined to choose the tighter interest linkage models. Under these forms, the cooperative relationship between the two sides is relatively stable, and cooperation costs are also lower.
Among the control variables, entrepreneurial purpose significantly affects the interest linkage models between new farmers and small farmers. Both collaborating to reduce production costs and collaborating to improve product quality have a significant positive impact on the interest linkage models, indicating that new farmers have a clear purpose when choosing the interest linkage model. In addition, the government's entrepreneurship support policy has a positive impact on the interest linkage models; however, the impact is not significant overall and is significant only under the company operation form. Among the characteristics of entrepreneurs, the number of college graduates or above has a significant positive impact, indicating that good education of new farmers can strengthen the interest linkage. Among the production characteristics, new farmers who mainly focus on planting and agricultural product processing are more inclined to choose the tight interest linkage model.
Endogeneity test
Considering the endogeneity problem that may be caused by omitted-variable bias, this study uses the instrumental variable method for re-estimation. The selected instrumental variable is the convenience of transportation in the area where the new farmers are located, expressed as road mileage per square kilometer. The data come from the National Bureau of Statistics. Generally speaking, the accessibility of the regional transportation system has a substantial impact on the choice of entrepreneurial organization form among new farmers: the more convenient the transportation, the easier communication and cooperation among organization members. At the same time, the convenience of regional transportation, as a geographical factor, does not directly affect the interest linkage models between new farmers and small farmers.
In this study, IV-Probit estimation was carried out, and the parameter estimation results are shown in Table 5. The weak instrumental variable identification tests show that the p-values of the AR test under the four entrepreneurial organization forms are all significant at the 10% level, and the Wald chi-square values are significant at the 1% level, so the instrumental variable selected in this study can be considered reasonable. From the estimation results of the model, after accounting for possible endogeneity, the impact of entrepreneurial organization form on the interest linkage between new farmers and small farmers remains robust.
Robustness test
In this study, the type of new farmer is used as a proxy variable for entrepreneurial organization form, and an ordered logit (ologit) model is used to re-estimate the impact of entrepreneurial organization form on the interest linkage models. Compared with the estimation results of the IV-Probit model, the significance and direction of the effects of the key variables remain stable, so the research results are highly reliable (see Table 6).
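For readers who wish to reproduce this type of estimation, a minimal sketch using the OrderedModel class from statsmodels is given below. The file and variable names are placeholders, since the survey data used in the paper are not public, and the covariate list is only illustrative of the specification described above.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# df is assumed to hold the survey data: the outcome 'linkage' coded
# 1 = loose, 2 = semi-close, 3 = tight, dummies for the entrepreneurial
# organization forms, and a subset of the control variables.
df = pd.read_csv("new_farmers_survey.csv")   # hypothetical file name

y = df["linkage"]
X = df[["partner_operation", "company_operation",
        "policy_support", "entrepreneurial_years", "college_graduates"]]

# Benchmark specification: ordered probit
oprobit = OrderedModel(y, X, distr="probit").fit(method="bfgs", disp=False)
print(oprobit.summary())

# Robustness check: ordered logit (the 'ologit' re-estimation)
ologit = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(ologit.summary())
```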
Heterogeneity test
Considering differences in regional economic development levels and in government entrepreneurship support policies, this study analyzes regional heterogeneity and the heterogeneity of government entrepreneurship support on the basis of the benchmark model.
Regional heterogeneity. In China, the problem of unbalanced regional economic development is prominent, and under different levels of economic development there may be differences in the degree of interest linkage between new farmers and small farmers. Therefore, this study divides the sample into the eastern, central and western regions, and carries out sub-sample estimations. The results are shown in Table 7. It can be seen from the estimates that the impact of entrepreneurial organization form on the interest linkage models is more significant in the western region, partly significant in the central region, and not significant in the eastern region. One possible reason is that the economy in the eastern region is more diversified and the differences in the degree of interest linkage are small, so the impact of entrepreneurial organization forms on the interest linkages is relatively limited; in the western region, the opposite is true, and the degree of interest linkage varies significantly under different entrepreneurial organization forms, showing a relatively large impact. The survey also found that in the western region, due to the relatively lower degree of marketization and the larger number of small farmers, new farmers take the interests of small farmers into consideration to a greater extent when choosing their entrepreneurial organization form, while in the eastern region, due to extensive mechanization, new farmers give less consideration to small farmers' interests in their choice of entrepreneurial organization form.
Heterogeneity of government entrepreneurship support policies. Examining how the linkage between new farmers and small farmers differs under different entrepreneurship support policies helps to improve those policies and to cultivate the new-farmer group. This study compares the impact of entrepreneurial organization form on the interest linkage models between new farmers and small farmers with and without government entrepreneurship support policies. The test results are shown in Table 8. Compared with the case of no entrepreneurship policy support, government entrepreneurship support significantly strengthens this effect. To stabilize the interest linkage between new farmers and small farmers, it is therefore necessary for the government to provide corresponding entrepreneurial support.
Conclusion and discussion
Based on sample survey data from 572 new farmers in 16 Chinese provinces, this study compares the differences between new farmers and small farmers in the interest linkage models, and analyzes the impact of entrepreneurial organization form on the interest linkage models between new farmers and small farmers. The main conclusions are as follows: the organizational form of entrepreneurship has a significant impact on the interest linkage models between new farmers and small farmers. Compared with individual operation and cooperative operation, partner operation and company operation have a more significant positive impact on the interest linkage models. The impact is more significant in the western region than in the eastern and central regions. In addition, the government's entrepreneurship support policy significantly strengthens this impact, and new farmers with government policy support are more closely interest-linked with small farmers. To improve the mechanism for the interest linkage of new farmers and small farmers, and to explore the path of agricultural modernization under the "big country with small farmers", the following suggestions are put forward. Firstly, strengthening organizational construction is crucial to enhance the interest linkage between new farmers and small farmers. Because of high transaction costs between new farmers and small farmers, unstable contractual relationships and unreasonable distribution of benefits, the degree of organization of small farmers is still relatively low at present, and the interest linkage between new farmers and small farmers in China is still relatively loose. To strengthen the interest linkage between new farmers and small farmers, it is necessary to establish a cooperation platform and information-sharing mechanism, provide effective information exchange, market connection and technical support, and promote exchanges and cooperation between new farmers and small farmers. Additionally, it is necessary to further improve the contractual relationship and benefit-sharing mechanism, continuously reduce cooperation costs, and improve the stability and effectiveness of cooperation between new farmers and small farmers.
Secondly, it is essential to improve the support policies for new farmers' entrepreneurship, encouraging the establishment of long-term and stable cooperative relationships between new farmers and small-scale farmers. Specifically, this can be achieved by enhancing policy support for new farmers' entrepreneurship in the areas of fiscal support, finance, taxation and technological assistance. Encouraging the development of entrepreneurial organizational forms such as partner operation and company operation can help lower the technical barriers and financial costs associated with new farmers' entrepreneurship, thus providing a better policy environment and external conditions for their entrepreneurial endeavors. Additionally, it is important to formulate and improve policy incentives according to the extent to which new farmers drive small farmers. These incentives may include tax relief, preferential loans and entrepreneurial subsidies, so as to effectively protect the rights and interests of small farmers, consolidate the "community of interests", and encourage the establishment of long-term and stable cooperation between new farmers and small farmers. The research presented in this paper has important implications for developing countries with a large number of small farmers, especially those transitioning from traditional to modern agriculture. Existing research primarily focuses on the impact of entrepreneurship on the welfare of new farmers [34][35][36][37], but pays less attention to the relationship between new farmers' entrepreneurship and small farmers' interest linkage in developing countries. A few scholars, however, have paid attention to the interest linkage between new farmers and small farmers. For example, Liverpool-Tasie and colleagues, through a comparative analysis of 202 similar studies, argue that the interest linkage between small and medium enterprises and small farmers is stronger, and that informal collaborations can effectively solve the problem of resource shortage for small farmers and make up for the shortcomings of the market [38]. This result is also supported by the present study. Entrepreneurship among new farmers represents an upgrade and transformation of the farmer group, serving as an important pathway towards achieving agricultural modernization. However, in the face of a large number of small farmers, what kinds of entrepreneurial models new farmers choose and how to strengthen their linkage with small farmers are still questions to be answered. The findings of this study suggest that entrepreneurial forms such as partner operation and company operation are more conducive to building a close interest connection with small farmers. This provides valuable insights for future new-farmer cultivation policies.
In addition, it should be noted that the interest linkage between new farmers and small farmers is a dynamic and evolving process, with varying applicability across different time periods, countries and regions. Owing to the impact of the COVID-19 pandemic, this study relied on cross-sectional data for analysis. In the future, it is necessary to deepen the relevant research and strengthen the evaluation of policies related to the interest linkage between new farmers and small farmers.
Fig 1. The relationship between entrepreneurial organization form and the interest linkage of new farmers and small farmers. Source: compiled by the author. https://doi.org/10.1371/journal.pone.0292242.g001
Table 2. The entrepreneurial organization form and interest linkage model of new farmers.
71%, and the semi-close type accounts for 35.31%. From the perspective of differences in entrepreneurial organization forms, individual operation and cooperative operation mainly adopt a loose interest-linkage model, accounting for 59.48% and 57.41%, respectively, whereas partner operation and company operation mainly adopt the semi-close interest-linkage model, accounting for 43.27% and 49.59%, respectively. This is consistent with the theoretical analysis in Section 2.2. | 6,011.6 | 2023-10-03T00:00:00.000 | [
"Agricultural and Food Sciences",
"Economics",
"Business"
] |
Facile synthesis and emission enhancement in NaLuF4 upconversion nano/micro-crystals via Y3+ doping
A series of Y3+-absent/doped NaLuF4:Yb3+, Tm3+ nano/micro-crystals were prepared via a hydrothermal process with the assistance of citric acid. Cubic nanospheres, hexagonal microdisks, and hexagonal microprisms can be achieved by simply adjusting the reaction temperature. The effect of Y3+ doping on the morphology and upconversion (UC) emission of the as-prepared samples were systematically investigated. Compared to their Y3+-free counterpart, the integrated spectral intensities in the range of 445–495 nm from α-, β-, and α/β-mixed NaLuF4:Yb3+, Tm3+ crystals with 40 mol% Y3+ doping are increased by 9.7, 4.4, and 24.3 times, respectively; red UC luminescence intensities in the range of 630–725 nm are enhanced by 4.6, 2.4, and 24.9 times, respectively. It is proposed that the increased UC emission intensity is mainly ascribed to the deformation of crystal lattice, due to the electron cloud distortion in host lattice after Y3+ doping. This paper provides a facile route to achieve nano/micro-structures with intense UC luminescence, which may have potential applications in optoelectronic devices.
Optical upconversion (UC) is an anti-Stokes process that two or more low-energy photons can be converted into a single high-energy photon 1 . Rare-earth (RE) doped UC materials show many advantages, including high photochemical stability, low toxicity and long luminescence lifetimes [2][3][4][5][6] , which may have great potential applications in fields such as biological imaging, multi-dimensional displays, optical temperature sensors and solar cells [7][8][9][10] . However, compared to downconversion materials, the main shortcoming of UC materials is their low luminescence efficiency. Thus, an effective strategy to enhance the UC luminescence intensity is urgently needed. In recent years, many kinds of methods have been used to achieve efficient UC luminescence. For instance, Zhao et al. reported the enhanced red UC emission in Mn 2+ doped NaYF 4 : Yb/Er nanoparticles, due to the efficient energy transfer between Er 3+ and Mn 2+ 11 . Tan et al. demonstrated NaYbF 4 :Tm 3+ and NaYbF 4 :Er 3+ nanocrystals with the enhanced red UC luminescence, which is attributed to the cross relaxation effect among the activators at high activator content 12 . As is known, the UC emission of RE doped materials is remarkably affected by the crystal field symmetry around activators 13 , and the asymmetric environment of activators can result in the emission enhancement. For instance, Zhao's group reported Li + doped GdF 3 :Yb 3+ , Er 3+ nanocrystals with the enhanced red UC luminescence, which was caused by the decrease of local crystal field symmetry around activators after Li + doping 14 . Rai et al. demonstrated the enhanced green UC emission in Li + doped Y 2 O 3 :Yb 3+ /Er 3+ nanocrystals 15 . Yin et al. reported Mo 3+ doped NaYF 4 : Yb/Er nanocrystals with 6 and 8 times enhancement of green and red UC emissions, due to the lattice distortion after Mo 3+ doping 16 . In order to obtain efficient UC emission, the selection of excellent host material is essential. With the similar crystalline plane, NaYF 4 and NaLuF 4 have been considered as the outstanding host matrix for UC processes, due to their high thermal stability, low phonon energy and high refractive index [17][18][19][20][21] . As is known, the ionic radius of Y 3+ (0.89 Å) is larger than that of Lu 3+ (0.85 Å), thus Y 3+ doping may cause the expansion of NaLuF 4 host lattice, leading to the distortion of local symmetry around activators. Consequently, Y 3+ doping is an effective approach for enhancing the UC emission intensity in NaLuF 4 -based system. In addition, due to the small difference in ionic radius between Y 3+ and Lu 3+ , the phase transformation does not occur during introducing Y 3+ in NaLuF 4 crystals, which would be favorable to maintain the stability of crystal structure. However, there is no report on the increase of UC luminescence intensity in NaLuF 4 -based system via Y 3+ doping.
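As a quick sanity check on the anti-Stokes picture described above, the energy of a 980 nm pump photon can be compared with that of the blue Tm3+ emission; the wavelengths used below are round illustrative numbers, not fitted values from this work.

```python
h, c, e = 6.62607015e-34, 2.99792458e8, 1.602176634e-19  # SI constants

E_pump = h * c / 980e-9 / e    # ~1.27 eV per 980 nm pump photon
E_blue = h * c / 475e-9 / e    # ~2.61 eV for blue emission near 475 nm
print(E_blue / E_pump)         # ~2.1 -> at least two pump photons per blue photon
```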
In this paper, in order to obtain different structures of NaLuF 4 nano/micro-crystals before Y 3+ doping, the influence of reaction temperature on the phase of Y 3+ -absent NaLuF 4 crystals is studied. It is found that cubic nanospheres, hexagonal microdisks and hexagonal microprisms can be obtained as the reaction temperature is increased. α-, β-, and α/β-mixed NaLuF 4 :Yb 3+ , Tm 3+ crystals with Y 3+ doping show a significant enhancement of the UC emissions relative to the Y 3+ -absent samples under 980 nm excitation at room temperature. The proposed mechanisms of the UC emission enhancement and of the shape evolution upon introducing Y 3+ are presented.
Results and Discussion
Phase and morphology. First, in order to obtain diverse structures of NaLuF 4 nano/micro-crystals before Y 3+ doping, the influence of reaction temperature on the crystal structure of Y 3+ -absent NaLuF 4 crystals is studied. The XRD patterns and the corresponding SEM images of Y 3+ -absent NaLuF 4 :Yb 3+ , Tm 3+ nano/micro-crystals prepared at different reaction temperatures for 12 h are displayed in Figs 1 and 2, respectively. As can be seen from Fig. 1, pure α-NaLuF 4 (JCPDS 27-0725) is formed at 110 °C. The related SEM image (Fig. 2a) shows that the sample is composed of a large number of small cubic nanospheres with an average diameter of 17 nm. At a higher reaction temperature of 130 °C, α/β-mixed NaLuF 4 appears in the XRD pattern, indicating that the crystals partially change from the α to the β phase. Correspondingly, the SEM image of Fig. 2b exhibits two distinct particle morphologies, containing small α-NaLuF 4 nanospheres and large β-NaLuF 4 microdisks with a mean diameter of 7.63 μm. After treatment at 150 °C, the corresponding XRD result demonstrates that pure β-NaLuF 4 (JCPDS 27-0726) can be obtained. The corresponding sample is composed of a large number of regular hexagonal microdisks with smooth surfaces, and the small cubic nanoparticles completely disappear, as presented in the corresponding SEM image. From the above analysis, it can be concluded that a higher reaction temperature favors the formation of NaLuF 4 crystals with the hexagonal phase, which is ascribed to the fact that higher temperature favors nucleation and crystal growth 25 . The L/D ratio of the β-NaLuF 4 microcrystals is enhanced as the temperature increases from 150 °C to 200 °C. As is known, β-NaLuF 4 has a highly anisotropic structure 26 . The growth rate along the [101̄0] direction is lower than that along the [0001] direction at higher temperature, because Cit 3− adsorbs onto the {101̄0} facets more strongly than onto the {0001} facets, which results in the increase of the L/D ratio and the shape evolution from disks to prisms.
In order to reveal the effect of Y 3+ doping on the morphology and UC emission of NaLuF 4 crystals, a series of Y 3+ doped α-, β-, and α/β-mixed NaLuF 4 :Yb 3+ , Tm 3+ nano/micro-crystals were synthesized. Figure 3(a and b) shows the XRD patterns of α-NaLuF 4 :Yb 3+ , Tm 3+ nanocrystals and β-NaLuF 4 :Yb 3+ , Tm 3+ microcrystals introduced with different Y 3+ contents, prepared at 110 °C and 200 °C for 12 h, respectively. As can be seen, pure cubic phase (Fig. 3a) and pure hexagonal phase (Fig. 3b) can be obtained even when the Y 3+ content increases up to 79 mol% (the Y 3+ -free samples have been shown in Fig. 1). No extra peaks can be observed, which indicates that Y 3+ doping has no influence on the crystal structure of the cubic-phase nanocrystals and hexagonal-phase microcrystals. As demonstrated in the insets of Fig. 3(a and b), as the Y 3+ content increases from 0 to 79 mol%, the main diffraction peaks of the α and β phases move to lower angles. According to Bragg's law, 2d sinθ = nλ, where d is the interplanar distance, θ the diffraction angle and λ the X-ray wavelength. When Y 3+ is doped into the lattice, Lu 3+ is substituted by the relatively large Y 3+ , resulting in the expansion of the NaLuF 4 host lattice (Fig. 3c); thus the interplanar distance increases and the diffraction angle decreases. The lattice constants and unit-cell volumes of α-NaLuF 4 :20%Yb 3+ , 1%Tm 3+ doped with different Y 3+ concentrations are listed in Table 1; the larger unit-cell volumes are caused by the larger ionic radius of Y 3+ substituting Lu 3+ . Importantly, the lattice expansion may cause a distortion of the local symmetry around Tm 3+ , which would break the forbidden transitions of Tm 3+ and consequently enhance the UC emission intensity 27 . The above XRD results are well consistent with the corresponding SEM images.
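The peak-shift argument can be made quantitative with Bragg's law; the snippet below uses the Cu-Kα wavelength quoted in the Characterization section, while the two peak positions are hypothetical values chosen only to illustrate that a shift to lower angle corresponds to a larger interplanar distance.

```python
import numpy as np

lam = 1.5418  # Å, Cu-Kα wavelength used for the XRD measurements

def d_spacing(two_theta_deg, n=1):
    """Interplanar distance from Bragg's law, 2 d sin(theta) = n lambda."""
    return n * lam / (2.0 * np.sin(np.radians(two_theta_deg / 2.0)))

# Hypothetical peak positions: the doped sample peaks at a slightly lower angle.
print(d_spacing(28.30))   # undoped   -> smaller d
print(d_spacing(28.10))   # Y3+-doped -> larger d (expanded lattice)
```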
As shown in Fig. 4(a-f), the Y 3+ doped α-NaLuF 4 nanoparticles are composed of a large number of small cubic nanospheres (the Y 3+ -absent sample has been shown in Fig. 2a). The full width at half maximum (FWHM) gradually narrows as the Y 3+ concentration increases up to 79 mol%, as presented in Fig. 5. The average crystallite sizes can be calculated based on Scherrer's equation, D = 0.89λ/(βcosθ), where D is the crystallite size, λ the wavelength of the X-rays, β the corrected half width of the diffraction peak, and θ the diffraction angle; the factor 0.89 is characteristic of spherical particles. Thus, the mean diameters (Table 2) of the spheres were calculated to be about 17 nm, 17 nm, 18 nm, 19 nm, 22 nm, and 24 nm, respectively. From the above results, it can be seen that the replacement of Lu 3+ by the larger Y 3+ may lead to an increase in the size of the cubic-phase nanospheres.
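A minimal sketch of the Scherrer estimate follows; the FWHM and peak position are hypothetical inputs chosen to reproduce a size of roughly 17 nm, the smallest value reported above, and the wavelength is the Cu-Kα value from the Characterization section.

```python
import numpy as np

lam_nm = 0.15418  # nm, Cu-Kα

def scherrer_size(fwhm_deg, two_theta_deg, K=0.89):
    """Crystallite size D = K * lambda / (beta * cos(theta)), with beta in radians."""
    beta = np.radians(fwhm_deg)
    theta = np.radians(two_theta_deg / 2.0)
    return K * lam_nm / (beta * np.cos(theta))

print(scherrer_size(fwhm_deg=0.48, two_theta_deg=28.3))  # ~17 nm (hypothetical inputs)
```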
The SEM images of Y 3+ doped β-NaLuF 4 microparticles are displayed in Fig. 6(a-f). As exhibited in Fig. 6a, the Y 3+ -free sample has been shown in Fig. 2e. As the Y 3+ concentration increases from 10 to 20 mol%, short hexagonal microprisms with regularity and uniformity are obtained, as presented in Fig. 6(b and c). On average, the prisms have a length of 3.01 μm and 4.81 μm; a diameter of 6.72 μm and 7.42 μm, respectively. When the Y 3+ concentration increases to 40 mol%, irregular hexagonal microprisms with coarse surfaces are shown in Fig. 6d. The average length of the prisms is 14.08 μm, and the average diameter is 11.02 μm. With the Y 3+ content further increases to 60 and 79 mol% [ Fig. 6(e and f)], the corresponding samples consist of hexagonal microprisms with scrappy ends and concave centers on the top/bottom surfaces. The prisms have a mean size of 7.78 μm and 7.71 μm in length; 5.98 μm and 5.10 μm in diameter, respectively. The L/D ratios are calculated to be about 0.45, 0.65, 1.28, 1.30, and 1.51 when the Y 3+ content is 10, 20, 40, 60, and 79 mol%. Thus, the L/D ratio of hexagonal microprisms is increased as the Y 3+ content increases from 10 to 79 mol%. Under our experimental condition, the chelated Lu 3+ -Cit 3− complex and Y 3+ -Cit 3− complex were formed. As is known, both β-NaLuF 4 and β-NaYF 4 have high anisotropic structures. From Fig. 6a (Lu 3+ = 79 mol%, Y 3+ = 0 mol%) and Fig. 6f (Lu 3+ = 0 mol%, Y 3+ = 79 mol%), it can be clearly seen that the L/D ratio of β-NaYF 4 is larger than that of β-NaLuF 4 . Thus, the v 1 / v 2 ratio of β-NaYF 4 is higher than that of β-NaLuF 4 under the same experimental conditions (v 1 is the growth rate along [0001] direction, v 2 is the growth rate along [ 1 10 0] direction), leading to the enhancement of L/D ratio and the morphology evolution from short hexagonal microprisms to long hexagonal microprisms when the Y 3+ concentration increases from 10 to 79 mol%. According to Liu et al. 's report about the density functional theory calculation on Gd 3+ doped NaYF 4 :Yb 3+ , Er 3+ nanoparticles, the electron charge density in host lattice changes after Y 3+ is substituted by Gd 3+ in the crystal lattice 28 . Under our synthesis conditions, the replacement of Lu 3+ by larger Y 3+ is similar to the substitution of Y 3+ by larger Gd 3+ . Thus, it is creditable that Y 3+ doped into NaLuF 4 host lattice may change the electron charge density, leading to the electron cloud distortion in crystal lattice, which would cause the deformation of crystal lattice. The change in crystal lattice may result in the formation of irregular and distorted hexagonal microprisms with coarse surfaces when the Y 3+ content is 40 mol%. Figure 7 shows the XRD patterns (a) and the main diffraction peak (b) of different Y 3+ doped α/β-mixed NaLuF 4 :Yb 3+ , Tm 3+ nano/micro-crystals prepared at 130 °C for 12 h. As shown in Fig. 7a, all samples are composed of a mixture of cubic and hexagonal phases (the Y 3+ -free sample has been shown in Fig. 1). Figure 7b displays the main diffraction peak of cubic phase shifts towards lower angles as the Y 3+ content increases from 0 to 79 mol%, which is mainly attributed to the expansion of crystal lattice after Lu 3+ is replaced by the relatively large Y 3+ . The shifting peak reveals that Y 3+ can be doped into the host lattice. The corresponding SEM images [ Fig. 8(a-f)] present two distinct particle morphologies including large microdisks (hexagonal phase) and small nanoparticles (cubic phase). 
It can be obviously seen that numerous spherical nanoparticles are attached on the surfaces of microdisks. The corresponding diameters of the disks are 7.63 μm, 5.64 μm, 4.79 μm, 3.50 μm, 2.66 μm, and 2.33 μm, respectively. The reduced diameter of the disks can be ascribed to the fact that β-NaYF 4 has higher v 1 /v 2 ratio than β-NaLuF 4 under the same experimental conditions.
The above results demonstrate that reaction temperature has a significant effect on the crystal structure of the products, and that Y 3+ doping may cause size-tuning and shape evolution of the crystals. Figure 9 summarizes the formation processes of Y 3+ -absent/doped NaLuF 4 :Yb 3+ , Tm 3+ nano/micro-crystals synthesized under different experimental conditions.
UC photoluminescence properties. Figure 10(a-c) presents the UC emission spectra of the samples, and the corresponding population pathways are shown in Fig. 11. For the 450 nm emission, the Tm 3+ 1 D 2 level is populated by the ET1+ET2+CR processes (ET = energy transfer, CR = cross relaxation). For the 477 nm and 649 nm emissions, the Tm 3+ 1 G 4 level is populated by the ET1+ET2+ET3 processes. For the 696 nm emission, the Tm 3+ 3 F 3 level is populated by the ET1+ET2 processes. As can be seen from Fig. 10(a-c), the blue and red UC emission intensities are distinctly enhanced as the Y 3+ content increases from 0 to 40 mol%, and then decline for contents of 40-79 mol%. Thus, the strongest UC luminescence intensities are observed in the samples with 40 mol% Y 3+ doping. Compared to the Y 3+ -free samples, the integrated spectral intensities in the range of 445-495 nm from the α-, β-, and α/β-mixed NaLuF 4 :20%Yb 3+ , 1%Tm 3+ crystals with 40 mol% Y 3+ doping are increased by 9.7, 4.4, and 24.3 times, respectively, and the red UC intensities in the range of 630-725 nm by 4.6, 2.4, and 24.9 times, respectively. According to a previous report 30 , when the Li + content is below 7 mol%, Li + substitutes Na + , causing shrinking of the host lattice; however, as the Li + content increases from 7 to 15 mol%, Li + begins to occupy interstitial sites, leading to expansion of the crystal lattice; thus the sample with 7 mol% Li + doping has the highest UC emission intensity, owing to the lowest crystal-field symmetry around the activators. Besides, Y 3+ doping causes electron cloud distortion in the host lattice, resulting in the tunable size of the as-prepared samples. As is known, for larger crystals the nonradiative energy transfer processes of Tm 3+ decrease because of their fewer surface quenching sites 28 , which favors UC emission. Thus, for the Y 3+ doped β-NaLuF 4 :20%Yb 3+ , 1%Tm 3+ microcrystals, the larger size (relative to the Y 3+ -absent samples) of the samples with 40 mol% Y 3+ doping may make a small contribution to the enhancement of the UC luminescence intensity. Figure 12 presents the decay curves of the (a) 1 G 4 → 3 H 6 and (b) 1 G 4 → 3 F 4 transitions of Tm 3+ in α-NaLuF 4 :20%Yb 3+ , 1%Tm 3+ nanocrystals doped with 0, 40 and 79 mol% Y 3+ . The effective lifetimes were obtained from τ = ∫I(t) dt/I max , where I(t) represents the emission intensity at time t and I max the peak intensity of the decay curve. The calculated lifetimes are listed in Table 3. The total decay rate is equal to the sum (A r+nr = A r + A nr ) of the radiative (A r ) and nonradiative (A nr ) transition probabilities. Thus, the lowest luminescence lifetime, found in the sample with 40 mol% Y 3+ doping, is mainly caused by its maximum emission intensity.
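The effective-lifetime definition quoted above can be evaluated numerically once a decay trace is available; the sketch below applies it to a synthetic single-exponential decay (0.8 ms) purely as a consistency check, since the real traces of Fig. 12 are not reproduced here.

```python
import numpy as np

def effective_lifetime(t, intensity):
    """tau = integral of I(t) dt divided by the peak intensity I_max (trapezoid rule)."""
    integral = np.sum(0.5 * (intensity[1:] + intensity[:-1]) * np.diff(t))
    return integral / intensity.max()

t = np.linspace(0.0, 5e-3, 2000)   # s, synthetic time axis
I = np.exp(-t / 0.8e-3)            # single-exponential decay with tau = 0.8 ms
print(effective_lifetime(t, I))    # ~0.8 ms, recovering the input lifetime
```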
Conclusion
In summary, cubic nanospheres, hexagonal microdisks, and hexagonal microprisms can be achieved by simply adjusting the reaction temperature. It is found that higher temperature favors nucleation and crystal growth. The effect of Y 3+ doping on the morphology and UC emission of the as-prepared samples was systematically investigated. The results demonstrate that Y 3+ doping may cause size-tuning and shape evolution of the crystals. Compared to their Y 3+ -free counterparts, the integrated spectral intensities in the range of 445-495 nm from α-, β-, and α/β-mixed NaLuF 4 :20%Yb 3+ , 1%Tm 3+ crystals with 40 mol% Y 3+ doping are increased by 9.7, 4.4, and 24.3 times, respectively; red UC luminescence intensities in the range of 630-725 nm are enhanced by 4.6, 2.4, and 24.9 times, respectively. It is proposed that the increased UC emission intensity is mainly ascribed to the deformation of the crystal lattice, due to the electron cloud distortion in the host lattice after Y 3+ doping. Besides, for the Y 3+ doped β-NaLuF 4 :20%Yb 3+ , 1%Tm 3+ microcrystals, the larger size (relative to the Y 3+ -absent samples) of the samples with 40 mol% Y 3+ doping may make a small contribution to the enhancement of the UC luminescence intensity. As a result of their intense UC emission, these phosphors may be suitable for optoelectronic devices.
Preparation. All samples were prepared based on our previously reported procedures [22][23][24] . For the synthesis of Y 3+ -absent α-NaLuF 4 :20%Yb 3+ , 1%Tm 3+ nanocrystals, 3 mmol of citric acid (2 M, 1.5 mL), 5 mmol of NaOH (4 M, 1.25 mL) and 10 mL of deionized water were mixed and stirred for 10 min. Then 1 mmol of RE(NO 3 ) 3 (0.79 mmol of Lu(NO 3 ) 3 (1 M, 0.79 mL), 0.2 mmol of Yb(NO 3 ) 3 (0.5 M, 0.4 mL), and 0.01 mmol of Tm(NO 3 ) 3 (0.1 M, 0.1 mL)) was added to the above mixture, which was then stirred for 30 min to form the RE-Cit 3− complex. Subsequently, 16 mL of aqueous solution containing 9 mmol of NaF (1 M, 9 mL) and 7 mL of deionized water were added to the chelated RE-Cit 3− complex to form a colloidal suspension, which was kept stirring for another 30 min. Finally, the suspension was transferred into a 50 mL Teflon vessel, sealed in an autoclave and maintained at 110 °C for 12 h. After the autoclave was cooled to room temperature naturally, the final products were separated by centrifugation, washed with ethanol and deionized water several times, and then dried in air at 60 °C for 12 h. Other samples were prepared by a similar process, only tuning the reaction temperature (110-200 °C) and the Y 3+ content (0-79 mol%).
Characterization. The crystal structure of the as-prepared samples was confirmed by powder X-ray diffraction (XRD) using the D-Max 2200VPC diffractometer from Rigaku Company (Cu-Kα radiation, λ = 1.5418 Å). The morphology was observed with an Oxford Quanta 400 F thermal field emission environmental scanning electron microscope (SEM). UC photoluminescence spectra were recorded on an Edinburgh Instruments FLS980 combined fluorescence lifetime and steady-state fluorescence spectrometer equipped with a 1 W 980 nm laser diode. | 4,698.4 | 2017-10-23T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
Discovery of a sub-Keplerian disk with jet around a 20Msun young star. ALMA observations of G023.01-00.41
It is well established that Solar-mass stars gain mass via disk accretion, until the mass reservoir of the disk is exhausted and dispersed, or condenses into planetesimals. Accretion disks are intimately coupled with mass ejection via polar cavities, in the form of jets and less collimated winds, which allow mass accretion through the disk by removing a substantial fraction of its angular momentum. Whether disk accretion is the mechanism leading to the formation of stars with much higher masses is still unclear. Here, we are able to build a comprehensive picture for the formation of an O-type star, by directly imaging a molecular disk which rotates and undergoes infall around the central star, and drives a molecular jet which arises from the inner disk regions. The accretion disk is truncated between 2000-3000au, it has a mass of about a tenth of the central star mass, and is infalling towards the central star at a high rate (6x10^-4 Msun/yr), as to build up a very massive object. These findings, obtained with the Atacama Large Millimeter/submillimeter Array at 700au resolution, provide observational proof that young massive stars can form via disk accretion much like Solar-mass stars.
Introduction
Models of massive star formation in the disk accretion scenario predict that circumstellar disks could reach radii between 1000 and 2000 au (e.g., Krumholz et al. 2007;Kuiper et al. 2011;Harries et al. 2017). Observationally, evidence for gas rotation near young stars with tens of Solar masses exists (e.g., Qiu et al. 2012;Johnston et al. 2015;Cesaroni et al. 2017), although the amount of gas mass undergoing rotation is a significant fraction of the star mass, and these envelopes, possibly hosting an inner disk, might be prone to fragmentation due to self-gravity and/or develop spiral instabilities (e.g., Kratter et al. 2010;Kuiper et al. 2011;Klassen et al. 2016;Chen et al. 2016;Meyer et al. 2017Meyer et al. , 2018. The interplay between disks and jets provides a mechanism to ensure mass accretion through the disk. Notwithstanding their connection, there is poor evidence of disk-jet systems in the inner few 1000 au of stars with tens of Solar masses (e.g., Beltrán & de Wit 2016), such as those resolved around Solar-and intermediate-mass stars (e.g., Lee et al. 2017a,b;Cesaroni et al. 2005Cesaroni et al. , 2013Cesaroni et al. , 2014. In order to establish the disk accretion scenario as a viable route for the formation of the most massive stars (e.g., Kuiper et al. 2010), we seek to simultaneously resolve the spatial morphology of the disk and the regions where the jet originates, and to determine whether or not the disk gas is in centrifugal equilibrium.
In recent years, we have identified an "isolated" massive young star in the hot molecular core (HMC) of the star-forming region G023.01−00.41, which is located about half way between the Sun and the Galactic center, at a parallax distance of 4.6 kpc (Brunthaler et al. 2009). The HMC emits a luminosity of 4 × 10 4 Lsun (Sanna et al. 2014), and stands out among the strongest Galactic CH 3 OH maser sources (Menten 1991; Cyganowski et al. 2009; Sanna et al. 2010; Moscadelli et al. 2011; Sanna et al. 2015). The HMC luminosity corresponds to that of a zero-age-main-sequence (ZAMS) star with a mass of 20 Msun and an O9 spectral type (Ekström et al. 2012). The HMC is located at the center of a collimated CO outflow whose emission extends up to parsec scales (Araya et al. 2008; Furuya et al. 2008; Sanna et al. 2014); we tracked the driving source of the outflow down to the HMC center, where we imaged a collimated radio thermal jet associated with strong H 2 O maser shocks (Sanna et al. 2010, 2016). The different outflow tracers are aligned on the sky plane and constrain the direction of the outflow axis within an uncertainty of a few degrees (Sanna et al. 2016). An accurate knowledge of the outflow axis allows us to pinpoint the young stellar object (YSO) position, and thus to circumvent the usual problem of distinguishing between velocity gradients due to expanding or rotational motions.
Notes to Table 1. Column 1: granted 12m-array configuration. Columns 2 and 3: target phase center (ICRS system). Column 4: source radial velocity. Column 5: minimum and maximum rest frequencies covered. The ALMA IF system was tuned at a LO frequency of 226.544 GHz, and made use of 4 basebands evenly placed in the lower and upper sidebands. Column 6: maximum spectral resolution required. Columns 7, 8, and 9: bandpass, phase, and absolute flux calibrators employed at both runs. Calibration sources were set by the ALMA operators at the time of the observations. Column 10: required beam size at a representative frequency of 220.6 GHz.
Here, we exploit the information about the star-outflow geometry in G023.01−00.41, and make use of Atacama Large Millimeter/submillimeter Array (ALMA) observations (Sect. 2) to directly resolve a disk-jet system associated with an O-type YSO. We first show that the line emission from dense gas reveals a molecular disk which extends up to radii of 2000-3000 au from the central star; the disk is warped in the outer regions (Sect. 3). Then, we make use of the position-velocity (pv-) diagrams of gas along the disk plane to show that gas is falling in close to free fall and slowly rotating with sub-Keplerian velocities; at radii near 500 au, gas rotation takes over and approaches centrifugal equilibrium at shorter radii (Sect. 4). In the process, we image the molecular jet component which arises from the inner disk regions. We corroborate our conclusions by comparing the observed pv-diagrams with those obtained for a disk model around a 20 Msun star (Appendix A).
Observations and calibration
We made use of our previous Submillimeter Array (SMA) observations at 1 mm (Sanna et al. 2014), which covered a range of angular resolutions between 3″ and 0.7″, to set up the requirements for higher resolution observations with ALMA.
We observed the star-forming region G023.01−00.41 with the 12 m-array of ALMA in band 6 (211-275 GHz). Observations were conducted under program 2015.1.00615.S during two runs, on 2016 September 5 and 16 (Cycle 3), with precipitable water vapor of 1.5 mm and 0.6 mm, respectively. The 12 m-array observed with 45 antennas covering a baseline range between 16 m and 3143 m, with the aim to achieve an angular resolution of 0.2″ and to recover extended emission over a maximum scale of 2″.
We made use of the dual-sideband receiver, in dual polarization mode, to record 12 narrow spectral windows, each 234 MHz wide, and an additional wide band of 1875 MHz. The narrow spectral windows were correlated with 960 channels and Hanning smoothed by a factor of 2, achieving a velocity resolution of 0.7 km s −1 . Individual spectral windows were placed to cover a number of molecular lines of high-density tracers (> 10 6 cm −3 ) such as methanol (CH 3 OH) and methyl cyanide (CH 3 CN). These settings provide a line spectral sampling of 13 channels for an expected linewidth of 9 km s −1 . The wide band was correlated with 3840 channels and centered at a rest frequency of 217.860 GHz. This band was used to construct a pure continuum image from the line-free channels.
We spent 1 hour of time on-source during a total observing time of 2 hours, which includes calibration overheads. Time constraints were set to achieve a thermal rms per (narrow) spectral channel of about 2 mJy beam −1 (with 36 antennas), which corresponds to a brightness temperature of 1.4 K over a beam of 0.2″. Additional observation information is summarized in Table 1.
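The quoted conversion between channel rms and brightness temperature follows from the Rayleigh-Jeans relation for a Gaussian beam; the helper below reproduces it approximately, using the representative frequency and beam size given above (the exact value depends on the precise frequency and beam adopted).

```python
def brightness_temperature(S_mJy, nu_GHz, bmaj_arcsec, bmin_arcsec):
    """Rayleigh-Jeans brightness temperature (K) of a flux density per Gaussian beam."""
    return 1.222e3 * S_mJy / (nu_GHz**2 * bmaj_arcsec * bmin_arcsec)

# 2 mJy/beam at 220.6 GHz in a 0.2" beam -> ~1.3 K, close to the quoted 1.4 K
print(brightness_temperature(2.0, 220.6, 0.2, 0.2))
```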
The visibility data were calibrated with the Common Astronomy Software Applications (CASA) package, version 4.7.1 (r39339), making use of the pipeline calibration scripts. We determined the continuum level in each spectral window separately. We made use of the pipeline spectral image cubes and selected the line-free channels from a spectrum integrated over a circular box of 2 in size, which was centered on the target source. The task uvcontsub of CASA was used to subtract a constant continuum level across the spectral window (fitorder = 0). We imaged the line and continuum emission with the task clean of CASA. In each individual map, the ratio between the image rms and the thermal noise, expected from the ALMA sensitivity calculator, is near unity. For the continuum map, we integrated over a line-free bandwidth of 218 MHz selected from the wide spectral window, and achieved a signal-to-noise ratio of about 100. Imaging information is summarized in Table 2.
Results
According to the "observer's definition" of disk framed in Cesaroni et al. (2007), necessary conditions for a disk detection are (i) a flattened core of gas and dust perpendicular to the outflow direction, and (ii) a velocity gradient along the major axis of the core. We therefore want to analyze the spatial morphology and kinematics of dense gas which lies at the center of the bipolar outflow, and, specifically, along a position angle of −33 • on the sky plane (measured east of north). This direction traces the projection of the equatorial plane, hereafter the "disk plane", perpendicular to the molecular outflow and the radio jet orientation (or, simply, the outflow), which is inclined by less than 30 • with respect to the sky plane (Sanna et al. 2014(Sanna et al. , 2016. In order to gain an overall view of the mass distribution around the star, we first looked for a molecular tracer of cold and dense gas (> 10 6 cm −3 ). In the left panel of Fig. 1, we plot the velocity-integrated map of the CH 3 OH gas emission at a frequency of 218.440 GHz (in color); this CH 3 OH transition has an upper excitation-energy of only 45 K (E up ). To emphasize the bulk of the emission, we integrated over a small velocity range (2 km s −1 ) near the line peak.
For clarity, we have rotated this map in order to align the outflow axis, which lies at a position angle of +57° on the sky plane (Sanna et al. 2016), with the vertical axis of the plot. The blueshifted (approaching) lobe of the outflow points now to the north. The horizontal axis is aligned with the disk plane, which is marked in the plot at multiples of 1000 au from the central star position. The position of the central star was set so as to maximize the symmetry of the blue- and redshifted sides of the disk (see Figs. 2 and 4), and has coordinates of R.A. (J2000) = 18h 34m 40.283s and Dec. (J2000) = −9° 00′ 38.310″ (± 30 mas per coordinate). This position also coincides with the peak position of the radio continuum emission observed with the Very Large Array at the same resolution as the ALMA beam (Sanna et al. 2016). The same geometry and symbols will be used in all maps for comparison.
Figure 1 shows that dense gas condenses in the direction perpendicular to the outflow axis. The CH 3 OH gas emission extends up to a deconvolved radius of 2380 ± 840 au (or 520 mas) from the central star, and has a ratio between the minor-to-major axes of 0.2. These values are calculated by Gaussian fitting the CH 3 OH emission within the 60% contour level (dotted black contour in Fig. 1). We use this threshold to select the region where the CH 3 OH iso-contours have a nearly constant height. This piece of evidence satisfies the first condition, that of a highly flattened structure perpendicular to the outflow, and hereafter we refer to the 60% contour level as the "disk profile". Below the 60% contour level, the CH 3 OH emission emerges at (relatively) large radii from the star (>2000 au) and appears significantly warped with respect to the disk plane. In this paper, we focus on the analysis of the emission within the disk profile.
Fig. 1. Left: moment-zero map of the CH 3 OH (4 2,2 -3 1,2 ) E line emission (colors) combined with the continuum map of the dust emission at 1.37 mm (dashed white contours). Maps have been rotated clockwise by a position angle of −57°, in order to align the (projected) outflow axis, drawn on the left side, with the north-south axis; negative outflow heights indicate the receding outflow direction. The CH 3 OH emission was integrated in the range 78.8-80.2 km s −1 ; the wedge on the top left corner quantifies the line intensity, from its peak to the maximum negative in the map. The lowest dashed contour corresponds to the 10 σ level of the dust map, and the inner contour traces the 50% level from the continuum peak emission. The disk plane is drawn at three radii: from the central star position (star) to 1000 au (yellow), from 1000 to 2000 au (black), and up to 3000 au (pink). The dotted black contour draws the 60% level of the CH 3 OH emission, which identifies the disk profile (see text). The synthesized ALMA beams, for the dust continuum map (dashed circle) and the line map (red circle), are shown in the bottom left corner. Right: first-moment map (colors) of the CH 3 OH (4 2,2 -3 1,2 ) E line emission plotted in the left panel. The outer contour traces the 40% level of the moment-zero map; the inner dotted contour is the same as in the left panel. The LSR velocity scale is drawn in the upper left.
Notes to Table 2. Columns 1 and 2 list the tracer and central frequency of each map (or molecular transition), respectively. For the line emission, column 3 specifies the upper excitation-energy of the molecular transition. Column 4 reports the Briggs' robustness parameter set for the imaging. Column 5 reports the restoring (circular) beam size, set equal to the geometrical average of the major and minor axes of the dirty beam size. Columns 6 and 7 report the rms of the map and the peak brightness of the emission, respectively. (a) Units of mJy beam −1 km s −1 .
In the right panel of Fig. 1, we plot the intensity-averaged map of the velocity field of the CH 3 OH gas (i.e., the firstmoment map). There is a clear gradient between the redshifted and blueshifted velocities moving from the left to the right side of the star along the disk plane. This second piece of evidence fulfills the more stringent requirement for confirming a disk candidate. On a closer look, the velocity field is actually skewed along the main outflow direction. This is not surprising, since, around Solar-mass stars, CH 3 OH emission has been shown to be excited both in the disk and jet regions (e.g., Leurini et al. 2016;Lee et al. 2017c;Bianchi et al. 2017). Indeed, contamination of the velocity gradient across the disk plane, by the blueshifted and redshifted gas velocities due to the outflow expansion, to the north and south of the star respectively, can explain this apparent velocity gradient.
To better investigate the gas kinematics inside the disk profile, in Fig. 2 we plot a number of maps of the CH 3 OH (10 2,8 -9 3,7 ) A + line emission at different velocities. This transition has an excitation-energy (E up = 165 K) about four times higher than that of the line in Fig. 1. The upper row shows the spatial morphology of the CH 3 OH gas emission at increasing (redshifted) LSR velocities, from left to right, and underlines the eastern disk side. On the contrary, the lower row shows the CH 3 OH gas distribution at decreasing (blueshifted) LSR velocities, which arise from the western side of the disk. Each row spans a velocity range of 2.8 km s −1 , which is the rotation speed expected around a 20 M star at an outer radius of 2300 au (assuming centrifugal equilibrium). Figure 2 resolves the spatial distribution of gas around the star at the different velocities, providing direct proof that the CH 3 OH gas is rotating clockwise around the outflow axis (as seen from the north), with the approaching and receding sides of the disk to the west and east of the central star, respectively. The pair of maps on the same column to the left shows the transition between the eastern (upper) and western (lower) sides of the disk emission, and can be used to infer the rest velocity of the star (see below). Between velocities of 80 and 79 km s −1 , the CH 3 OH gas emission is maximally stretched along the disk plane on either sides of the star; at higher (lower) velocities, the emission progressively brightens southeastward (northwestward) of the disk plane, in agreement with the redshifted (blueshifted) outflow lobe. A posteriori, this behavior confirms that the observed molecular species can trace both the disk kinematics and the (inner) outflowing gas (see also Fig. 4, right).
In the Fig. 3, we provide the same analysis for the CH 3 OH (18 3,16 -17 4,13 ) A + line emission, which has an excitation-energy of 446 K (E up ), and is more affected by the outflow emission. For this line, the transition between the red-and blueshifted sides of the disk occurs at lower velocity than in Fig. 2. The rest velocity of the star, V , was estimated from the median value between Fig. 2 and Fig. 3, and is set to +79.1 km s −1 with an uncertainty of ± 0.4 km s −1 . We explicitly note that, since the disk plane is not seen exactly edge-on, the blueshifted outflow emission close to the star, which lies between the observer and the disk, shifts the velocity field towards the lower velocities, when weighting the velocity channels by the line intensity. In the first-moment map of Fig. 1, the net result is that the redshifted velocities are down-weighted, and the central velocity is shifted by about −1.5 km s −1 with respect to the rest velocity of the star.
Overall, in Figs. 1 and 2 we resolve the gas emission within a few 1000 au of the O-type YSO, which allows us to prove the presence of a disk, and prompts us to study its kinematics in detail.
Discussion
In the following, we want to study the dependence of the gas velocity on the distance to the star, and to quantify the disk mass. Together with the observations, we discuss the results of a radiation transfer model for a circumstellar disk around a 20 M star (Appendix A). We have simulated the disk appearance for our specific observational conditions, and produced synthetic maps of the line and dust emission around the young star, in order to compare the observed pv-diagrams with those expected under two simple assumptions: the disk is falling in towards the central star due to gravitational attraction; the disk is rotating around the central star in centrifugal equilibrium.
Position-velocity diagrams
In Fig. 4, we study the velocity profile of gas along the disk plane through the pv-diagrams of two molecular species having a common methyl group, CH 3 OH and CH 3 CN. On the same column to the left, we compare the pv-diagrams of the CH 3 OH (10 2,8 -9 3,7 ) A + and CH 3 CN (12 4 -11 4 ) line transitions, which have similar excitation-energies of 165 K and 183 K, respectively. The two plots span the same ranges in space and velocity, and the lower intensity contour is drawn as to include an outer radius of 2400 au. Both pv-diagrams are double peaked, and the peaks are symmetrically displaced with respect to the star position and velocity. However, the CH 3 CN line emission shows a steeper velocity gradient at small offsets from the star, which indicates that this transition traces a region closer to the disk center than the CH 3 OH transition. This piece of evidence also explains the lower redshifted tail of the CH 3 CN pv-diagram, which traces the higher rotation speeds approaching the central star, as it can be expected for Keplerian-like rotation (v rot ∝ R −1/2 ). This component is missing in the roundish contours of the CH 3 OH emission. In Fig. 5, we plot the pv-diagram of the CH 3 CN (12 3 -11 3 ) line, which has an upper excitation-energy of 133 K, and shows a similar profile to that of the K = 4 line. Their agreement, when compared to the pv-diagram of the CH 3 OH line, strengthens the idea that the CH 3 CN gas allows us to peer into the innermost disk regions, and this holds for a broad range of line excitationenergies. On the other hand, both the CH 3 CN and CH 3 OH lines show increasing blueshifted velocities moving close to the star. In the right column of Fig. 4, we show a channel map at the blueshifted velocity of +73.2 km s −1 for both transitions. These maps clearly show a compact (molecular) outflow component, namely the jet, to the north of the star, corresponding to the direction of the blueshifted outflow lobe. Since the blueshifted outflow emission lies in between the observer and the disk, we cannot neglect its influence on the pv-diagrams (see below). Theory predicts that jets are launched and collimated in the inner few 100 au from the central star (e.g., Frank et al. 2014;Kuiper et al. 2015Kuiper et al. , 2016. The jet width of 1510 ± 73 au (or 329 mas), determined from the deconvoled (Gaussian) size of the CH 3 OH emission, allows us to set an upper limit of 800 au to the radius where the jet originates. Notably, we do not detect the redshifted lobe of the jet, which lies in the background with respect to the disk, and interpret this result as evidence for the inner disk regions being (partially) optically thick at 1 mm (e.g., Sanna et al. 2014;Forgan et al. 2016).
In Fig. 6, we compare the observed pv-diagram of the CH 3 CN (12 4 -11 4 ) line, which is sensitive to the inner disk velocities, with the modeled pv-diagrams for two different velocity fields. In the left panel, we consider an infalling disk around a 20 Msun star (colors), where the gas is moving radially at 70% of the corresponding free-fall velocity (v ff = (2GM/R) 1/2 , with M the stellar mass). The disk extends from the dust sublimation radius, where the disk temperature approaches 1500 K, up to a radius of 3000 au from the central star. We find that this fraction of the free-fall velocity, combined with an inner cutoff near 500 au, best matches the velocity range covered by the observed pv-diagram (green contours). Note that, since the infalling motion starts with non-zero velocities at the outer radius, this produces the inner hole near zero offsets in the modeled pv-diagram. The infalling profile well reproduces the observed pv-diagram in the second and fourth quadrants, which are the quadrants forbidden under the assumption of purely rotational motion (right panel).
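For orientation, the characteristic speeds invoked here follow directly from the stellar mass; the snippet below evaluates the Keplerian speed at the outer disk radius quoted with Fig. 2 and 70% of the free-fall speed at the 500 au cutoff, the infall fraction adopted in the model.

```python
import numpy as np

G, M_sun, au = 6.674e-11, 1.989e30, 1.496e11   # SI units
M_star = 20.0 * M_sun

def v_kepler(r_au):
    return np.sqrt(G * M_star / (r_au * au)) / 1e3        # km/s

def v_freefall(r_au):
    return np.sqrt(2.0 * G * M_star / (r_au * au)) / 1e3  # km/s

print(v_kepler(2300))         # ~2.8 km/s, the velocity range spanned by each row of Fig. 2
print(0.7 * v_freefall(500))  # ~5.9 km/s, 70% of free fall at the 500 au cutoff
```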
In the right panel of Fig. 6, we show the comparison of the observed pv-diagram (green contours) with that expected for a disk in Keplerian rotation around a 20 M star (colors). We also draw the outer contour of the infalling disk model (cyan contour), in order to highlight the regions excluded by simple rotation. At variance with the infalling profile, we find that a Keplerian-like profile better reproduces the higher velocities in the first and third quadrants, when we assume an inner cutoff near 250 au and outer disk radius of 2400 au, similar to the deconvolved size of our disk profile (Fig. 1).
This analysis shows that the velocity field of gas through the disk is a combination of infalling and rotational motions, within a radius of 3000 au from the central star. On the one hand, the detection of a radial flow towards the central star implies that the condition of centrifugal equilibrium does not hold outside a radius of 500 au, where the velocity field has to be a combination of sub-Keplerian rotation and infalling motion. This scenario resembles that of the sub-Keplerian infalling disks modeled by Seifried et al. (2011) under the presence of strong magnetic fields. Our model predicts a mass infall rate at a radius of 500 au of 6 × 10 −4 M yr −1 . On the other hand, our data suggest that the inward gas flow slows down in the inner disk regions (≤ 500 au), where the pv-diagram resembles that of a centrifugally supported disk. However, these inner regions are only sampled by two beams at the current resolution. We explicitly note that, although we fixed the star mass in our model, the sim-ple agreement between observed and modeled pv-diagrams provides indirect confirmation of the central mass determined from the bolometric luminosity.
Moreover, in the right panel of Fig. 6 it is evident that there is an excess of blueshifted emission near zero offsets. This emission coincides with the molecular jet detected in our channel maps (right column of Fig. 4). If the molecular species does not trace the disk gas exclusively, and the disk plane is not seen edgeon, we show that the blueshifted outflow emission close to the star leaves a strong imprint in the pv-diagrams. In the Fig. 7, we provide the same pv-analysis for the CH 3 OH (18 3,16 -17 4,13 ) A + line emission. Interestingly, at the higher excitation-energy of this line, the blueshifted jet emission strongly affects the eastern side of the pv-diagram. In the channel maps (right panels), the spatial distribution of the jet emission bends to the redshifted side of the disk indeed, and progressively aligns to the outflow axis moving away from the disk plane. These maps definitely underline that the jet emission arises in the inner disk regions (< 800 au), providing a mechanism to transfer angular momentum away from the disk, and ensuring an inward flow of mass towards the central star.
Under steady state accretion, the mass infall rate of our model would imply that the final star mass could be as large as three times the current value, assuming the accretion phase lasts for about 10 5 yr. Nevertheless, it has been recently shown that the accretion process of young massive stars undergoes episodic accretion bursts, with the accretion rate suddenly rising by a few orders of magnitude (Caratti o Garatti et al. 2017;Hunter et al. 2017). Therefore, the net mass accretion onto the central star might exceed that inferred from the model.
Disk mass
In Fig. 8, we overlap a map of the observed dust continuum emission at 1.37 mm (dashed red contours) with that of the modeled dust emission from the disk at the same wavelength (white contours). We draw the contours of the observed continuum emission down to a 20% level of the peak of 14.6 mJy beam −1 . Below this level, corresponding to a radius of about 2000 au from the star, the dust continuum emission suffers contamination from the outer envelope, which is outlined by the lower, dashed, white contour in Fig. 1 (corresponding to about the 10% peak level). Our model predicts a dust continuum peak of 12.4 mJy beam −1 , in excellent agreement with the observed intensity.
We compare the disk mass estimated from the dust continuum emission with that predicted by the model. Following Hildebrand (1983), we estimate the disk mass from the continuum flux of 62.9 mJy within the 20% contour. We use an average gas temperature of 279 K, determined from the model between radii of 10 au and 2000 au, and assume a standard gas-to-dust mass ratio of 100 and a dust opacity of 1.0 cm 2 g −1 at 1.3 mm (Ossenkopf & Henning 1994, same for the model). From these parameters, we calculate a disk mass of 1.6 M and assign an uncertainty of ± 0.3 M , corresponding to an uncertainty of ± 20% of the measured flux. Our model predicts a disk mass of 2.5 M within a radius of 3000 au, which decreases to 1.4 M at 2000 au. Although the model does not account for a dusty envelope surrounding the disk, which might contribute to the observed dust flux, these values are in excellent agreement with the measured disk mass. We conclude that very likely the disk mass is much less (∼ 10%) than the mass of the star. Johnston et al. (2015) Previous ALMA observations by Johnston et al. (2015) reported about a circumstellar disk around a young star in AFGL 4176, which has a mass comparable to G023.01−00.41. These two sources have similar distances from the Sun and were observed with comparable resolution as well, allowing for a direct comparison. They might represent two snapshots for the formation of an O-type star, with G023.01−00.41 being younger, and having the potential to become three-times more massive than AFGL 4176 at the end of the accretion phase. In the following, we highlight common features and differences:
Comparison with Johnston et al. (2015)
Previous ALMA observations by Johnston et al. (2015) reported a circumstellar disk around a young star in AFGL 4176, which has a mass comparable to G023.01−00.41. These two sources have similar distances from the Sun and were observed at comparable resolution, allowing a direct comparison. They might represent two snapshots of the formation of an O-type star, with G023.01−00.41 being younger and having the potential to become three times more massive than AFGL 4176 at the end of the accretion phase. In the following, we highlight common features and differences:
- Both disks extend up to radii of ∼ 2000 au from the central star, and have similar average temperatures of ∼ 200 K, as determined by LTE analysis of the CH 3 CN K-ladders.
- There is a large difference (5×) in the estimate of the disk mass for these two objects, of 1.6 Msun and 8 Msun for G023.01−00.41 and AFGL 4176, respectively. However, this difference is due to the different dust opacities at 1 mm assumed in the calculations, of 1.0 cm 2 g −1 and 0.24 cm 2 g −1 respectively, whereas the integrated fluxes at the same wavelength are similar (50-60 mJy). We argue that the lower dust opacity used by Johnston et al. is more appropriate for diffuse clouds (Draine 2003). If consistent dust opacities are used, the two disks also have similar masses of ∼ 2 Msun, which amount to ∼ 10% of the central star mass.
- The main difference between the two disks is related to the disk kinematics. On the one hand, Johnston et al. found Keplerian-like rotation up to the outer disk radii, meaning that nearly centrifugal equilibrium slows down any inward flow of mass through the disk. On the other hand, we find that the disk surrounding G023.01−00.41 is infalling and sub-Keplerian, and only at radii of a few 100 au might approach centrifugal equilibrium. According to Kuiper et al. (2011, their Fig. 4), larger Keplerian disks are expected at later times, whereas Keplerian rotation around the youngest stars is confined to the inner disk regions. This evidence suggests that G023.01−00.41 is still in an active phase of accretion and much younger than AFGL 4176.
Conclusions
We report Atacama Large Millimeter/submillimeter Array (ALMA) observations, at wavelengths near 1 mm, of dense molecular gas and dust in the vicinity of an O-type young star, with a linear resolution as good as 700 au and a line sensitivity of 1 K. We targeted the luminous hot-molecular-core G023.01−00.41 after selecting the best candidate disk-jet system from our previous observations. We have resolved a (molecular) disk-jet system around a young star which currently attains a mass of 20 Msun. We present a kinematic analysis of the position-velocity diagrams of dense gas along the disk midplane, and compare them with the position-velocity diagrams simulated with a radiation transfer model. We show that the disk is falling in close to free fall and slowly rotating with sub-Keplerian velocities, from radii of about 2000 au down to 500 au from the central star, where we estimate a mass infall rate of 6 × 10 −4 Msun yr −1 . The disk mass is a small fraction of the star mass (∼ 10%). Furthermore, we are able to image the jet emission which arises from the inner disk radii (< 800 au), and show that its blueshifted emission leaves a strong imprint in the position-velocity diagrams.
Fig. 7. Same as Fig. 4, but for the CH 3 OH (18 3,16 -17 4,13 ) A + line emission with E up of 446 K. The right panels show two channel maps of the jet emission at a V LSR of +73.2 (middle) and +72.5 km s −1 (right).
Fig. 8. Analysis of the dust continuum emission from the disk. Comparison of the dust emission observed with ALMA towards G023.01−00.41 (red dashed contours) with the modeled dust emission from a circumstellar disk around a 20 Msun star (colors and white contours). The brightness scale of the modeled emission is quantified by the righthand wedge. The red dashed contours are plotted at steps of 10% of the observed peak emission, starting from 90%; the white contours draw the same absolute levels for the model. The reference system and symbols are the same used in the previous figures. The synthesized ALMA beams, for the observed (red filled circle) and modeled (white circle) dust continuum maps, are shown in the bottom left corner.
Acknowledgements. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. A.S. gratefully acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG) Priority Program 1573. RK and AK acknowledge funding from the Emmy Noether research group on "Accretion Flows and Feedback in Realistic Models of Massive Star Formation" funded by the German Research Foundation under grant no. KU 2849/3-1. | 7,862.4 | 2018-05-24T00:00:00.000 | [
"Physics"
] |
Scattered high-energy synchrotron radiation at the KARA visible-light diagnostic beamline
High-energy scattered synchrotron radiation at the visible-light diagnostics beamline at KARA (Karlsruhe Research Accelerator) is presented, together with the consequences for the future design of beamlines and radiation shielding as accelerators, beamlines and measurement techniques develop.
Introduction
The measurement of scattered secondary radiation from a synchrotron, and its calculation, is central to radiation protection. Although the procedures are well established, changes to the machine, new beamlines and new experiments provide constant impetus to improve and develop methods for measuring and calculating the radiation. For example, small-scale pulsed plasma-based accelerator sources and their related technology (Nasse et al., 2013; Ghaith et al., 2021; Bernhard et al., 2018) are very different in the production mechanism, type of X-rays and energy range compared with the more traditional large-scale facilities (Einfeld et al., 2000).
It is important for the operation of such sources that the electron beam parameters responsible for the production of the light, and their control, are accurately known. Visible light is frequently used, and measurement techniques (Boland et al., 2012; Ikeda et al., 2022) are further developed along with the construction of dedicated beamlines (Breunlin & Anderson, 2016; Panas et al., 2021; Schiwietz et al., 2021). Such a beamline has been in use since 2011 (Hiller et al., 2011) on a bending magnet at the Karlsruhe Research Accelerator (KARA) and is being further developed (Patil et al., 2023).
The reflecting optics are very similar to those at other infrared (Zheshen et al., 2015) and UV beamlines (Bürck et al., 2015), consisting of a chicane of 90° reflections. Dose rates at such beamlines are not high, below 0.5 mSv h⁻¹. Surprisingly, a dose rate about two orders of magnitude higher than this value was found. Fortunately, this radiation could be easily shielded to an acceptable level using 300 µm of aluminium foil or 2 mm of Pyrex glass, which suggested a relatively low energy of the radiation. Although the dose could be measured, the sensitivity of the dosimeters differed, and uncertainty remained as to the exact nature, spatial distribution and energy distribution of the radiation; its origin was not clear. It was therefore decided to investigate further by direct measurement using fluorescence detection and by calculations. High-energy scattered synchrotron radiation was finally found to be responsible, resulting in strong copper K-shell fluorescence reaching the diagnostics on the other side of the wall. The resulting fluorescence spectrum is shown in Fig. 1 (red curve) together with a copper-foil-attenuated spectrum (green curve) showing the high-energy synchrotron radiation background, and the result of a calculation (purple curve) of the background, described in the Results and discussion section.
The beamline layout and setup are discussed in the next section. Results of the attenuation experiments, together with calculations of the radiation dose and background, are presented in Section 3 and compared with measurements. Finally, other typical scattering scenarios and the resulting scattered-radiation calculations are presented.
Experiment and method
Fig. 2 shows a schematic of the beamline layout, which is described in previous work (Hiller et al., 2011). Synchrotron radiation passes through a 20 mm horizontal aperture and is captured by a water-cooled plane mirror (diameter 5.9 cm, solid angle 1.94 × 10⁻⁵ sr), then deflected upwards by 90° through a 5 mm-thick UHV quartz window (diameter 6.5 cm) which isolates the vacuum and also acts as a filter, transmitting the visible light and blocking X-rays. The visible light is then deflected sideways horizontally through a hole in the ring wall to the beam diagnostics by a second mirror (diameter 7.0 cm, solid angle 1.38 × 10⁻² sr). The second mirror is a polished copper parabolic mirror with a thin 30 µm reflecting aluminium coating. An aluminium profile frame was constructed to hold various interchangeable detectors. The fluorescence and scattered photons were measured with an energy-dispersive silicon drift detector (SDD) sensitive to radiation in the 2-40 keV range with an energy resolution of 135 eV (KETEK GmbH, AXAS-M), and also with two non-dispersive detectors, a photodiode (Hamamatsu S3590-06) and a calibrated dosimeter (Berthold Technologies, Tol-F). The dosimeter measures the energy deposited in the detector by absorption of X-rays in an energy window from 10 keV to 7 MeV. The diode is very sensitive to visible and UV light, which has to be blocked by a thin 25 µm black Kapton foil filter. The illuminated area of the dosimeter was 2 cm × 4 cm, giving a solid angle of ≈ 2.5 × 10⁻⁵ sr. The SDD output was processed using a digital signal processor and associated software (XIA LLC).
Figure 1 Experimental results of summed raw (red) and copper-foil-attenuated (green) SDD output. The purple curve is the result of a calculation of the scattered synchrotron radiation from the second copper mirror, described in Section 3. Good agreement is seen between the calculated curve and the attenuated spectrum, identifying the high-energy portion as high-energy scattered synchrotron radiation.
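The quoted solid angles follow from the small-angle relation Ω ≈ A/r². A minimal sketch (Python) follows; note that the mirror distances below are not given in the text, they are back-solved from the quoted diameters and solid angles purely to illustrate the relation.

```python
import math

def solid_angle(diameter_m: float, distance_m: float) -> float:
    """Small-angle solid angle of a circular optic: Omega ≈ A / r^2."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return area / distance_m ** 2

# Distances are NOT given in the text; they are back-solved from the quoted
# diameters and solid angles, purely to illustrate the relation.
print(solid_angle(0.059, 11.9))   # first mirror  -> ~1.94e-5 sr
print(solid_angle(0.070, 0.53))   # second mirror -> ~1.38e-2 sr
```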
The diode, together with filter measurements, determined the extent of the radiation and showed that its effects would not be adverse for the fluorescence detector. Initial fluorescence measurements were made with a silver metal collimator used in normal measurements to reduce scattered-radiation contributions (Simon et al., 2003). In subsequent measurements this was omitted to allow faster acquisition, as no significant difference was observed in spectral shape or relative intensities. An all-plastic filter holder was placed directly in front of the detectors for the attenuation measurements, which used thin aluminium and copper metal foils, 20 and 45 µm thick, respectively. Care was taken to avoid metal components, additional scattering and subsequent contamination of the measurements.
Results and discussion
The exact nature of the radiation responsible for the high dose was unclear, and the initial dose measurements were inadequate and conflicting. The fluorescence measurements clearly show copper fluorescence but, due to the limited range of the silicon drift detector (2-40 keV), it was not clear whether a higher-energy component was present in the Tol-F dosimeter (10 keV to 7 MeV; Berthold Technologies) measurements, as the fluorescence lies below the low-energy limit of its measurement range. To clarify this, measurements of the radiation attenuated by foils of different thickness were made. If the copper fluorescence measurements followed the dosimeter readings, then it could reasonably be assumed that the measured radiation was the same and that no significant high-energy component was present. Fig. 3 shows the results of the attenuation measurements using the thin foils. Three measurements are shown: integrated copper fluorescence, Tol-F dosimeter and photodiode signals. The measurements have been normalized to the unattenuated signal. All measurements are in good agreement, following a similar exponential attenuation with thickness. The solid line for aluminium is calculated for an attenuation length of 80.5 µm, in good agreement with the theoretical value (80.3 µm). For copper foils the experimental points (fit 22.8 µm) are also in good agreement with the expected attenuation for the metal (22.6 µm).
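A minimal sketch (Python) of the exponential attenuation underlying the fits above; the attenuation lengths are the fitted values quoted in the text, and the foil thicknesses are those given in the experimental section.

```python
import math

def transmission(thickness_um: float, att_length_um: float) -> float:
    """Fraction of 8.05 keV Cu K fluorescence transmitted: exp(-t/lambda)."""
    return math.exp(-thickness_um / att_length_um)

# Fitted attenuation lengths and single-foil thicknesses from the text.
for foil, lam, t in (("Al", 80.5, 20.0), ("Cu", 22.8, 45.0)):
    print(f"one {t:.0f} um {foil} foil transmits {transmission(t, lam):.2f}")
# Al: exp(-20/80.5) ≈ 0.78 per foil; Cu: exp(-45/22.8) ≈ 0.14 per foil
```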
That, in all cases, the measurements closely follow each other confirms that the high dose is due to the copper K-shell fluorescence. If a high-energy radiation component were present then the dosimeter reading would remain higher, and conversely for the diode and a low-energy component. The Tol-F dosimeter was used for the measurements as its range is greater than that of the LB1236 (30 keV to 1.3 MeV; Berthold Technologies) and it is more sensitive, by two orders of magnitude. In addition, the Tol-F has an internal calibration source (Sr-90). The LB1236 is a proportional-counting dosimeter and the Tol-F an ionization-chamber/proportional-counter dosimeter. They also differ in the detector housing/shield: the Tol-F has a thin metal-coated plastic housing and the LB1236 an aluminium casing.
As the radiation is identified as copper K-shell fluorescence and not of a higher energy, it clearly originates from the copper mirrors of the visible-light port. Given that the power from 13 mrad of synchrotron radiation for 2.5 GeV electrons and 100 mA beam current is 98 W, and that the window transmission at the copper fluorescence energy (8.05 keV) is 4.2 × 10⁻¹⁷, it cannot come from the first mirror: even assuming a conversion efficiency of 100% gives a dose of only 1.5 × 10⁻⁵ mSv h⁻¹. The origin of the copper fluorescence and the high dose level of 55 mSv h⁻¹ (Tol-F) must be the second mirror, ionized by the higher-energy scattered radiation transmitted by the quartz vacuum isolation window, which has a cut-off at ≈ 10 keV.
That such radiation is present is easily seen from the high-energy background in the spectrum of the fluorescence (red curve) and in the attenuated measurement (green curve) of Fig. 1. Half of the synchrotron power is emitted above the critical energy of the synchrotron radiation (6.24 keV), approximately 50 W, close to the window cut-off. This substantial power is scattered by the first mirror, transmitted further by the UHV window, and ionizes the copper of the second mirror. The resulting fluorescence travels through air into the diagnostic hutch. To compare with the dosimeter reading, the scattering needs to be modelled. Such modelling and the calculation of the subsequent radiation dose is frequently carried out by Monte Carlo simulation [FLUKA (Ferrari et al., 2005), EGS5 (Hirayama et al., 2006), PENELOPE (Salvat & Fernandez-Varea, 2009), PHITS (Sato et al., 2018)]. Here, due to the setup and the energy of the synchrotron radiation, the scattering from the first mirror responsible for the fluorescence from the second is principally coherent (Rayleigh, Thomson) and not incoherent (Compton), and consequently the calculation can be simplified. For this it is useful to calculate the various quantities in terms of power. The relevant scattering cross sections, taken from Hubbell et al. (1980) and Chantler et al. (1997, 2005) (see also Santra, 2009), are shown in Fig. 4 as a function of energy.
For photon energies below 100 keV the dominant contribution is the photoelectric ionization cross section. This decreases rapidly with energy, while the non-ionizing contributions of the coherent and incoherent scattering cross sections increase. At the energies transmitted by the window, several tens of keV, these two contributions are weak but significant. At high energies the scattering is described by the Klein-Nishina formula, which is asymmetric (Santra, 2009). For the energies of interest here the form is much more Thomson-like, with a slight asymmetry. For 40 keV photons the fraction of scattered photons is ≈ 5% and the asymmetry of the cross section is 15%. As the copper mirror is polycrystalline, the scattering due to diffraction is averaged over angle.
The fraction of photons scattered from the mirror is the ratio of the differential scattering cross section, plus the higher-order multiple-scattering terms, to the total cross section: multiple scattering is described by a convolution of the scattering but, due to its small value, can be neglected apart from the first few terms. The result is shown in Fig. 4. The distance between the mirrors and their small size are such that the scattering angle can be considered constant and the subtended angle small. Using the energy-dependent scattering fraction and the synchrotron power, the power leaving the first mirror is obtained (scattered, Fig. 5). This, together with the transmission of the 5 mm quartz UHV window (quartz trans, Fig. 5), determines the transmitted power exiting the window and incident on the second mirror (transmitted, Fig. 5). It peaks at 25 keV due to the strong absorption of the window and the exponential decay of the synchrotron radiation power with photon energy [∝ (hν/E)^{3/2} exp(−hν/E)] above the critical energy E. The maximum is well above the threshold of the copper K-edge, the maximum in the ionization cross section. Whilst the energy is high, it is still in the region of a substantial coherent-scattering contribution (see above, and Fig. 4). The window acts as a low-energy cut-off, with a transmission of 4.2 × 10⁻¹⁷ at 8045 eV. Consequently, hardly any copper fluorescence is transmitted by the window. Low-energy silicon fluorescence from the window is also not seen, a consequence of air absorption, additional scattering and the detector geometry (see Fig. 6).
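A minimal sketch (Python) of why the transmitted power peaks where it does, assuming the spectral shape quoted above and a crude E⁻³ photoabsorption model for the quartz window anchored to the quoted transmission of 4.2 × 10⁻¹⁷ at 8.045 keV. The energy-dependent scattering fraction is omitted for simplicity, so this is illustrative only, not the paper's calculation.

```python
import math

E_c = 6.24                       # keV, critical energy quoted for KARA
# Quartz window optical depth anchored at the quoted transmission of
# 4.2e-17 at 8.045 keV; the E^-3 scaling is the usual photoabsorption
# approximation and only a rough model of the real 5 mm window.
mu_t_ref = -math.log(4.2e-17)    # ~37.7 at 8.045 keV

def power_density(E):            # synchrotron spectral shape (arb. units)
    return (E / E_c) ** 1.5 * math.exp(-E / E_c)

def window_transmission(E):
    return math.exp(-mu_t_ref * (8.045 / E) ** 3)

energies = [5.0 + 0.5 * i for i in range(91)]          # 5..50 keV grid
transmitted = [power_density(E) * window_transmission(E) for E in energies]
peak = energies[transmitted.index(max(transmitted))]
# Peaks in the 25-30 keV region; the text quotes ~25 keV (the difference
# reflects the crude window model and the omitted scattering fraction).
print(f"transmitted power peaks near {peak:.1f} keV")
```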
Figure 4 The photoelectric, coherent (Chrn) and Compton (Inch) scattering cross sections for copper (Hubbell et al., 1980; Chantler et al., 1997, 2005). The fraction of coherent and incoherent scattering of the total cross section is plotted on the right-hand axis.
In addition to blocking a substantial portion of the synchrotron radiation, the quartz window blocks high-energy electrons and secondary emission, other important ionization contributions. To calculate the dose from the copper fluorescence from the second mirror, the ionization and subsequent production of the fluorescence radiation need to be addressed. This requires an integration over depth of the ionization by the exponentially attenuated incoming photons and the likewise attenuated escape of the fluorescence. The integration gives an effective 'escape' depth, which is plotted in Fig. 5. It is energy dependent due to the varying penetration depth of the photons with energy. For high energies the attenuation length of the copper K-shell fluorescence limits the 'escape' depth, and below the K-shell ionization energy it has no meaning. Using the photon-energy-dependent copper K-shell ionization cross section, 'escape' depth, fluorescence yield and power incident on the second mirror (transmitted) gives the power of the copper fluorescence as a function of photon energy, plotted (fluorescence) in Fig. 5. Numerically integrating, with the above solid angles and taking account of absorption along the air path, gives a value of 5.5 mSv h⁻¹, in fair agreement with the measured 55 mSv h⁻¹ (Tol-F). Attenuation by the thin 30 µm aluminium coating plays little role at the higher copper photon energy and transmits 75% of the fluorescence. The above calculation can be simplified as there is only a small contribution of Compton scattering in the energy range of the scattered photons: if the maximum energy loss for the scattered photons is assumed, the result differs by 5%, demonstrating that there is little Compton scattering present for the scattered photons incident on the second mirror.
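For readers who want the depth integration spelled out, a minimal statement of it is the following (our notation, not the paper's: µ_in(E) is the linear attenuation coefficient of the incoming photons, µ_f that of the Cu K fluorescence, σ_K the K-shell ionization cross section and ω_K the fluorescence yield; normal incidence and geometry factors are absorbed into the proportionality):

```latex
P_{\mathrm{fl}}(E) \;\propto\; P_{\mathrm{in}}(E)\,\sigma_K(E)\,\omega_K
\int_0^{\infty} e^{-\mu_{\mathrm{in}}(E)\,x}\,e^{-\mu_f\,x}\,\mathrm{d}x
\;=\; P_{\mathrm{in}}(E)\,\sigma_K(E)\,\omega_K\,
\frac{1}{\mu_{\mathrm{in}}(E)+\mu_f}
```

Here 1/[µ_in(E) + µ_f] plays the role of the energy-dependent 'escape' depth: for weakly absorbed high-energy photons (µ_in → 0) it is limited by the fluorescence attenuation length 1/µ_f, exactly as stated above.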
Inspecting the spectrum in Fig. 1 more closely, weak peaks due to additional fluorescence from lead, iron and bismuth are also seen. These peaks probably also arise from ionization by the high-energy scattered synchrotron radiation: the mirror has a lead housing shield, and there are other items made of steel in the immediate vicinity. The calculation of these contributions, however, is ill-defined and much more difficult. The remaining high-energy background from the second mirror can, though, be reasonably modelled. As for the first mirror, the coherent and incoherent scattering fraction is used for the scattered photons, but in addition the detector response is needed. The result is the purple solid curve in Fig. 1 and is in good agreement with the unattenuated and attenuated spectra. In addition to the above calculations, simulations using FLUKA were also performed. These calculations (Batchelor et al., 2022) are time consuming, and the statistics behind the second mirror and the shielding wall allowed only a dose estimate in the 10 mSv h⁻¹ region.
The fractions of coherent and incoherent scattering of the total cross section for silicon, copper, quartz and lead are plotted in Fig. 6 against photon energy. They are similar to within an order of magnitude for the different materials, and amount to several percent at energies of tens of keV. Using these fractions and the power of the KARA synchrotron ring, the power for a single 'reflection' can be calculated and is plotted in Fig. 7. Considerable power is available to ionize above the absorption edges visible in the plot and to produce high-energy fluorescence that escapes the material with little attenuation. Silicon has the lowest fluorescence energy, which is easily attenuated by an air path (see Fig. 6). The attenuation length of 20 keV photons (the rough maximum in Fig. 7) in a high atomic number material is only tens of micrometres, due to the high photoelectric cross section.
Similarly, the coherent scattering cross section is larger for high atomic number materials than for low atomic number ones. Consequently, the radiation is readily 'reflected' and escapes, or ionizes, producing high-energy fluorescence. For silicon the attenuation length is a couple of millimetres, so the material is penetrated further, and ionization results in much lower-energy fluorescence which is more easily absorbed/attenuated. Glass and silicon with a thin metallic coating are typically used for reflective optics, as extensive experience exists in grinding/machining them to the desired shape, polishing and covering with a thin metallic reflecting layer. In addition, silicon has reasonably good thermal conductivity. The above shows that silicon is a better choice for reflecting optics at synchrotrons than the copper used here.
For the optical components of the visible-light port beamline and the energies considered here, coherent scattering dominates and higher-energy Compton scattering, with its associated energy loss, plays a much less significant role, allowing a simplification of the calculation. In addition, the scattered radiation only produces copper K-shell fluorescence of moderate energy, in contrast to higher atomic number materials where the higher-energy edges and larger number of decay processes result in higher-energy fluorescence and scattered radiation, allowing a build-up of secondary radiation (Asano & Sasamoto, 1994). Whilst low atomic number material avoids this, it is impractical, and a composite of different atomic number materials leads to more efficient shielding (Hirayama & Shin, 1998).
Conclusions
The above experimental results show unequivocally that the higher radiation dose is dominated by the coherently scattered synchrotron background, absorption of which results in copper K-shell fluorescence emission from the second mirror. Calculations are able to confirm this both quantitatively (dose) and qualitatively (high photon energy background).
Figure 7 Power of the coherently/incoherently scattered radiation for a single 'reflection' (product of the KARA power and the fractions from Figs. 5 and 6) for the different materials. The blue dashed line is the KARA power and is intended as a guide (arbitrarily scaled).
Figure 6
Figure 6 Fraction of the coherent and incoherent cross section of the total for silicon, quartz, copper and lead as a function of photon energy. The attenuation length of the photons in air is also plotted on the right-hand axis.
The relatively high dose can be avoided by using silicon or aluminium as the mirror material. Finally, it is instructive to consider other scenarios. Whilst here a quartz window separates the two mirrors and optical components, such a window is not present in X-ray and ultraviolet beamlines, and avoiding a relatively high dose emanating from a beamline then relies on additional 'reflection(s)', additional shielding and/or a favourable geometry.
Figure 2
Figure 2 Schematic of the visible-light diagnostic port showing the hole in the radiation wall and the setup of the fluorescence detector, diode, dosimeter and attenuation foils (side and top views).
Figure 3
Figure 3 Attenuation of the copper K-shell fluorescence (SDD), diode and dosimeter (Tol-F) signals with different thicknesses of aluminium and copper foils. The symbols have been shifted by 1 µm for clarity.
Figure 5
Figure 5 Power from the synchrotron (KARA), scattered by the first mirror (scattered), transmitted by the UHV isolation window (transmitted), and the resulting copper fluorescence from the second mirror (fluorescence). The transmission of the quartz UHV window (quartz trans) and the 'escape' depth are plotted on the right-hand axis. Below the copper fluorescence energy of 8.05 keV the 'escape' depth has no meaning.
| 4,414.2 | 2024-03-26T00:00:00.000 | [ "Physics" ] |
Biopolymer-Based Hybrids as Effective Admixtures for Cement Composites
In the framework of this publication, silica-lignin hybrid materials were designed, obtained, characterized and then used as admixtures for cement composites. High-energy mechanical grinding of the individual components was used to produce the systems, which ensured adequate homogeneity of the final products. Fourier transform infrared spectroscopy confirmed that weak physical interactions occur between the components, which allows the resulting systems to be classified as class I hybrid materials. In addition, the effectiveness of the preparation of the final products was inferred from the porous structure parameters and colorimetric data obtained. The bio-admixtures, with different weight ratios of silica to lignin, were added to cement pastes in amounts ranging from 0.5 to 1 wt.%. The study showed that increasing the ratio of lignin in the admixture from 0.15 to 1 wt.% had a positive effect on the rheological properties of the pastes, while the mechanical properties of the composite deteriorated. In turn, a higher amount of silica in the admixture acted in reverse. The most favorable results were obtained for a silica-lignin bio-admixture with a component weight ratio of 5:1 wt./wt.: a significant increase in compressive strength was gained at satisfactory plasticity of the paste.
Introduction
Biopolymers were already used as admixtures in construction materials in ancient times. The first reports regarding the use of vegetable oils in lime mortars appeared in the works of Vitruvius. The Romans also used other bio-admixtures, such as dried blood to aerate building materials or proteins as gypsum binding regulators. In turn, the Chinese used bio-admixtures such as proteins, egg whites, fish oils or blood to modify the mortar used during the construction of the Great Wall [1,2]. The durability of the buildings founded at that time, some of which have survived to this day, indicates the validity of the use of biopolymers in present-day construction materials technology. This is also important because of the intensified search for biodegradable materials from natural sources as an alternative to polymers traditionally synthesized from petroleum products. Although the production of bio-admixtures is still relatively low compared to petroleum-based ones, their use is expected to increase significantly soon. This is related to the global trend of sustainable development and the depletion of natural resources, and it is also of ecological importance, related to the need to look for such biodegradable alternatives.
In this study, the main aim of the research was to check how the composition of the bio-admixture (the ratio of silica to lignin) affects the physical and mechanical properties of the cement composite. The main role of silica is strengthening of the cement paste, but silica very often aggregates and agglomerates, and hence some of its properties are lost. The combination of silica and lignin should counteract this phenomenon. For this purpose, a wide-ranging characterization of the dispersive and morphological properties of the bio-admixture itself was carried out, and its effect on the rheological and mechanical properties of the cement paste was assessed.
Materials
Syloid 244 was used in the present study. This is a fine-particle-size, synthetic amorphous silica produced by W.R. Grace and Company (Columbia, MD, USA). In addition, kraft lignin (Sigma-Aldrich, Steinheim am Albuch, Germany) with an average molecular weight of Mw ≈ 10,000 was used. The cement that served as the basis of the study was Portland cement CEM I 42.5R (Górażdże Cement SA, Górażdże, Poland), which included Portland clinker (95%) as the main component and a setting time regulator (up to 5%).
Preparation of Silica-Lignin Hybrids
In order to obtain class I hybrid materials, a method of mechanical grinding of the components was used. We have already employed this method in our previous publications [29-32]. The method involves mechanical grinding of the precursors, with simultaneous mixing, using (i) an RM100 mortar grinder (Retsch GmbH, Haan, Germany) and (ii) a Pulverisette 6 Classic Line ball mill (Fritsch GmbH, Idar-Oberstein, Germany). Their combined application allows final products of adequate homogeneity to be obtained. In this work, silica-lignin hybrid systems with the following weight ratios of the inorganic to the organic part were prepared: 5:1, 2:1, 1:1, 1:2 and 1:5.
Physicochemical and Dispersive-Microstructural Characteristics of Inorganic-Organic Hybrids
The Mastersizer 2000 analyzer (Malvern Instruments Ltd., Malvern, UK) was used to determine the particle size and dispersion data. The apparatus determines particle sizes in the range of 0.2-2000 µm using the laser diffraction method.
Surface morphology, as well as the shape and size of individual particles, were examined using scanning electron microscopy (SEM). An EVO40 microscope (Zeiss AG, Jena, Germany) was applied for the research. Prior to imaging the surface of the material, the sample was coated with gold using a Balzers PV205P apparatus (Oerlikon Balzers Coating SA, Biel, Switzerland).
In order to confirm the effectiveness of the preparation of the hybrid materials, Fourier transform infrared spectroscopy (FTIR) was applied, covering the infrared range of 4000-400 cm⁻¹. The spectra were obtained using a Vertex 70 spectrometer (Bruker Optik GmbH, Ettlingen, Germany). This instrument is a fully digital spectrometer that offers a wide spectral range (up to 25,000 cm⁻¹) and a resolution better than 0.5 cm⁻¹.
A Specbos 4000 colorimeter (JETI Technische Instrumente GmbH, Jena, Germany) was used to measure the color of the materials. This apparatus determines the basic colors in the 0°/45° measurement geometry and differentiates the tested powder materials in terms of even slight discrepancies in shade. The CIE L*a*b* color system was employed in the study.
Preparation of Cement Pastes
In order to obtain the cement paste, the following components were weighed and mixed: 150 g of Portland cement CEM I 42.5R, 75 cm³ of distilled water and 0.75 g of admixture (for samples containing 0.5 wt.% admixture) or 1.5 g of admixture (for samples containing 1 wt.% admixture).
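A minimal sketch (Python) of the batch arithmetic: the water-to-cement ratio of 0.5 is computed from the quoted 75 cm³ of water per 150 g of cement, and the admixture is dosed by weight of cement, as described above.

```python
def paste_recipe(cement_g: float = 150.0, w_c: float = 0.5,
                 admixture_wt_pct: float = 0.5) -> dict:
    """Component masses for one batch; admixture dosed by weight of cement."""
    return {
        "cement_g": cement_g,
        "water_cm3": cement_g * w_c,                    # 75 cm^3 for 150 g cement
        "admixture_g": cement_g * admixture_wt_pct / 100.0,
    }

print(paste_recipe(admixture_wt_pct=0.5))   # -> 0.75 g admixture
print(paste_recipe(admixture_wt_pct=1.0))   # -> 1.50 g admixture
```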
The cement was placed directly in the stirrer cuvette, while the appropriate amount of admixture was first dissolved in distilled water and then poured into the cuvette. All prepared ingredients were subjected to vacuum mixing in an automatic mixer (Renfert, Hilzingen, Germany) for 2 min at 450 rpm. The prepared paste was tested on a shaking table and then placed in standardized containers with a release agent applied, to obtain the cylinders necessary for the strength tests. After 24 h, the samples were demoulded and allowed to cure in water for 28 days. A schematic diagram of the preparation of the cement paste samples is shown in Figure 1.
Mixture Consistency
The test consists of determining the flow diameter of a paste sample on a shaking table. The test paste was placed in a mold in layers, each layer being compacted with strokes of a tamper. The mold was then lifted vertically and the sample was shaken by turning the crank 15 times. Immediately after shaking, two perpendicular diameters of the spread paste cake were measured. The diameter of the spread cake is the measure of consistency.
Compressive Strength Test
The cylinder sample strength tests were carried out using an Instron testing machine (Satec, Norwood, MA, USA). This is a static machine designed for testing samples under compression, bending and tension.
The test was carried out according to PN-EN 196-1:2006 "Cement testing methods. Part 1: Determination of strength". The test cylinders had standardized dimensions of 2 cm × 2 cm. Each sample was placed between two compression plates that tightly clamped its central part. The load was increased evenly and recorded until the sample was crushed. The compressive strength test was carried out on samples after 28 days of curing.
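The strength value follows from dividing the failure load by the cylinder cross-section. A minimal sketch (Python), assuming the 2 cm sample diameter quoted above; the 12.6 kN failure load in the usage line is purely illustrative, not a measured value.

```python
import math

def compressive_strength_mpa(max_load_kn: float, diameter_cm: float = 2.0) -> float:
    """Compressive strength = failure load / cross-sectional area."""
    area_mm2 = math.pi * (diameter_cm * 10.0 / 2.0) ** 2   # d = 20 mm -> ~314 mm^2
    return max_load_kn * 1000.0 / area_mm2                  # N / mm^2 = MPa

# Illustrative failure load only; the paper reports strengths in Figure 7.
print(f"{compressive_strength_mpa(12.6):.1f} MPa")          # -> ~40 MPa
```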
Microstructural Analysis
The surface morphology and structure of the solid samples were assessed using an EVO40 scanning electron microscope on a natural fracture surface obtained after mechanical testing. As with the powder samples, the surface was first coated with gold.
Results and Discussion
SEM images of the pristine components show that the silica is characterized by particles in the nanometric range. However, they exhibit a significant tendency to form aggregates (<1 µm) and agglomerates (>1 µm). In turn, the lignin particles are larger than the silica ones, as evidenced by the presence of large particles reaching sizes of even 6-7 µm. Dispersion data obtained using the Mastersizer 2000 apparatus, indicating that the average particle size of the lignin sample is 6.5 µm (see Table 1), confirm these conclusions. Based on the same table, it can be concluded that the silica has an average particle size of 1.9 µm. This clearly confirms that, despite the presence of particles below 100 nm in the SiO2 structure, their tendency to form larger clusters is significant. In addition, SEM images of the obtained silica-lignin hybrid materials are presented in Figure 3. On the basis of the microphotographs, it can be concluded that the products exhibit an increased tendency to form larger structures as the lignin content in the hybrid increases. This is particularly evident in the case of the hybrid material SiO2-lignin 1:5 wt./wt. These conclusions are confirmed by the dispersion data presented in Table 1, which show that the average particle size for the hybrid systems is in the range of 2.3-4.8 µm, with the highest value achieved for the silica-lignin system with a component weight ratio of 1:5.
In our previous papers, we also presented the most important parameters of the thermal stability and porous structure of selected silica-lignin hybrids and the pure components [33-35]. The highest BET surface area was observed for pristine silica Syloid 244 (ABET = 262 m²/g). In turn, pristine lignin possesses a very small BET surface area of 1 m²/g. The biopolymer is also characterized by a total pore volume of 0.01 cm³/g and an average pore size of 12.1 nm. Based on the data presented in [33,35], it can be concluded that the BET surface area values decrease as the ratio of lignin in the hybrid systems increases. The situation is similar in the case of the total pore volume.
Table 1 footnotes: * d(0.1) - 10% of the volume distribution is below this diameter; ** d(0.5) - 50% of the volume distribution is below this diameter; *** d(0.9) - 90% of the volume distribution is below this diameter; **** D[4.3] - average particle size of the examined system.
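The D[4.3] quantity in the Table 1 footnote is the volume-weighted (De Brouckere) mean diameter. A minimal sketch (Python) with an illustrative, not measured, bimodal distribution shows how a few large agglomerates dominate this average, consistent with the aggregation tendency described above.

```python
def d43(diameters_um, counts):
    """Volume-weighted mean diameter: D[4,3] = sum(n d^4) / sum(n d^3)."""
    num = sum(n * d ** 4 for d, n in zip(diameters_um, counts))
    den = sum(n * d ** 3 for d, n in zip(diameters_um, counts))
    return num / den

# Illustrative bimodal distribution only -- not the measured data:
# many ~0.1 um primary particles plus a few ~5 um agglomerates.
print(f"D[4,3] = {d43([0.1, 5.0], [10000, 10]):.2f} um")   # -> ~4.96 um
```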
Fourier Transform Infrared Spectroscopy
The absorption maxima and the corresponding vibrational assignments, obtained from Fourier transform infrared spectroscopy, are presented in Table 2. The obtained data confirm the identification of the relevant components (silica and lignin), as well as the effectiveness of their combination.
For pristine silica, a band with a maximum at 3450 cm⁻¹ was recorded, derived from the stretching vibrations of O-H bonds characteristic of physically bound water. In addition, subsequent bands reveal vibrations typical for silica: the Si-O group (νs: 1200 cm⁻¹; νas: 1102 cm⁻¹), the Si-OH group (νas: 992 cm⁻¹; νs: 865 cm⁻¹) and the Si-O group (δ: 512 cm⁻¹), where νs and νas denote symmetric and asymmetric stretching vibrations and δ bending vibrations, respectively. The obtained absorption maxima are consistent with the silica spectra included in previous publications [33,36-38].
The bands characteristic for pristine lignin were likewise identified. The presence of all these bands has also been confirmed by us in other publications, in the form of the corresponding spectra [33,36,39].
In order to confirm the effectiveness of the preparation of the silica-lignin hybrid materials, the absorption maxima of the individual bands were determined and listed in Table 2. On the basis of the obtained data, small shifts of the maxima of individual bands can be clearly seen, indicating the formation of physical interactions between the components in the form of hydrogen bonds. This type of connection is characteristic for class I hybrid systems and is sufficient when applying such products as potential cement admixtures. Similar conclusions have already been established for appropriately designed hybrid systems such as MgO-lignin [29], ZnO-lignin [30] and Al2O3-lignin [32].
Table 2. Vibrational frequency wavenumbers (cm⁻¹) for silica, lignin and silica-lignin hybrid materials with different weight ratios of the components used.
Colorimetric Analysis
As part of the study, the color characteristics of the obtained hybrid materials and the pristine components were also determined (see Figure 4). Colorimetric analysis using the CIE L*a*b* color space system was performed to determine the lightness, the proportion of individual colors, saturation and hue. This type of analysis gains practical significance in applied research in which the color of the material plays an important role. Pristine silica was used as the reference standard in the analysis. The Syloid 244 used in the study is characterized by high lightness, with an L* parameter equal to 93.2.
In turn, lignin is a dark brown solid, for which the L* parameter reaches a value of 41.2. For the parameters a* and b*, which are responsible for the proportions of red and yellow in the lignin sample, values of 10.1 (parameter a*) and 25.9 (parameter b*) were observed. The dE variable is also an important parameter in colorimetric analysis, as it determines the total color change; it is equal to 44.3 for the biopolymer used.
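For reference, dE here is presumably the CIE76 total color difference; a minimal sketch (Python) follows. Since the a* and b* values of the reference are not given in the text, the reference triple below is hypothetical and the printed result is illustrative only, not the paper's dE = 44.3.

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 total color difference between two L*a*b* triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

lignin = (41.2, 10.1, 25.9)     # L*, a*, b* quoted in the text
reference = (85.0, 2.0, 5.0)    # hypothetical near-white reference triple
print(f"dE = {delta_e_cie76(lignin, reference):.1f}")
```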
Hybrid materials obtained on the basis of Syloid 244 silica are characterized by a progressive decrease in lightness (the L* parameter) as the biopolymer content of the tested samples increases (see Figure 4). For the SiO2-lignin hybrid system (5:1 wt./wt.), a value of L* = 81.7 was noted, while the increase in the ratio of lignin in the hybrid (SiO2-lignin 1:5 wt./wt.) lowered the value of the L* parameter to 54.3.
Based on the analysis of the colorimetric data, a progressive increase in the value of the parameter determining the total color change (dE) was also observed. The highest value of this parameter, equal to 40.3, was achieved for the SiO2-lignin 1:5 wt./wt. product. Analysis of the attached digital photos (see Figure 4) also allows the differences in color between the individual systems to be observed. Colorimetric analysis gave satisfactory results for all types of hybrid materials, thus indirectly confirming the effectiveness of the proposed research methodology.
Slump Test Results
In order to determine the consistency of the cement paste, the slump test was performed on a shaking table.
The spread for the reference sample, without any admixture, is equal to 19 cm. Results for pastes containing 0.5 wt.% of admixture are presented in Figure 5a and those for samples containing 1 wt.% of admixture in Figure 5b, while digital images of the spread tests are shown in Figure 6.
Accordingly, the slump diameter for the SiO2 admixture at both 0.5 and 1 wt.% is equal to 18.5 cm, while for lignin it is equal to 20 and 21 cm, respectively. In the case of samples containing admixtures of the hybrid materials, a beneficial effect of lignin on the increase in paste plasticity was observed. The SiO2-lignin 1:5 wt./wt. hybrid system is therefore characterized by the largest flow diameter: for samples with 0.5 wt.% admixture it is 20 cm, and for 1 wt.% it is 20.5 cm.
The authors of publications [40-43] also observed an increase in the plasticity of mixtures with the addition of lignin or its derivatives. Kalliola et al. [40] reported a beneficial effect of an admixture of oxidized lignin on the flow of pastes, mortars and concretes. Moreover, their tests confirmed that this admixture had no negative impact on the compressive strength of the mature concrete or on the aeration of the mix. In their study, Li et al. [41] obtained lignin from pinewood and then subjected it to a series of solvent modifications to obtain a lignin-based water reducer. The authors showed the influence of the sulfonation degree of the obtained lignin on the workability of the pastes, including their plasticity. Arel [42] used lignosulfonate, a derivative of lignin, in his research, preparing two series of samples with admixtures of 0.4 and 0.8 wt.% for two types of cement. He confirmed that the setting time, compressive strength and water reduction values increased together with the amount of lignosulfonate in the admixture. In their work, Klapiszewski et al. [43] showed that lignin had a more favorable effect than lignosulfonate on the mechanical properties of cement mortars. Increased plasticity was noted for both lignin and lignosulfonate samples.
Compressive Strength Properties with the Silica, Lignin and Silica-Lignin Hybrid Materials
A comparison of the compressive strength results for pure cement paste and for pastes with silica, lignin and silica-lignin hybrids at different weight ratios is presented in Figure 7. The admixtures were added to the cement paste in amounts of 0.5 and 1 wt.% in relation to the amount of cement. The lowest compressive strength was obtained for the sample with the addition of lignin alone; the results were slightly lower than the reference values obtained for pure paste without admixtures. With an increase in the amount of lignin in the composite, the compressive strength decreased, which is probably related to greater aeration of the cement paste. Similar relationships have been reported by other researchers for lignosulfonates, hence the maximum amount of such admixtures in cement composites is usually up to 0.5 wt.% in relation to cement [2,43,44]. As the bio-admixture in the system was replaced with silica, an increase in compressive strength was observed for both the 0.5 and 1 wt.% samples. A noticeable increase in strength was observed for the silica-lignin system 1:1 wt./wt., while the highest compressive strength values were noted for cement paste modified with the SiO2-lignin system (5:1 wt./wt.). Comparing the results for cement paste with pristine silica and with lignin, it can be observed that the combination of both components into one bio-based admixture has a positive effect on both the rheological and the mechanical properties of the cement paste, despite the significantly lower BET surface area of the hybrid system [33,35]. The combination of both materials has a very positive effect: an increase in the strength of the composite, at a lower silica addition, of almost 40% compared to the reference sample and on average 11-20% compared to the lignin-free sample with silica. This confirms the dispersive properties of lignin and its beneficial effect on the physical and mechanical properties of the cement paste. In the case of the silica-lignin system, a better distribution of the silica particles in the composite was obtained, which was confirmed by the plasticity tests of the cement composites presented in Figure 5. Herein, the spreads for the silica-lignin system (5:1 wt./wt.) were higher than those obtained for the reference sample and with pristine silica. Selected digital photos of samples after mechanical testing are shown in Figure 8.
SEM Analysis of Cement Composite with the Silica, Lignin and Silica-Lignin Hybrid Materials
Microstructures of cement pastes with silica, lignin and silica-lignin systems as well as selected microstructures in magnification are shown in Figures 9 and 10, respectively.
The SEM image of the pure paste is typical for binders consisting of cement clinker. In the microstructure presented in Figure 10, a compact binder structure can be observed with the characteristic hydration products of cement clinker: hydrated calcium silicates (the so-called CSH phase), portlandite plates and elongated, bar-like ettringite [45,46]. When more than 0.25 wt.% of lignin is added to the cement paste (Figure 9c,f-h), small air bubbles are visible in the microstructure, evenly distributed throughout the binder. This results in a decrease in the compressive strength of the composite. With a higher ratio of silica in the hybrid system, from the 1:1 wt./wt. composition to 5:1 wt./wt., and for the paste with pristine silica itself, the amount of fine air bubbles is much lower (Figure 9b,d,e). The presence of higher amounts of silica, especially in the bio-admixture composition with a weight ratio of 5:1 wt./wt., contributes to the densification of the microstructure, which in turn results in significant compressive strength gains. Figure 10b,c show enlarged microstructures of the cement pastes with the silica-lignin bio-admixture (5:1 wt./wt. and 1:5 wt./wt.), marked with arrows. For the cement paste with the higher amount of silica, the SEM image shows silica particles evenly distributed throughout the whole volume of the paste, at distances ranging from 1 to several micrometers. For the 1:5 wt./wt. silica-lignin hybrid, the distances between adjacent smaller particles of the bio-admixture are much greater, which indicates a worse dispersion in the paste.
Conclusions
An effective method of obtaining silica-lignin hybrid materials using the method of mechanical grinding of components was proposed. Weak physical interactions in the form of hydrogen bonds occurred between the components. This qualifies the resulting materials as a first class hybrid. In addition, based on dispersive-microstructural analysis, it was concluded that the silica-lignin hybrid materials possess primary particles in their structure that tend to aggregate and agglomerate. This process is more intense with the higher ratio of the biopolymer in the hybrid. In addition, determined properties of surface chemistry and values of porous structure parameters, as well as colorimetric data indirectly confirm the validity and effectiveness of the proposed method of obtaining hybrid systems.
It was established that the admixture based on the biopolymer and silica has a positive effect on rheological and strength properties of the cement paste. A significant improvement of compressive strength, by nearly 40% compared to the reference paste and 10%-20% compared to the paste with silica alone, was obtained for a silica-lignin bio-admixture at a 5:1 wt./wt. ratio. The combination of lignin and silica enabled good dispersion of silica in the cement paste which resulted in an improvement of the mechanical properties of the composite at a lower silica-lignin ratio. The paste with the admixture of this composition was also characterized by good microstructure density and low pore content. The admixtures with lignin content above 0.25 wt.% aerated the cement paste much more, which resulted in a decrease in compressive strength. In this case, the decrease in strength was also related to the lower ratio of silica in the admixture.
Summing up the research, it can be concluded that the proper design of the silica-lignin admixture composition leads to a product with specific properties: improved mechanical properties due to the presence of silica in the admixture and dispersing properties due to the presence of lignin.
| 9,053.8 | 2020-05-01T00:00:00.000 | [
"Materials Science"
] |
Global Surgery Priorities: A Response to Recent Commentaries
We welcome the five published responses [1][2][3][4][5] to our editorial, 6 which outlined a research agenda for making surgery accessible in low- and middle-income country settings, where it is most needed. The commentators represent a good mix of academics, researchers and advocacy specialists, which demonstrates the growing global commitment to working together in the 'empirically evolving global surgery systems science.' 3 There is considerable consensus in the messages, including the importance of collaborative research approaches, adapted to country contexts; a focus on district population needs; and the use of standardised routine data collection and evaluation methods. Here, we briefly touch on some important new perspectives and some diverging ones.
Peck and Hannah helpfully emphasise the importance of Participatory Action Research (PAR) as a tool to gather and involve all relevant stakeholders in all stages of the research process. 7 This is an important shift from the historical imposition of research agendas on local actors by researchers and research institutions from high-income countries. 8,9 Our surgical systems strengthening research experience has shown the importance of working with national societies of surgical practitioners and representatives of Ministries of Health (MoHs) and Local Government in developing and implementing the research agenda. This is an essential step towards ensuring local relevance, structural support for effective scale-up of any health systems strengthening intervention, and long-term sustainability. It is local MoHs' mandate and responsibility to improve health services in their countries. However, PAR involves challenges. Involvement of too many stakeholders with different interests and agendas can slow down implementation and disrupt or even derail an agreed research programme.
The commentaries diverge in the proposed approaches to coordinating the global surgery agenda in the pursuit of universal access to surgical, obstetric and anaesthesia care. Makasa 1 and Henry 2 suggest a globally coordinated approach, with international organisations defining directions for implementers to follow. While we acknowledge the importance of global coordination - uncoordinated parallel initiatives can overwhelm countries, from national to district level 10 - we propose a demand-driven approach, enabling local stakeholders to identify gaps and optimal solutions for the delivery of surgical services. In place of global coordination mechanisms, new initiatives need to place 'southern' partners (ministries) in the driving seat. Multiple global partners share responsibilities to ensure global coordination, which is being advanced at international conferences and meetings that bring together best practices from peer-reviewed published research and opportunities for cross-country learning. Most importantly, global initiatives need to ensure local relevance through engagement with country-level stakeholders, who are in a position to lead on national and sub-national implementation of agreed activities. To achieve maximum impact, such interventions need to be designed and implemented as multi-country projects, to ensure cross-country learning, and use appropriate PAR methods 3 and fit-for-purpose standardised tools and metrics. 2,3 Scale-up, a term widely used in health systems strengthening, needs to become the end goal for any systems intervention, thus it needs an explicit methodology and dedicated resources. While not always available, 5 external funding can help kickstart initial in-country interest, but must be accompanied by coordination between global and local actors, supporting and not taking over national government leadership of scale-up. Otherwise, external funding dependence at best results in one-off successes, and at worst in disengagement of local parties. Engagement of stakeholders starts with the recognition of local authorities, key players, champions and other 'doers,' committed to solving a given problem. Local participation in agreeing research and implementation methods and jointly gathering, analysing and interpreting findings is of equal importance. As Katz et al state: "research and policies that are not grounded in implementation science may be limited to academic exercises." Also, interventions designed without an explicit scale-up strategy may not be sustained or replicated elsewhere, and therefore may not be incorporated into local policies. Scale-up strategies need to be embedded in the original project concept, which is why we reiterate the importance of country leadership. In Scaling up Safe Surgery for Rural populations (SURG-Africa - http://www.surgafrica.eu) we are seeking to adhere to these principles.
In the COST-Africa project (2011-2016) we developed a BSc course in general surgery for non-physician clinicians in Malawi. The team, comprising international researchers and national surgeons, drafted the training curriculum, embedded the course into the structures of the University of Malawi's College of Medicine, and coordinated implementation and evaluation. 11 The key element that led to sustainability was the leadership of the national surgeons, who had strong links with the national MoH. While the European Union funded project offered scholarships to the first cohort of trainees, two subsequent national cohorts are now being trained in one of Africa's poorest countries, without external financial support.
Routine surgical data collection and monitoring systems are needed to assess surgical needs; monitor surgical outputs and identify gaps; define interventions and monitor implementation; and evaluate impact. Internationally comparable, national level data help in advocating for funds to bridge the gap between well- and less well-endowed countries, 2 with a view to enhancing equitable access to surgical care. Indicators should be standardised, where possible, to facilitate cross-country comparisons and lesson-learning. However, the selection of data collection tools needs to be determined by the objectives, purpose and intended frequency of data collection. Henry proposes using the WHO SARA tool 12 to collect multi-country facility-based data on surgical capacity (service availability and readiness). This makes sense if the purpose is to inform government plans for investing in surgical capacity.
In SURG-Africa, we reviewed a range of published surgical capacity assessment tools that are used internationally. We concluded that the most appropriate instrument in terms of ease of administration, time spent in the field, complexity of data processing and analysis, and feasibility of repeat use, was the Personnel, Infrastructure, Procedures, Equipment and Supplies tool, developed by Groen et al and used in several settings internationally. 13 Its strength lies in its ability to produce a simple summary index score expressing surgical capacity in a numerical way, which allows comparisons over time and between countries. While the SARA tool also allows for cross-country and secular comparisons, it is more complex, not surgery-specific, and it requires more resources to administer, process and analyse the data -all of which militate against its periodic use for intervention monitoring. Debates like these, enabled by the IJHPM, help throw light on aspects of methodology. 3 All commentators agree on the importance of district level facilities as a focus of global surgery. This is where the most common surgical cases including the bellwether procedures should be treated, as a step towards universal access to surgical care, while protecting higher-level referral hospitals from congestion. District surgery must have a prominent place in National Surgical, Obstetric and Anaesthesia Plans, and the district voice needs to be heard throughout National Surgical, Obstetric, and Anesthesia Plan (NSOAP) formulation and implementation. Peck and Hanna describe a bottom-up approach in Latin America to incorporating a community perspective on data collection and national surgical planning.
In Africa, NSOAP development has often been "based on empirical observation and in-country experience by identified stakeholders." 3 However, the experience drawn on is usually that of specialist surgeons, who may have limited understanding of the reality of surgery at district hospitals. In many sub-Saharan African countries, district surgery is mainly or wholly provided by non-specialists, including general medical officers and non-physician clinicians. 14 Although usually not part of surgical societies, they are capable of providing valuable input into national-level strategic planning. We endorse Katz et al's call for an "integrated surgical ecosystem to bridge the gap between the boots on the ground and policy-makers." 5 Routine data on surgical output from district hospital operating theatre registers provide an essential and often neglected empirical base for NSOAPs, which specialist surgeons are best placed to interpret.
Peck and Hanna propose five core principles to inform the developing discipline of Global Surgery System Science. 3 We endorse principles 1-3. Common surgery-specific terminology and metrics are needed to facilitate a common understanding (Principle 1). Mixed-methods and PAR approaches are needed to evaluate surgical systems (Principle 2); and scientific rigour and development of research methodologies are needed as well (Principle 3). We agree that effective transnational teams are needed (Principle 4), but place more emphasis on the role of national surgical societies, who need to be empowered and supported to translate the global surgery agenda into national strategies. We also propose qualifying Principle 5, in that the 'learner' is not necessarily a surgeon - which implies a specialist - who would need to evolve into a 'systems aware global surgeon.' In transitioning from COST- to SURG-Africa, we have learned to work with district surgical teams, which include a range of surgically active cadres (clinical officers, nurses and general doctors). Between them, if trained and supervised, they are well placed to provide a sustainable response to addressing district level surgical needs. 15 Moreover, the term 'global surgeons' suggests specialists from high-income countries who undertake locums in African countries; but they may contribute little to building sustainable systems for rural populations to access essential elective and emergency surgery. 16 National surgeons, often organised in a surgical society, are best placed to lead the national surgical system. External initiatives and global partnerships need to be aware of the consequences of engaging surgical specialists from low-resource countries in activities that take them away from the operating rooms in favour of non-surgical work. Globally driven strategic planning, project management and administration - in the form of consultancy work or positions with international agencies, which are often well paid - may aggravate some of the problems that global initiatives try to solve, contributing to the shortage of surgical specialists at national and referral hospitals. Specialist surgical knowledge and skills sensu stricto are mainly relevant in the operating room, and for training and supervision of surgical trainees.
Where appropriate, the specialists should be called upon to provide expert opinion about strategic directions and solutions proposed to address surgical systems constraints. Implementation of the global surgery agenda should not only rely on global surgeons who are 'desirous and capable of addressing the social responsibility of resolving the global surgical burden, ' 3 but on local experts who are appropriately trained, preferably in the region, and fit for the task. Others have noted that lack of engagement of research leaders in global surgery may result in lack of depth of scientific enquiry. 17 When setting up global surgery projects of any scale, experts with formal training in research methods, design and statistical skills should be involved, as a source of knowledge about appropriate evaluation methods. There is also a need to define the set of skills, train experts other than surgeons and equip them with the required know-how on how to research, plan and administer surgical programmes, thereby building local capacity so as to reduce low-and middle-income country dependence on external technical support. Countries require trained health planners, human resource managers, financial specialists, and staff trained in public administration; as well as national and regional surgical specialist, generalist and non-physician surgical training programmes. 18 We thank the International Journal of Health Policy and Management for facilitating a debate on next steps for building capacity and consensus on how to design and evaluate national strategies for scaling up surgery, not only in sub-Saharan Africa but across low-and middle-income countries. 6
Ethical issues
Not applicable. | 2,641.8 | 2019-02-26T00:00:00.000 | [
"Philosophy"
] |
Economic aspects of fishing of dolphinfish in Sicily *
INTRODUCTION
Dolphinfish fishing with kannizzi and purse-seine nets is a traditional activity performed by some Sicilian fisheries. The fishing season starts in July with the placing of Fish Aggregating Devices (FADs) typical of the Mediterranean area, called kannizzi, and finishes in December (Galea, 1961). The fleet dedicated to this type of fishing is generally made up of about 230 small to medium boats that in the remaining time of the year practise different types of fishing, such as swordfish and albacore fishing with long-line and/or gill-nets and trammel-nets (Morales-Nin et al., 1996). In the last few decades, the appeal of high earnings has led an increasing number of fishermen to turn to this resource (Bono et al., 1997). Earnings are constantly high, even if the abundance of dolphinfish, as for all pelagic species, is subject to considerable annual fluctuations, because less abundant years cause prices to rise.
MATERIALS AND METHODS
In the 1996 dolphinfish fishing season with FADs and purse seine, all the boats dedicated to this activity in Sicily and in the Pelagic Archipelago were counted in a census. In 5 sample ports (Lampedusa, Linosa, Trapani, Sant'Agata di Militello and Siracusa), catches and the fishing effort were taken through interviews during landing, based on samples in time and space or based on a census. In addition, average prices per week paid to fishermen were taken through direct observation, and boat running costs were determined by interviewing shipowners. In 4 of the 5 sample ports (Linosa, Lampedusa, Siracusa and Sant'Agata di Militello) the surveys were carried out through a census in space and time; in practice, this involved interviewing all the fishermen every fishing day. However, in Trapani the surveys were carried out through sampling in space and time, that is, most of the fishermen were interviewed 3 times a week (Mondays, Wednesdays and Fridays).
Costs were divided into four categories:
1. Running costs: fuel, oil, ice, spare parts and repairs;
2. Cost of fishing gear, concerning the whole cost of the FADs;
3. Cost of capital, including depreciation and opportunity cost;
4. Labour cost, estimated according to the opportunity cost principle.
Repairs included both ordinary and special maintenance. The term capital means the value of the boat, inclusive of gear and facilities. The opportunity cost of capital was calculated by assuming that the capital stock value of the fleet would be invested in treasury bonds only during the months dedicated to the dolphinfish fishing (1 September 1996-31 December 1996).
This principle was also used to estimate the labour cost. The Italian system adopted by the fisheries to pay crew members is said to be on a lay. With this system the net revenue (gross revenue less some running expenses) represents the whole revenue, which is divided fifty-fifty between the shipowner and crew members. It is clear that this system makes it difficult to check salaries. So we assumed that fishermen could alternatively be employed in the agriculture sector and we estimated the daily salary of a fisherman by the daily salary of a farm-hand. On the basis of that principle we believe that the average monthly salary is underestimated in comparison with the one actually received through the system of being on a lay.
The revenues were estimated by multiplying weekly catches and prices. The profit was estimated by the difference between revenues and costs. Prices, costs, revenues and profits were estimated in ECUs (1 ECU ≅ 1.1 $).
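To make the accounting above concrete, the following minimal sketch (in Python) strings the steps together: capital valued at its opportunity cost over the four-month season, labour valued at a farm-hand's daily wage, revenue as the sum of weekly catch times price, and profit as revenue minus total costs. All boat-level figures in the example are hypothetical placeholders, not survey data; only the 5.86% Treasury Bond rate and the average of 48 kannizzi at 36.3 ECUs per boat, reported further below, are reused.

```python
# Minimal sketch of the cost/revenue/profit accounting described above.
# All boat-level figures are hypothetical placeholders, not survey data.

def opportunity_cost_of_capital(capital_value_ecu, annual_rate=0.0586, months=4):
    """Capital assumed invested in Treasury Bonds only during the
    four-month dolphinfish season (Sept-Dec 1996), at the quoted net rate."""
    return capital_value_ecu * annual_rate * months / 12

def labour_cost(crew_size, fishing_days, farmhand_daily_wage_ecu):
    """Labour valued at its opportunity cost: the daily wage of a farm-hand."""
    return crew_size * fishing_days * farmhand_daily_wage_ecu

def season_result(weekly_catches_kg, weekly_prices_ecu, running, gear, capital, labour):
    """Revenue = sum of weekly catch x price; profit = revenue - total costs."""
    revenue = sum(c * p for c, p in zip(weekly_catches_kg, weekly_prices_ecu))
    total_costs = running + gear + capital + labour
    return revenue, total_costs, revenue - total_costs

# Hypothetical single boat over a 16-week season
catches = [120, 300, 450, 500, 420, 380, 350, 300, 280, 250, 220, 200, 180, 150, 120, 100]
prices = [4.2] * 4 + [3.8] * 8 + [3.5] * 4
gear = 48 * 36.3                               # 48 kannizzi at 36.3 ECUs each
capital = opportunity_cost_of_capital(25_000)  # assumed boat + gear value
labour = labour_cost(crew_size=3, fishing_days=80, farmhand_daily_wage_ecu=30)
revenue, costs, profit = season_result(catches, prices, 3_000, gear, capital, labour)
print(f"revenue={revenue:.0f} ECU  costs={costs:.0f} ECU  profit={profit:.0f} ECU")
```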
RESULTS AND DISCUSSION
Table 1 illustrates weekly and total catches in the sample ports during the 1996 fishing season. Except for Siracusa, the fishing season starts in the first week of September. In Trapani and Lampedusa the fishing season finished in the second and the third week of November respectively, whereas in the other ports the season finished in the last but one week of December. The Trapani and Lampedusa fishing seasons are shorter than those of the other areas. This is because, around the fishing area of Trapani, dolphinfish arrive in a quantity sufficient to justify the fishing activity later than elsewhere, and by the middle of November they have already left the area (Cannizzaro et al., 1997). In Lampedusa, and initially in Linosa too, even though the resource is always very abundant, the fishermen usually wait for the dolphinfish to reach a more marketable size. The most abundant catches per weight were reported in October; around this period, as a matter of fact, the ratio between the number of specimens caught and somatic weight was the best. Table 2 illustrates the number of boats, fishing weeks, average catches per week and per boat, and the relative standard deviation in the 1996 fishing season. The most abundant catch per week and per boat was registered in Linosa, but taking into consideration the high variability it seems that in all the sample ports the catch was more or less of the same amount.
In the Pelagic Islands of Linosa and Lampedusa, the prices of dolphinfish (see Table 3) are generally lower than in the most important ports of Sicily, for two reasons: lack of an internal market and transport costs. They are fixed through negotiations between fishermen and wholesalers. Linosa and Lampedusa fishermen sell the majority of the catch to a local warehouse with which they have a privileged relation, which in turn transports the fish by ferry to Sicily, where it is distributed by Porto Empedocle, Catania and Palermo wholesalers. A small quantity of catch is usually sold locally. In 1996 in Lampedusa, wholesale prices were steady during the fishing season, because fishermen fixed them at the beginning of the season in agreement with the wholesalers. In Linosa, the wholesale price was 2.1 ECUs until 10 September and 1.5 ECUs for the remaining part of the season. By the middle of November, it is no longer convenient for dealers to pay for dolphinfish at 2.1 ECUs per kg, as at the beginning of the season. Therefore, the fishermen from Lampedusa were forced to stop fishing. In 1996, the average wholesale price for dolphinfish varied in the most important ports of Sicily from a minimum of 3.0 ECUs to a maximum of 6.2 ECUs in Siracusa. In Siracusa and in Sant'Agata the wholesale prices are higher than in the other analyzed ports, because in eastern Sicily the dolphinfish demand is much higher. In Siracusa, the average wholesale price was about 5.2 ECUs, with a maximum of 6.25 ECUs and a minimum of 4.15 ECUs. In Sant'Agata and Trapani the average wholesale prices were respectively about 4.18 ECUs and 3.55 ECUs.
The highest prices were recorded at the beginning of the fishing season because of strong demand, and fell subsequently. Generally, the interaction between demand and supply gives rise to only small price fluctuations.
From the analysis of the total costs (see Table 4) met by the fleets dedicated to dolphinfish fishing it is deduced that labour costs have a greater impact than any other costs do. On average they make up 33%. The smallest impact is recorded in Trapani, whose fleet is made up of small boats that require 2 or 3 hands at most. The greatest impact (34%) is recorded in Linosa, Sant'Agata and Siracusa, although in Linosa the large impact of labour cost is not connected to the structure of the fleet, which is also characterised by small size boats, but rather to the small impact of other costs. Running costs (Siracusa, Lampedusa, and Trapani) or the cost of the fishing gear (Sant'Agata, Linosa) have less impact than labour costs.
As far as running costs (see Table 5) are concerned, fuel consumption is the common feature for all the fleets under consideration, being the highest share. It makes up 72% of running costs on average, varying from a minimum of 60% in Linosa to a maximum of 81% in Siracusa, depending on the number of hours spent at sea and the structural features of the fleet.
In Linosa and Lampedusa, running costs were lower than in the other ports because of the small size of the boats, the closeness of fishing areas and the higher repair costs.
Maintenance charges have little impact on running costs. They vary between a minimum of 14% in Siracusa and a maximum of 36% in Linosa.
The cost of kannizzi makes up an average of 23% of the total cost. It is not a fixed cost because it is a disposable tool, which means that it is completely exploited in only one fishing season. The total cost depends on the number of boats and on the number of kannizzi at sea for the entire fleet dedicated to dolphinfish fishing. In 1996, each boat lowered on average 48 kannizzi with an average cost of 36.3 ECUs per kannizzi. It was the Sant'Agata fleet that lowered the greatest number of FADs, about 65 per boat on average.
The capital cost (see Table 6), which made up an average of 20% of the total costs, was subdivided into depreciation and opportunity costs. Since all the boats employed in dolphinfish fishing are older than 10 years, they are completely depreciated and so the depreciation is referred to gear purchased during previous years. The average net rate of the Treasury Bonds in the period 1 September 1996 - 31 December 1996 was 5.86% and the opportunity cost of the capital was calculated by assuming that the capital stock value of the fleet would be invested in State Bonds (Treasury Bonds) in the aforesaid period and rate. The opportunity cost varies from a minimum of 1,214 ECUs in Lampedusa (where only 2 boats prac-

In 1996, on the islands of Linosa and Lampedusa, the good results obtained in the previous season convinced the owners of the boats practising dolphinfish fishing to purchase new gear and to lower a greater number of kannizzi. Two new boats were induced to practice this kind of fishing, attracted by the prospect of gaining high incomes. However, the gross incomes gained in that season were not as high as expected. As a matter of fact, in Lampedusa the total income was 9,797 ECUs, whereas the highest income was reported in Linosa with 56,458 ECUs. In both islands, since the price was more or less constant (only in Linosa was there a fluctuation from 2.07 ECUs to 1.55 ECUs from the first to the second half of October) the state of the incomes depended on the catches.
In Linosa, the highest incomes were gained in the second half of November and were equal to 12,227 ECUs, whereas in Lampedusa they were 3,237 ECUs equal to 26%.
In Trapani, Sant'Agata and Siracusa the total incomes were higher: 69,970, 222,382 and 463,196 ECUs respectively. In these ports the price was not constant, although the interaction of supply and demand caused only small price fluctuations, so the incomes were mostly influenced by the trend of the catches. From the analysis of the incomes over time it appears that in three ports (Linosa, Sant'Agata and Trapani) the highest income was gained in October, as a result of the greater quantity of fish caught than in other months.
The yield is equal to the difference between the total income and the total costs. Among the 5 ports under consideration it turns out that the yield, as a percentage of the total income, varies from a minimum of 10% in Lampedusa to a maximum of 46% in Linosa. In 1996, in Lampedusa the fishing of dolphinfish was not very profitable as a result of the fall in catches in comparison with the previous season. In the other ports this activity shows a good yield, although it is to be considered inclusive of tax. The good results obtained in Linosa are due to both the high catches (33% more than those obtained by an equal number of boats in Trapani) and the impact of the total costs on total incomes, which shows the lowest value (54%) of all the ports. Whereas in Linosa and in Lampedusa the fishing of dolphinfish is a relatively new business, it is an ancient tradition in the other ports under consideration. In particular, in Sant'Agata and Siracusa, this activity has a significant economic importance. As a matter of fact, prices are usually higher there and the boats employed in the fishing of the dolphinfish are more numerous and bigger. These two ports alone account for about 70% of the total catches. However, in Siracusa, even though catches were abundant, the high impact of the total costs on the total income (70%) due to the fleet structure eroded the yield, which was equal to 30%. In Sant'Agata and Trapani, the yield was 43 and 39%, respectively.
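As an illustrative cross-check of the yield rates just quoted, the short calculation below (Python) uses only totals reported in this paper: the Sant'Agata yield of 94,973 ECUs given with Table 4 against a total income of 222,382 ECUs, and Siracusa's costs absorbing about 70% of a 463,196 ECU income.

```python
# Cross-check of the yield rates quoted in the text from the reported totals.
def yield_and_rate(total_income_ecu, total_costs_ecu):
    """Yield = total income - total costs; rate expressed as a share of income."""
    y = total_income_ecu - total_costs_ecu
    return y, y / total_income_ecu

# Sant'Agata: income 222,382 ECUs; yield of 94,973 ECUs reported with Table 4
print(f"Sant'Agata yield rate: {94_973 / 222_382:.0%}")    # ~43%, as stated

# Siracusa: costs reported to absorb ~70% of a 463,196 ECU total income
y, rate = yield_and_rate(463_196, 0.70 * 463_196)
print(f"Siracusa yield: {y:,.0f} ECUs ({rate:.0%})")       # ~30%, as stated
```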
To conclude, except in Lampedusa where the fishing season did not give interesting results, in Sicily dolphinfish can be considered a species that gives good profits, ensuring one of the highest profit rates (from 30 to 46%) in the fishery market from September to December. In the EC, dolphinfish is marketed only fresh, in Sicily and in the Balearic Islands (Morales-Nin et al., 1996). A price rise caused by an increase in demand, which might happen if the market for fresh and preserved dolphinfish expands in other European countries, could persuade other fishermen to catch dolphinfish instead of other overexploited species, such as swordfish, between August and December (Cannizzaro et al., 1996; 1997).
TABLE 1. - Catches in kg per week in the sample ports during the 1996 fishing season.
TABLE 2. - Number of boats, fishing weeks and average catch per boat and per week in the sample ports during the 1996 fishing season.
TABLE 4. - Total costs.

If we compare the value of the opportunity cost with the value of the yield, it is clear that investing in dolphinfish fishing is more profitable than investing in State Bonds. For example, in Sant'Agata the yield is equal to 94,973 ECUs, whereas the opportunity cost is equal to 11,224 ECUs. | 3,381 | 1999-12-30T00:00:00.000 | [
"Economics",
"Environmental Science"
] |
Unveiling the Differential Antioxidant Activity of Maslinic Acid in Murine Melanoma Cells and in Rat Embryonic Healthy Cells Following Treatment with Hydrogen Peroxide
Maslinic acid (MA) is a natural triterpene from Olea europaea L. with multiple biological properties. The aim of the present study was to examine MA’s effect on cell viability (by the MTT assay), reactive oxygen species (ROS levels, by flow cytometry) and key antioxidant enzyme activities (by spectrophotometry) in murine skin melanoma (B16F10) cells compared to those on healthy cells (A10). MA induced cytotoxic effects in cancer cells (IC50 42 µM), whereas no effect was found in A10 cells treated with MA (up to 210 µM). In order to produce a stress situation in cells, 0.15 mM H2O2 was added. Under stressful conditions, MA protected both cell lines against oxidative damage, decreasing intracellular ROS, which were higher in B16F10 than in A10 cells. The treatment with H2O2 and without MA produced different responses in antioxidant enzyme activities depending on the cell line. In A10 cells, all the enzymes were up-regulated, but in B16F10 cells, only superoxide dismutase, glutathione S-transferase and glutathione peroxidase increased their activities. MA restored the enzyme activities to levels similar to those in the control group in both cell lines, highlighting that in A10 cells, the highest MA doses induced values lower than control. Overall, these findings demonstrate the great antioxidant capacity of MA.
Introduction
An imbalance between pro-oxidant and antioxidant molecules can lead to an oxidative stress situation that modifies normal cell physiology due to protein, lipid, carbohydrate and nucleic acid damage [1]. Many cellular processes depend on the variations in the levels of ROS and NADPH that take place during their development and that, fundamentally, are determined by the activity of the different production systems for this reduced coenzyme, especially those belonging to the pentose phosphate pathway (PPP), glucose-6-phosphate dehydrogenase (G6PDH), 6-phosphogluconate dehydrogenase (6PGDH) and, also, NADP-dependent isocitrate dehydrogenase (ICDH-NADP). The cellular redox state is key to interpreting the behavior of most of these key cellular processes for the vital development of organisms, such as cell differentiation [2][3][4], cellular growth [5][6][7][8], cell nutrition [6,[9][10][11] and cell aging [12]. Reactive oxygen species (ROS) and reactive nitrogen species (RNS) are the most important endogenous pro-oxidants produced by normal cellular metabolism [13]. Under a normal physiological situation, an antioxidant defense system neutralizes ROS. This antioxidant system involves enzymes such as catalase (CAT), which reduces hydrogen peroxide; superoxide dismutase (SOD), which detoxifies the superoxide radical; glutathione peroxidase (GPX), which reduces hydrogen peroxide and other organic peroxides; S-transferase glutathione (GST), which detoxifies harmful molecules; glutathione reductase (GR), which regenerates glutathione (GSH) from its oxidized form (GSSG) by an NADPH-dependent pathway; and G6PDH, which produces NADPH for GR's mechanism. Beside enzymes, other molecules can also act as antioxidants, such as NADPH, GSH and different vitamins, among other molecules [14][15][16]. Despite this antioxidant system, an excessive production of ROS can lead to an oxidative stress situation inducing several types of damage to different biomolecules [17]. In cancer processes, an excessive production of ROS has been related to genomic instability due to the induction of DNA damage and to the alterations in signaling pathways involved in survival, proliferation and apoptosis resistance. Moreover, it has been demonstrated that ROS induce vascular endothelial growth factor (VEGF) expression, producing neovascularization and fast expansion of the cancer, modifying angiogenesis and metastasis, which induce the malignancy of cancer cells [17]. The use of natural compounds with antioxidant capacity that reduce ROS levels could decrease metastatic progression [15,17]. Furthermore, the immune defense system can be altered by ROS, since high levels of these molecules produced by NADPH oxidase can inhibit monocytes and macrophages and weaken the response of T-cells due to the down-regulation of several cytokines [17].
Hydrogen peroxide (H 2 O 2 ) is originated from O 2 by SOD. It is not a free radical as such, but it is a reactive molecule, since it has the capacity to generate the hydroxyl radical in the presence of metals such as iron [1]. H 2 O 2 is an important metabolite that arises mainly during aerobic metabolism, although it can derive from other sources [18]. H 2 O 2 is a messenger molecule that diffuses through cells and tissues, inducing different effects that include deformations of the shapes of cells, the initiation of proliferation and the recruitment of immune cells. If not controlled, an excess of H 2 O 2 can cause uncontrolled oxidative stress in cells, producing irreparable damage [19]. Therefore, cell death and survival processes can be modulated by controlling intracellular H 2 O 2 levels by the use of antioxidants. Natural products are gaining interest due to their cost-effectiveness, few side effects and high availability. As such, products such as maslinic acid (MA) have received heightened interest.
MA, C 30 H 48 O 4 (2α,3β-dihydroxyolean-12-en-28-oic acid) is a natural pentacyclic triterpene, also known as crataegolic acid and derived from its structural analogue, oleanolic acid ( Figure 1). It is formed by 30 carbon atoms grouped into five cycles that, as substituents, have two methyl groups each on the C-4 and C-21 carbons; single methyl groups on the C-8, C-10 and C-15 carbons; two hydroxyl groups each on the C-2 and C-3 carbons; one carboxyl group on the C-28 carbon; and a double bond on the C-12 and C-13 [20][21][22][23].
Maslinic acid is present in various plants, many of them common in the Mediterranean diet, such as eggplants, spinach, lentils, chickpeas, and even different aromatic herbs [24]. Moreover, it is especially abundant in Crataegus oxyacantha, in the surface wax of the fruits and leaves of Olea europaea L. and in the solid waste from olive oil extraction [25]. Furthermore, MA is among the main triterpenes present in olives and olive oil. Its concentration in the oil depends on the type of olive oil and the variety of olive tree [23].

MA enjoys high pharmacological interest, owing to its anti-tumor effect in certain types of cancer besides its anti-inflammatory, antioxidant, anti-diabetogenic, anti-hypertensive, anti-viral and cardioprotective effects, among others [1,23]. Among the bioactivities attributed to MA, its antioxidant effect is the most contradictory. It has been observed that the oxidative status induced by CCl4 was decreased by MA treatment, which reduced lipid peroxides in the plasma and the susceptibility of lipids to peroxidation [26]. In the same way, the LDL oxidation in the plasma of rabbits produced by CuSO4 was decreased by MA obtained from Punica granatum [27]. In human plasma, MA showed peroxyl-radical scavenging and chelating capacities for copper but did not show a positive effect for the prevention of LDL oxidation [28]. Research in macrophages revealed that MA can act in a similar way to catalase, decreasing the generation of H2O2, but does not produce any direct inhibitory effects on NO and superoxide formation [29]. In a previous study, we concluded that MA produced an increase in the ROS level under stress conditions caused by the absence of FBS in melanoma cells [15]. Furthermore, in this same study, MA had an antioxidant effect at lower assayed levels; however, at higher dosages, MA induced cellular damage by apoptosis.

The aim of the present study was to evaluate MA's antioxidant effects in murine skin melanoma (B16F10) cells and in a healthy cell line derived from the thoracic aorta of an embryonic rat (A10), by analyzing the proliferation, ROS production and the activity of the main antioxidant enzymes under cellular stress conditions induced by H2O2.
MA Decreases Proliferation of B16F10 Cells by a Dose-Dependent Mechanism
We evaluated MA's effect on B16F10 melanoma and A10 cell line proliferation using the MTT assay (Figure 2A,B). B16F10 and A10 cells were exposed to different doses of MA (0 to 212 µM) for 24 h. Then, cell survival was compared with that of untreated control cells. The percentage of living cells decreased as the dose of MA increased in cancer cells (B16F10), while in healthy cells (A10), MA did not produce any cytotoxic effect, even at the highest doses used (210 µM). In B16F10 cells, the growth inhibition values (IC50) in response to MA were 42 µM after 24 h of addition of this compound.
H 2 O 2 Modifies Cell Viability
We examined H 2 O 2 's effects (0 to 3 mM of H 2 O 2 for 24 h) on the cell viability of B16F10 and A10 cells to determine the optimal dose capable of producing stress without inducing cell death ( Figure 2C,D). Cell survival was compared with that of untreated controls. The percentage of living cells decreased as the dose of H 2 O 2 increased in both cell lines up to the concentration of 1.5 mM, beyoond which H 2 O 2 had no cell viability effect. In both cell lines, we noticed an IC 50 of 0.2 mM. In no case, with the doses used and the incubation time employed, was any mortality higher than 80% reached. Based on these results, the concentration of 0.15 mM of H 2 O 2 was used to perform the rest of the studies.
Maslinic Acid's Influence on Mitochondrial-Membrane Potential
An assay of ROS production was performed to test the ROS levels that occurred over time in the presence of 0.150 mM H2O2 and different MA levels (IC50/4, IC50/2, IC50 and 2·IC50). The results are shown in Figure 3. After 24 h of H2O2 treatment, ROS levels increased significantly in both cell lines, compared to those in the controls. The increment observed upon H2O2 addition was offset by MA supplementation, which resulted in decreased intracellular ROS levels at any MA level used (IC50/4, IC50/2, IC50 and 2·IC50) in B16F10 and A10 cells. Moreover, in A10, ROS levels decreased below the value of control cells for all concentrations of MA tested.
MA Exerts Antioxidant Activity, Modulating Enzymatic Defense System
The results obtained for SOD in B16F10 cells showed that H 2 O 2 increased its activity and MA (at any level) decreased it, but without significant differences induced by different MA concentrations. In A10, H 2 O 2 increased the SOD activity, an increment that was mitigated with MA addition. Moreover, 42.3 and 84.6 µM MA (IC 50 and 2·IC 50 , respectively) resulted in SOD activity levels lower than those found in the absence of H 2 O 2 and MA ( Figure 4A,B).
In B16F10 cells, GST activity increased in the presence of H2O2, whereas the incubation with the IC50/4 and IC50/2 of MA decreased the levels, making them equal to those in the control without H2O2 and MA. The use of the IC50 and 2·IC50 MA concentrations induced GST activities lower than those in the control. Similar results were found in A10 for GST activity, with the exception of MA at IC50/2, which also induced lower levels than control ( Figure 4C,D). In B16F10 cells, CAT activity decreased in the presence of H2O2 compared to that in the control, whereas MA increased the activity, inducing values higher than or similar to control when IC50/4 or IC50/2 were used. In A10 cells, H2O2 produced an increase in CAT activity, and its activity was not recovered until the highest doses of MA were used (IC50 and 2·IC50) ( Figure 5A,B).
Regarding G6PDH, the results indicated that its activity decreased in the presence of H2O2 in B16F10, while MA at all the concentrations tested restored values similar to those of the control without H2O2 and MA. In A10 cells, no changes were induced by H2O2 in G6PDH, whereas MA decreased its activity at any tested level (Figure 5C,D).
GPX increased in the presence of H 2 O 2 but no change was observed when MA was administrated in B16F10. In A10 cells, all concentrations of MA decreased the GPX activity increased by the H 2 O 2 , even below that in the control without H 2 O 2 and MA ( Figure 6A,B).
In B16F10, GR activity was decreased by H2O2 and no changes were observed at any MA level, with lower levels of activity maintained compared to the control. In A10, H2O2 increased GR activity, an increment that was mitigated with MA addition. Moreover, 42.3 and 84.6 µM MA induced values of GR activity lower than those found in the absence of H2O2 and MA (Figure 6C,D).
Discussion
Maslinic acid (MA) is a pentacyclic triterpene abundant in the surface wax of fruits and leaves of Olea europaea L. with many demonstrated biological activities [23,30,31]. For this reason, MA is appreciated as a chemopreventive agent in different diseases such as cancer [23,28], cardiovascular and neurodegenerative pathologies, etc. [26].
In different cell lines, MA's cytotoxic effect has been studied, including in both cancer and healthy cells. In this sense, it has been shown that this triterpene affects cells in different ways depending on the kind of cells and the conditions of the experiment. In the present study, MA showed cytotoxic effects only in B16F10 cells, and this effect was important, as the IC 50 of MA was 42 µM. In another study performed in melanoma cells by Kim et al. [32], the authors observed a lower MA cytotoxicity effect. Thus, they concluded that the IC 50 for MA in SK-MEL-3 was also 42 µM, but in their study, the incubation period was 48 h versus the 24 h used in the present study. Other results obtained in our research group showed that in colon cancer cells, the IC 50 value for MA was 30 µM in HT29 cells after 72 h of incubation [31,33,34], showing a higher cytotoxic effect in Caco-2 cells, in which the IC 50 of MA was 10.82 µM, also after 72 h [35]. Other authors observed IC 50 values for MA ranging between 32 and 64 µM in different cancer cells such as lung (A549), ovary (SK-OV-3), colon (HCT15) and glioma (XF498) cells after 48 h of incubation [32]. In several bladder cancer cell lines incubated with MA for 48 h, other authors found IC 50 values between 20 and 300 µM [36]. Notwithstanding this, compared to in melanoma cells, a non-cytotoxic effect was found in A10 healthy cells in the present study. These results are relevant since they demonstrate the selective cytotoxic effect of this compound. Studies focused on the selective effect of MA are scarce. Reyes et al. [22] observed that epithelial intestinal cells incubated with 30 µM MA for 72 h (IC 50 value for HT29) exhibited a survival rate of 78% for IEC-6 cells and 68% for IEC-18.
An intracellular redox balance is crucial to ensure viability, growth and the diversity of cell functions [15]. An excess of ROS is related to pathological processes producing oxidative damage when the antioxidant defense system is not able to counteract it. Hence, there is a growing interest of the scientific community and industry in obtaining substances of natural origin aimed at preventing these pathologies and alterations caused by ROS. In this context, it is important to characterize the protective biochemical functions of natural antioxidants and to study their intracellular pathway signaling. A large number of plant constituents, such as MA, have antioxidant properties [37].
The present study examined the antioxidant effects of MA in skin melanoma cancer cells (B16F10) and thoracic aorta of embryonic rat cells (A10). The major findings were that MA improves the oxidative stress caused by a H 2 O 2 excess, decreasing the intracellular ROS levels in both cell lines. H 2 O 2 was used in this study because it is known that when present in excess, it is one of the major compounds that can damage cells [38]. Other authors have clearly shown the effects of H 2 O 2 on the viability and ROS production of different cell types that were subsequently treated with other antioxidant natural compounds that reversed the oxidative damage caused by the H 2 O 2 [39,40]. Furthermore, studies similar to ours with other ROS-producing molecules (i.e., CCl 4 ) observed that MA counteracted the lipid peroxidation generated in the nucleus [26]. Similarly, MA prevented the CuSO 4 − -induced oxidation of rabbit plasma LDL [41]. Moreover, Yang et al. [42] evaluated the antioxidant effects of MA derivatives that showed radical-scavenging activity and inhibition of NO production in RAW 264.7 cells.
Regarding MA's effect on antioxidant enzymes in the B16F10 cell line, as expected, SOD, GST and GPX activities were increased in response to the H 2 O 2 addition. However, H 2 O 2 decreased CAT, G6PDH and GR activities. A H 2 O 2 excess is responsible for cellular damage that includes an imbalance in membranes and biomolecule alterations. This fact supposes an extra energetic cost for the maintenance of cellular homeostasis in order to repair the damage produced. Both cellular damage and energy requirements could affect G6PDH activity, by either the impaired glucose transporters or enhanced glycolysis pathway, decreasing the glucose available for the pentose phosphate pathway. The regulation of NADPH levels is essential for understanding the behavior of numerous physiological processes, being especially important for growth and cell differentiation [2] but also for antioxidant processes, among others. NAPDH is mainly generated by the G6PDH enzyme and is required for the GPX and CAT activity involved in the H 2 O 2 removal pathway. Thus, CAT is protected from inactivation by NADPH. Moreover, this reduction equivalent is used by GR to regenerate the oxidized glutathione to its reduced form required for GPX activity [43,44]. The lower G6PDH activity observed when H 2 O 2 was added could result in a decrease in available NADPH levels, which might justify the reduction in CAT and GR activities. Similar results have been observed in previous studies in B16F10 cells subjected to stress conditions induced by FBS absence [15]. Mokhtari et al. [15] reported low activity levels for CAT that were produced by low NADPH levels due to an imbalance in cellular homeostasis.
It has been established that MA is a compound with antioxidant capacity [15,26,28]. When MA was added to the B16F10 cells, the changes induced by H 2 O 2 excess in SOD and GST activities were counteracted. In this sense, the scavenger MA's effects reduced ROS levels and, subsequently, the need for the action of these antioxidant defenses. Moreover, the observed decrease in ROS levels due to MA would result in less cellular damage, which would reverse the possible mechanisms responsible for the decrease in G6PDH when H 2 O 2 is added, raising its activity up to the control values. This recovered G6PDH activity would produce the NADPH level required to prevent and reverse the down-regulation of CAT [43]. Finally, a dose-response effect was observed for both GPX and CAT, highlighting the antioxidant behavior of MA in these cells.
The results found in this work, in A10 healthy cells, revealed that all the enzyme activities increased in the stressful conditions originated by the H 2 O 2 addition, except for G6PDH, whose activity was slightly higher. MA, in these cells, restored the antioxidant enzyme activities, inducing values similar to those in the control group in cells treated with the lowest MA levels (IC 50/4 ). Furthermore, when cells were treated with the two highest doses of MA (IC 50 and 2·IC 50 ), the activity of all antioxidant enzymes, with the exception of CAT, decreased below control levels. This fact confirms the relevant antioxidant effect of MA [15,28,43].
Compounds
Maslinic acid was obtained from olive pomace and kindly donated by Biomaslinic S.A., Granada, Spain. The extract is a chemically pure white powder composed of 98% maslinic acid that is stable when stored at 4 °C. Before use it was dissolved at 10 mg/mL in 50% dimethyl sulfoxide (DMSO) and 50% phosphate buffer solution (PBS). This solution was diluted in cell culture medium for assay purposes.
Cell Lines and Cultures
The mouse melanoma cell line B16F10 is a variant of the murine melanoma cell line B16. These cells show a higher metastatic potential than B16 cells [45]. A10 is a cell line derived from the thoracic aorta of an embryonic rat, and it is used as a study model of smooth muscle cells (SMC). This cell type shows a great proliferative capacity and may be subcultured several times, allowing the rapid attainment of cell mass [46]. Both cell lines were provided by the cell bank of the University of Granada (Spain). The cell lines were cultured in Dulbecco's modified Eagle's medium (DMEM) containing glucose (4.5 g/L) and L-glutamine (2 mM) from the commercial PAA brand, 10% heat-inactivated fetal bovine serum (FBS), and 0.5% gentamicin for B16F10 cells or 10,000 units/mL penicillin and 10 mg/mL streptomycin for A10 cells. The cell lines were maintained in a humidified atmosphere with 5% CO 2 at 37 °C. The cells were passaged at preconfluent densities using a solution containing 0.05% trypsin and 0.5 mM EDTA. The cells were seeded in the culture dishes at the desired density. After 24 h, when the cells had attached to the dish, they were incubated with 0.15 mM hydrogen peroxide (H 2 O 2 ) in order to produce a stress situation. Following that, the cells were incubated with several concentrations of MA as indicated below.
MTT Assay
The MTT assay was performed as described by Mokhtari et al. [15]. Briefly, a 200 µL cell suspension of B16F10 or A10 (1.5 × 10 3 cells/well) was cultured in 96-well plates. After the cells had adhered, different MA dilutions ranging from 10 to 210 µM were added separately. The incubation time was 24 h in all cases. MTT was dissolved in the medium and added to the wells at a final concentration of 0.5 mg/mL. Following 2 h of incubation, the generated formazan was dissolved in DMSO. Absorbance was measured at 570 nm in a multiplate reader (Bio-Tek ® ). The absorbance was proportional to the number of viable cells. The MA concentration leading to 50% inhibition of cell proliferation (IC 50 ) was determined. The results are expressed as the percentage of live cells compared with the control, which was taken as 100% cell viability. Cell viability in B16F10 and A10 cells was also studied in the presence of H 2 O 2 by the MTT assay. H 2 O 2 was dissolved in the culture medium of the cell lines. The solutions were prepared extemporaneously and protected from light before use to prevent degradation of the compounds. After 24 h of incubation, the medium was removed and fresh medium was added with different concentrations of H 2 O 2 per well (0, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.1, 1.5, 2.0, 2.6 and 3 mM) to a final volume of 200 µL for 24 h.
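As an illustration of this readout, IC 50 values can be estimated by fitting a dose-response curve to the viability data. The short sketch below is only a minimal example: the concentration grid matches the 10-210 µM range mentioned above, but the viability values, the four-parameter logistic form and the initial guesses are illustrative assumptions, not data or code from this study.

```python
# Sketch: estimating IC50 from MTT viability data with a four-parameter
# logistic (Hill) fit. The concentrations and viability values below are
# hypothetical placeholders, not data from this study.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([10, 30, 60, 90, 120, 150, 180, 210], dtype=float)   # microM MA
viability = np.array([98, 90, 75, 60, 48, 38, 30, 25], dtype=float)  # % of control

def four_pl(c, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

p0 = [100.0, 0.0, 80.0, 1.0]  # initial guesses: top, bottom, IC50, Hill slope
params, _ = curve_fit(four_pl, conc, viability, p0=p0, maxfev=10000)
top, bottom, ic50, hill = params
print(f"Estimated IC50 = {ic50:.1f} microM (Hill slope {hill:.2f})")
```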
Flow-Cytometry Analysis of the Mitochondrial-Membrane Potential
Changes in the mitochondrial-membrane potential can be examined by monitoring cell fluorescence after double staining with rhodamine 123 (Rh123) and propidium iodide (PI). Rh123 is a membrane-permeable fluorescent cationic dye that is selectively taken up by mitochondria in direct proportion to the mitochondrial-membrane potential (MMP) [47]. B16F10 and A10 cells (4 × 10 5 cells/well) were seeded on 12-well plates with 2 mL of medium and treated with 0.15 mM H 2 O 2 for 24 h and then with MA at IC 50/4 , IC 50/2 , IC 50 and 2·IC 50 (10.6, 21.6, 42.3 and 84.6 µM, respectively) for a further 24 h. Following the treatment, the medium was removed and fresh medium with dihydrorhodamine (DHR), at a final concentration of 5 µg/mL, was added. After 30 min of incubation, the medium was removed and the cells were washed and resuspended in PBS with 5 µg/mL of PI. The intensity of fluorescence from Rh123 and PI was determined using an ACS flow cytometer (Coulter Corporation, Hialeah, FL, USA), at excitation and emission wavelengths of 500 and 536 nm, respectively. The experiments were performed three times with two replicates per assay.
Antioxidant Enzyme Assays
In order to prepare the samples for analytic procedures, cells were homogenized in RIPA buffer. Immediately, the cells were sonicated on ice for 5 min and maintained under moderate shaking at 4 °C for 1 h. Every 15 min, the samples were moderately shaken in a vortex. The lysates were spun in a centrifuge at 10,000× g at 4 °C for 15 min. All the enzyme assays were carried out at 37 °C using a Power Wave X microplate scanning spectrophotometer (Bio-Tek Instruments, Winooski, VT, USA) and run in duplicate in 96-well microplates. The optimal substrate and protein concentrations for the measurement of maximal activity for each enzyme were established by preliminary assays. The enzymatic reactions were initiated by the addition of the cell extract, except for SOD, where xanthine oxidase was used. The millimolar extinction coefficients used for H 2 O 2 , NADH/NADPH and DTNB (5,5′-dithiobis(2-nitrobenzoic acid)) were 0.039, 6.22 and 13.6, respectively. The assay conditions were as follows: Superoxide dismutase (SOD; EC 1.15.1.1) activity was measured by the ferricytochrome c method using xanthine/xanthine oxidase as the source of superoxide radicals. The reaction mixture consisted of 50 mM potassium phosphate buffer (pH 7.8), 0.1 mM EDTA, 0.1 mM xanthine, 0.013 mM cytochrome c and 0.024 IU/mL xanthine oxidase. Activity is reported in units of SOD per milligram of protein.
One unit of activity was defined as the amount of enzyme necessary to produce a 50% inhibition of the ferricytochrome c reduction rate [48].
Glucose-6-phosphate dehydrogenase (G6PDH; EC 1.1.1.49) activity was determined at pH 7.6 in a medium containing 50 mM HEPES buffer, 2 mM MgCl 2 and 0.8 mM NADP + , with glucose 6-phosphate as substrate. The enzyme activity was determined by measuring the reduction of NADP + at 340 nm, as previously described by Lupiáñez et al. [2] and Peragón et al. [50]. The change in absorbance at 340 nm was recorded and, after confirmation of no exogenous activity, the reaction was started by the addition of substrate.
Glutathione peroxidase (GPX, EC 1.11.1.9) activity was determined using the method described by Flohe and Günzler [52], based on the oxidation of NADPH, which is used to regenerate the reduced glutathione (GSH) from oxidized glutathione (GS-SG) obtained by the action of glutathione peroxidase.
Glutathione reductase (GR, EC 1.8.1.7) was determined by the modified method of Carlberg and Mannervik [53]. We measured the decrease in absorbance produced by the oxidation of NADPH used by GR in the conversion of oxidized glutathione (GSSG) to reduced glutathione (GSH).
All enzyme activities (except for SOD) are expressed as units or milliunits per milligram of soluble protein (specific activity). One unit of enzyme activity was defined as the amount of enzyme required to transform 1 µmol of substrate per min under the above assay conditions. Soluble protein concentrations were determined using the method of Bradford, with bovine serum albumin used as a standard.
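As a worked illustration of how such specific activities can be derived, the sketch below converts a rate of absorbance change into units per milligram of protein via the Beer-Lambert law. The extinction coefficient follows the NADH/NADPH value quoted above (6.22 mM⁻¹ cm⁻¹), while the path length, well and sample volumes, protein concentration and measured rate are hypothetical placeholders.

```python
# Sketch: converting a measured absorbance rate into specific enzyme activity.
# The extinction coefficient follows the value quoted in the text (mM^-1 cm^-1);
# the path length, volumes and protein concentration are hypothetical.
def specific_activity(dA_per_min, epsilon_mM, path_cm,
                      well_volume_ml, sample_volume_ml, protein_mg_per_ml):
    # Beer-Lambert: rate of concentration change in the well (mM/min = umol/mL/min)
    rate_mM_per_min = dA_per_min / (epsilon_mM * path_cm)
    # total umol of substrate transformed per minute in the well
    umol_per_min = rate_mM_per_min * well_volume_ml
    # protein (mg) contributed by the cell extract in the well
    protein_mg = protein_mg_per_ml * sample_volume_ml
    return umol_per_min / protein_mg  # U/mg = umol/min/mg protein

# Example: NADPH-linked assay (epsilon = 6.22 mM^-1 cm^-1), hypothetical numbers
act = specific_activity(dA_per_min=0.045, epsilon_mM=6.22, path_cm=0.6,
                        well_volume_ml=0.2, sample_volume_ml=0.02,
                        protein_mg_per_ml=1.5)
print(f"Specific activity: {act * 1000:.1f} mU/mg protein")
```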
Statistical Analysis
Data are shown as mean ± standard error mean (SEM). The statistical significance among different experimental groups was determined by one-way analysis of variance (ANOVA) tests. When F values (p < 0.05) were significant, means were compared using Tukey's HSD test. The SPSS version 15.0 for Windows software package was used for statistical analysis.
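A minimal sketch of this analysis pipeline is given below, using SciPy and statsmodels rather than SPSS; the group values are hypothetical and only illustrate the one-way ANOVA followed by Tukey's HSD comparison described above.

```python
# Sketch: one-way ANOVA followed by Tukey's HSD, as described in the text,
# here with SciPy/statsmodels instead of SPSS. Group values are hypothetical.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.00, 0.95, 1.05, 0.98])
h2o2    = np.array([1.60, 1.72, 1.55, 1.68])
h2o2_ma = np.array([1.10, 1.05, 1.18, 1.08])

f_stat, p_value = f_oneway(control, h2o2, h2o2_ma)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:  # run the post hoc test only when the ANOVA is significant
    values = np.concatenate([control, h2o2, h2o2_ma])
    groups = (["control"] * len(control) + ["H2O2"] * len(h2o2)
              + ["H2O2+MA"] * len(h2o2_ma))
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```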
Conclusions
In conclusion, the results obtained in the present study demonstrate that MA exerts a selective anti-proliferative effect against the B16F10 cell line, whereas in A10 healthy cells, MA did not present a cytotoxic effect. This natural compound prevents the oxidative stress caused by H 2 O 2 excess, decreasing the ROS production levels. Moreover, depending on the cell line, the antioxidant enzymatic responses were different in the presence of H 2 O 2 and without MA. Thus, in healthy A10 cells, all enzymes up-regulated their activity, but in B16F10 cells, only SOD, GST and GPX increased it. In most cases, MA treatment restored the activities of enzymes to levels similar to those in the control group, highlighting that in A10 cells, the highest doses of MA (IC 50 and 2·IC 50 ) resulted in values below control. Overall, these findings demonstrate the great antioxidant capacity of maslinic acid. | 7,904.4 | 2020-09-01T00:00:00.000 | [
"Medicine",
"Chemistry",
"Environmental Science"
] |
Dynamical Behavior of Two Interacting Double Quantum Dots in 2D Materials for Feasibility of Controlled-NOT Operation
Two interacting double quantum dots (DQDs) can be suitable candidates for operation in the applications of quantum information processing and computation. In this work, DQDs are modeled by the heterostructure of two-dimensional (2D) MoS2 having 1T-phase embedded in 2H-phase with the aim to investigate the feasibility of controlled-NOT (CNOT) gate operation with the Coulomb interaction. The Hamiltonian of the system is constructed by two models, namely the 2D electronic potential model and the 4×4 matrix model whose matrix elements are computed from the approximated two-level systems interaction. The dynamics of states are carried out by the Crank–Nicolson method in the potential model and by the fourth order Runge–Kutta method in the matrix model. Model parameters are analyzed to optimize the CNOT operation feasibility and fidelity, and investigate the behaviors of DQDs in different regimes. Results from both models are in excellent agreement, indicating that the constructed matrix model can be used to simulate dynamical behaviors of two interacting DQDs with lower computational resources. For CNOT operation, the two DQD systems with the Coulomb interaction are feasible, though optimization of engineering parameters is needed to achieve optimal fidelity.
Introduction
Realizing controllable quantum systems is of great interest and importance as their behaviors can unlock key understanding of other less controllable quantum systems, and lay the foundation for quantum technology applications. Physical systems, such as atoms, ions, spins, photons, and superconducting circuits, are used in quantum information processing, quantum simulation, and quantum computing [1][2][3]. Quantum dots (QDs) have been realized by many experimental approaches [4][5][6][7][8][9][10][11][12][13] and proven to possess advantages in terms of individual control, readout, and tunability [1] for quantum sensing, computing, and matter-light coupling in quantum communication [14,15].
The system of double quantum dots (DQD) consists of two quantum dots, each of which can constitute a two-level system known as a charge qubit, and in principle allows for the gate control desirable in quantum computing. Thus, DQD behaviors, such as the electronic structure and the optical and dynamical properties, have been extensively and intensively investigated in theory, simulation and experiments. For example, in experiments, the charge qubit in a DQD under high-speed rectangular voltage pulses was manipulated and the decoherence times were measured [4][5][6][7][8][9][10]. The electron transport in DQDs was characterized [7,16]. The controlled-NOT (CNOT) operation of two strongly coupled semiconductor charge qubits in GaAs/AlGaAs DQDs was demonstrated [11]. In simulation and theoretical studies, several methods, such as density functional theory (DFT), finite difference (FD), and tight binding (TB) methods, have been employed to investigate the behaviors of DQDs. The heterostructure of MoS 2 considered here is only exemplary, and it can be modified to accommodate other materials by changing the potential parameter in the simulation.
The remainder of this article is organized as follows. Section 2 outlines the methodology, describing the DQD structure and the Hamiltonian construction with the electronic potential in the potential model and the calculation of matrix elements in the matrix model. It also includes the qubit operations. Results and discussion are presented in Section 3, where the DQD energies and CNOT operation efficiency are reported and discussed. Finally, conclusions and final remarks are given in Section 4.
Structural Model
According to the experiment and simulation in Ref. [21], MoS 2 in the semi-conducting 2H-phase changes to the metallic 1T-phase when irradiated by an electron beam. The transformed 1T-phase has a triangular shape, whose size depends on the radius of the beam, embedded in the rectangular 2H-phase (see Figure 1). This structure is a guiding model for the simulation in this work, in which we can also investigate other parameters (e.g., QD dimensions, band offset of the heterostructure) and the resulting behaviors. In the experiment of Ref. [21], this heterostructure was achieved at room temperature. The periodic monolayers of 1T-phase and 2H-phase MoS 2 were calculated by DFT to evaluate the effective masses and potential parameters. The temperature was set to 300 K in these DFT calculations. These effective masses and band offset values were used to construct the heterostructures, where the calculated energy gap (i.e., the energy difference of electron and hole) compared favorably with that from experiments. We used these same parameter values to model DQDs in our simulation. However, in our simulation, the dynamics of states are governed by the time-dependent Schrödinger equation without decoherence and energy dissipation, which corresponds to zero-temperature behavior. As reported in an annealing experiment [50], the 1T-MoS 2 thin film changes significantly to the 2H-MoS 2 phase at temperatures higher than 498 K. Thus, at room temperature or lower, the considered heterostructure of 1T-MoS 2 and 2H-MoS 2 should be thermally stable.
For the model setting, the system of 2H-phase MoS 2 is assumed to be a rectangle of dimensions L x × L y (in nm). The DQD is constructed from QDs of triangular 1T-phase MoS 2 with base length b and height h (in nm). The QDs are placed with an inter-QD distance of d nm, symmetric about the center of the rectangular L x × L y supercell. This constitutes one DQD. In this work, two identical DQDs are placed side by side along the x-axis, separated by a distance a (in nm). We define the occupancy of an electron in the left dot and the right dot of the left DQD as the states |0⟩ l and |1⟩ l , respectively. Likewise, the states |0⟩ r and |1⟩ r define the occupancy in the right DQD, as identified by the bits 0 and 1 in Figure 1.
Electronic Potential Model
The Hamiltonians of the system of two interacting DQDs are modeled by two methods; namely, (i) the electronic potential model and (ii) the matrix model. The former is more physical, but the latter is more computationally effective. After calibrating both models we can use the matrix model for dynamical simulation. In the electronic potential model, the 1T-phase and 2H-phase MoS 2 shown by different colors in Figure 1 are represented by the electronic potential V in inside the QDs and V out outside the QDs, as described in Equation (1). For MoS 2 , the electron effective masses and potential parameters are taken from Ref. [21], in which these values were extracted from DFT calculations. The electron effective masses are m * e,2H = 0.54m e for the 2H-phase (outside QD) and m * e,1T = 0.29m e for the 1T-phase (inside QD), where m e is the mass of a free electron. The potentials of electron are V in = 0 inside the wells and V out = 0.915 eV outside the wells.
V(x, y) = V in inside the QDs; V out outside the QDs. (1)

Figure 1. Schematic diagram of the 2D system of two DQDs, with different colors representing the different electronic potentials; this is an illustration of the MoS 2 structure in Ref. [21]. Yellow rectangles denote the 2H phase, and violet triangles denote the 1T phase.
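A minimal sketch of how this piecewise potential could be assembled on a finite-difference grid is given below. The grid resolution, the placement of the triangles and the choice of an upward-pointing apex are illustrative assumptions; the QD dimensions and the potential values follow those quoted in the text.

```python
# Sketch: building the piecewise-constant potential V(x, y) of Eq. (1) on a
# finite-difference grid, with two upward-pointing triangular 1T-phase QDs
# (V_in) inside the rectangular 2H-phase region (V_out). Grid spacing and the
# triangle orientation are illustrative assumptions.
import numpy as np

Lx, Ly = 9.0, 4.5          # supercell size (nm), as in the text
b, h, d = 2.0, 1.0, 3.0    # QD base, height (h = b/2) and inter-QD distance (nm)
V_in, V_out = 0.0, 0.915   # eV, values quoted for 1T/2H MoS2

nx, ny = 181, 91
x = np.linspace(0.0, Lx, nx)
y = np.linspace(0.0, Ly, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

def in_triangle(X, Y, xc, y0, base, height):
    """Mask for an isosceles triangle with base centred at (xc, y0), apex up."""
    inside_y = (Y >= y0) & (Y <= y0 + height)
    half_width = 0.5 * base * (1.0 - (Y - y0) / height)
    return inside_y & (np.abs(X - xc) <= half_width)

y0 = 0.5 * Ly - 0.5 * h                   # vertically centre the QDs
left = in_triangle(X, Y, 0.5 * Lx - 0.5 * d, y0, b, h)
right = in_triangle(X, Y, 0.5 * Lx + 0.5 * d, y0, b, h)

V = np.where(left | right, V_in, V_out)   # Eq. (1) on the grid
print("fraction of grid points inside the QDs:", (V == V_in).mean())
```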
The Dirichlet boundary condition is applied for each DQD. The Hamiltonians of the ith (i = l, r) DQD are given in Equations (2) and (3). In these equations, the first term is the kinetic energy; the second term is the background potential energy of DQDs; the third term is the external potential from the applied electric field pulse; and the last term is the Coulomb interaction of an electron with another DQD. We abbreviate the last term from each equation as I l (x l , y l ) and I r (x r , y r ), respectively.
The Coulomb interactions I l (x l , y l ) and I r (x r , y r ) in Equations (2) and (3) are computationally expensive to evaluate numerically at each time step of the dynamical evolution of the DQD states. Therefore, the Coulomb interactions are approximated by Equations (4) and (5), respectively. Let ψ l (x l , y l , t) and ψ r (x r , y r , t) denote the wavefunctions of the left and right DQDs, respectively. The qubit states |0⟩ i and |1⟩ i are represented by ϕ i,0 (x i , y i ) and ϕ i,1 (x i , y i ), which can be constructed as a linear combination of the bonding and anti-bonding eigenstates of the non-interacting DQD. The vectors R i,0 and R i,1 , representing the positions of the qubit states |0⟩ i and |1⟩ i of the ith DQD (here, i ∈ {l, r} indicates the left and right DQDs), are taken at the centroid of each QD. For the dynamics of a quantum state, a finite-difference scheme with position-dependent effective mass and the Crank-Nicolson method [51] is used to solve the time-dependent Schrödinger equation of the electronic potential model, as described in Equations (2)-(5).
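To illustrate the time-stepping scheme, the sketch below implements a Crank-Nicolson step for a reduced one-dimensional Schrödinger equation with a constant mass and units in which ħ = m = 1. It is not the paper's full 2D solver with position-dependent effective mass, only a compact demonstration of the method's unitary update.

```python
# Sketch: a single Crank-Nicolson step for the 1D time-dependent Schrodinger
# equation, i dpsi/dt = H psi, in units with hbar = m = 1. This is a reduced
# illustration of the scheme; the paper's solver is 2D with a position-
# dependent effective mass.
import numpy as np

def crank_nicolson_step(psi, V, dx, dt):
    n = psi.size
    # Tridiagonal Hamiltonian: H = -(1/2) d2/dx2 + V, with Dirichlet boundaries
    main = 1.0 / dx**2 + V
    off = -0.5 / dx**2 * np.ones(n - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    I = np.eye(n)
    A = I + 0.5j * dt * H          # implicit half step
    B = I - 0.5j * dt * H          # explicit half step
    return np.linalg.solve(A, B @ psi)

# Example: a Gaussian wave packet in a box
x = np.linspace(0.0, 10.0, 200)
dx = x[1] - x[0]
V = np.zeros_like(x)
psi = np.exp(-(x - 3.0) ** 2) * np.exp(1j * 2.0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)
for _ in range(100):
    psi = crank_nicolson_step(psi, V, dx, dt=0.005)
print("norm after 100 steps:", np.sum(np.abs(psi) ** 2) * dx)  # ~1 (unitary scheme)
```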
Matrix Model
The charge qubits represented by two DQDs, with the Coulomb interaction of electrons between the DQDs, can be modeled by a 4 × 4 matrix. The Hamiltonian matrix in Ref. [11] is modified to become Equation (6). Here, the Hamiltonian matrix is written in the basis |00⟩, |01⟩, |10⟩ and |11⟩, where ε l (resp. ε r ) is the energy detuning and ∆ l (resp. ∆ r ) is twice the inter-QD tunneling rate for the left (resp. right) DQD. The parameter ε can be modulated by the external electric field, and ∆ is obtained as the energy difference between the bonding and anti-bonding states of the non-interacting DQD from the electronic potential model above. σ x and σ z are the Pauli X and Z matrices, respectively. We note that the parameters J 1 , J 2 , and J 3 are the inter-qubit coupling energies defined by the Coulomb interaction: J 1 = e 2 /4πε 0 r 1 , J 2 = e 2 /4πε 0 r 2 and J 3 = e 2 /4πε 0 r 3 . These correspond to the distances r 1 = L x + a − d, r 2 = L x + a, and r 3 = L x + a + d, as illustrated in Figure 1. Hence, J 1 , J 2 , and J 3 are related.
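As a small numerical illustration of these couplings, the sketch below evaluates J 1 , J 2 and J 3 from the distances r 1 , r 2 and r 3 defined above, using the vacuum permittivity (dielectric screening is neglected, as discussed later in the text). The value chosen for the inter-DQD distance a is an arbitrary example.

```python
# Sketch: evaluating the inter-qubit Coulomb couplings J_i = e^2 / (4 pi eps0 r_i)
# for the distances r1 = Lx + a - d, r2 = Lx + a, r3 = Lx + a + d given in the
# text (vacuum permittivity; screening is neglected).
from scipy.constants import e, epsilon_0, pi

def coulomb_J_eV(r_nm):
    r = r_nm * 1e-9                        # nm -> m
    return e / (4.0 * pi * epsilon_0 * r)  # e^2/(4 pi eps0 r), expressed in eV

Lx, a, d = 9.0, 2.0, 3.0                   # nm; a = 2.0 nm is an illustrative value
r1, r2, r3 = Lx + a - d, Lx + a, Lx + a + d
J1, J2, J3 = (coulomb_J_eV(r) for r in (r1, r2, r3))
print(f"J1 = {J1:.3f} eV, J2 = {J2:.3f} eV, J3 = {J3:.3f} eV")
```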
The matrix Hamiltonian H 2q in Equation (6) can be extracted for the subsystems, whose Hamiltonians are given by Equations (7) and (8), respectively.
The solution of the time-dependent Schrödinger equation with the governing Hamiltonian H 2q in Equation (6) can be written as a superposition in the basis |00⟩, |01⟩, |10⟩ and |11⟩. However, Equations (7) and (8) constitute effective Hamiltonians for each subsystem, whose solutions can be expressed as |L⟩ = a(t)|0⟩ l + b(t)|1⟩ l and |R⟩ = c(t)|0⟩ r + d(t)|1⟩ r , respectively. Then, the solution of the composite system can be approximated as a product state |Ψ⟩ ps = |L⟩ ⊗ |R⟩. In some cases, such as an entangled state, the state cannot be written as a product of the subsystems; hence, Equations (7) and (8) cannot be used. If the state can be written as a product of the subsystems, the solution of H 2q in Equation (6) and that obtained from the product of the solutions of H l in Equation (7) and H r in Equation (8) are the same (see Section S1 of the Supplementary Materials).
In this work, the matrix model in Equations (7) and (8) is used to compare with the electronic potential model in Equations (2) and (3). The fourth order Runge-Kutta method [52] is used for solving the time-dependent Schrödinger equation of the matrix model.
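A compact sketch of such an integration is shown below: a fourth-order Runge-Kutta step for i dψ/dt = H(t)ψ in units with ħ = 1, applied to a generic Hamiltonian matrix. The 2 × 2 single-qubit example Hamiltonian and its parameter values are illustrative only; the paper's actual matrix model uses the 4 × 4 Hamiltonian of Equation (6) or the effective Hamiltonians of Equations (7) and (8).

```python
# Sketch: fourth-order Runge-Kutta integration of a matrix-model Schrodinger
# equation, i dpsi/dt = H(t) psi, in units with hbar = 1. The 2x2 example
# Hamiltonian is illustrative, not the paper's Eq. (6).
import numpy as np

def rk4_step(psi, t, dt, hamiltonian):
    def deriv(p, tau):
        return -1j * hamiltonian(tau) @ p
    k1 = deriv(psi, t)
    k2 = deriv(psi + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = deriv(psi + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = deriv(psi + dt * k3, t + dt)
    return psi + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Example: a single charge qubit, H = (eps/2) sigma_z + (Delta/2) sigma_x
eps, Delta = 0.0, 0.01     # illustrative numbers (energy units)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = lambda t: 0.5 * eps * sz + 0.5 * Delta * sx

psi = np.array([1.0, 0.0], dtype=complex)   # start in |0>
dt, steps = 0.1, 2000
for n in range(steps):
    psi = rk4_step(psi, n * dt, dt, H)
print("P(|1>) after evolution:", abs(psi[1]) ** 2)
```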
CNOT Operation
Two DQDs with the Coulomb interaction of electrons are simulated to assess the feasibility of CNOT operation. Here, the right qubit (i.e., the right DQD) is used to control the left qubit (i.e., the left DQD). The left qubit is prepared in the initial state |0⟩ l , while the right (control) qubit can be prepared in the initial state |0⟩ r or |1⟩ r . We need the initial state |01⟩ to flip to the state |11⟩, while the initial state |00⟩ should not change under the operation. The states are initialized by the applied external electric field: the right (control) qubit is fixed in its initial state by a strong electric field, while the left qubit is operated by a square electric-field pulse (a strong electric field for state initialization, rapidly switched to zero electric field for the operation time).
Transition Probability
In the electronic potential model, the DQDs have several electron eigenstates. For one DQD without an external electric field, the two lowest electron eigenstates are the bonding state φ i,b (x i , y i ) and the anti-bonding state φ i,ab (x i , y i ), respectively. The system is assumed to be a two-level system representing a qubit, and the higher levels are treated as an environment which can induce quantum leakage [26]. Since the state is written with probability amplitudes in position space, we define the qubit states |0⟩ i and |1⟩ i in position space as ϕ i,0 (x i , y i ) and ϕ i,1 (x i , y i ), respectively (i denotes the left or right qubit). Hence, the qubit states can be constructed as a linear combination of the bonding and anti-bonding eigenstates of the DQD.
The dynamics of states for the electronic potential Hamiltonian in Equations (2) and (3) assume the form |Ψ⟩ ps = |L⟩ ⊗ |R⟩ = ψ l (x l , y l , t) ⊗ ψ r (x r , y r , t), with the aforementioned |L⟩ and |R⟩. Therefore, the probability amplitudes in the qubit states |0⟩ i and |1⟩ i are given by the projections of ψ i onto ϕ i,0 and ϕ i,1 , respectively.
Parameter Optimization for Energy Tuning of DQD
In this section, the DQD is modeled by the 2D electronic potential, as mentioned in the previous section. The electron eigenstates are computed, but we are mainly interested in the first bonding and anti-bonding eigenstates obtained by tuning the parameters (i.e., b, d, V) in the model. The numerical values of the potential and the effective mass of MoS 2 are obtained from Ref. [21]. The QD base length b is varied from 1.0 nm to 2.2 nm, with the height fixed at h = b/2. The DQD energies converge when the supercell lengths (L x and L y ) are sufficiently large, as shown in Section S2 of the Supplementary Materials. In this simulation, a supercell of lengths L x = 9.0 nm and L y = 4.5 nm is selected.
In Figure 2, the wavefunctions of the DQD are shown for the bonding state φ b (x, y) and the anti-bonding state φ ab (x, y), for a QD base length b = 2.0 nm and an inter-QD distance d = 3.0 nm, where x ∈ {x l , x r } and y ∈ {y l , y r } depending on whether the left or right DQD is considered. The energies of the bonding and anti-bonding states are shown in Figure 3, where the inter-QD distance d is varied along the x-axis. In Figure 3, the solid and dashed lines of the same color (or symbol) denote the bonding and anti-bonding energies, respectively, whereas different colors (and symbols) represent different lengths of the QD base b. The electron energy gap ∆ is defined as the energy difference between the bonding and anti-bonding eigenstates, as plotted in Figure 4. The two QDs remain just separated at the smallest distance d = b (where the QD bases touch); at this point, the energy gap is maximum.
We define the potential parameter V = V out − V in , so that V = 0.915 eV for MoS 2 . Then, V is varied to cover the range of 0.60-2.00 eV, because some 2D materials have energy band gaps around 0-2 eV. Such variation can account for similar materials other than MoS 2 . Moreover, external strain can adjust the band gap by a few tenths of an eV around the original value [17][18][19][20]. Then, the energy gap is analyzed as a function of V and the other engineering parameters. Below, we define a fitting function for the energy gap as a function of V, b, and d, characterized by a maximum gap ∆ max (V, b) and an exponential decay with d governed by the exponent α(V, b). The motivation for fitting with this form stems from analyzing 1D double quantum wells with the WKB approximation [53,54], in which case the energy gap decays exponentially as a function of the inter-barrier width. Here, the decay rate also depends on the parameters of a single quantum dot; see Section S3 in the Supplementary Materials. In our case, the contour plots of the maximum energy gap ∆ max (V, b) and the exponential component α (V, b) are presented in Figure 5a,b, respectively. In Figure 6, for a fixed V, the exponent α depends linearly on b. We found that the energy levels always decrease as b increases, but the maximum energy gap ∆ max is not monotonic in b. Additionally, the energy gap always decreases when the inter-QD distance d increases, and the decay rate depends more strongly on V or b. Since the energy gap is a parameter in the matrix model for the dynamics simulation, the results of the energy gap dependence on the structural parameters and potential will be used in the matrix model. We emphasize that the energy gap discussed above is the energy difference between the electronic states, not the energy difference of the electron and hole.
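For concreteness, the sketch below fits hypothetical gap-versus-distance data with a simple exponential decay of the WKB-motivated type described above. The specific functional form ∆(d) = ∆ max exp(−α(d − b)) and the sample values are assumptions for illustration; the exact fitting expression used in the paper is the one referred to (but not reproduced) above.

```python
# Sketch: fitting the computed energy gap versus inter-QD distance d with an
# exponential decay, Delta(d) ~ Delta_max * exp(-alpha * (d - b)), for fixed V
# and b. The functional form and the sample gap values are illustrative
# assumptions consistent with the WKB-type behaviour described in the text.
import numpy as np
from scipy.optimize import curve_fit

b = 2.0                                              # QD base length (nm)
d = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5])         # inter-QD distance (nm)
gap = np.array([80.0, 45.0, 26.0, 15.0, 8.5, 5.0])   # hypothetical gaps (meV)

def model(dist, delta_max, alpha):
    return delta_max * np.exp(-alpha * (dist - b))

(delta_max, alpha), _ = curve_fit(model, d, gap, p0=[80.0, 1.0])
print(f"Delta_max = {delta_max:.1f} meV, alpha = {alpha:.2f} nm^-1")
```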
Dynamics of States on Bloch Sphere
The dynamics of states are simulated to examine the CNOT operation with both models mentioned earlier. The solutions are written in the form of a product of the subsystems: |Ψ⟩ ps = |L⟩ ⊗ |R⟩ = ψ l (x l , y l , t) ⊗ ψ r (x r , y r , t). The right qubit |R⟩ is fixed in its initial state (either |0⟩ r or |1⟩ r ) as the controlling qubit. The left qubit |L⟩ is prepared in the initial state |0⟩ l and operated by the external electric field. As examples, Figure 8 depicts the dynamics of states in the electronic potential model via the probability in the two-qubit states as a function of time for the initial states |01⟩ and |00⟩, respectively. The ideal qubit state, as defined in Section 2, is assumed to be a linear combination of the two-level system formed by the bonding and anti-bonding eigenstates of the DQD. Then, the initial state |0⟩ or |1⟩ is prepared by applying an external electric field along the x-axis. If the applied electric field is weak, the initial state is in a superposition of |0⟩ and |1⟩. If it is too strong, the initial state may leak out of the QD, or it may remain inside the QD but in a superposition of higher energy levels other than the desired bonding and anti-bonding states. Therefore, the electric field strength should be varied for suitable preparation of the initial state of each DQD, as can be seen in Section S4 of the Supplementary Materials. Furthermore, the dynamics of the states |L⟩ and |R⟩ can be represented by trajectories on the Bloch sphere, whose coordinates are calculated from x l/r (t) = Tr(ρ l/r (t)σ x ), y l/r (t) = Tr(ρ l/r (t)σ y ) and z l/r (t) = Tr(ρ l/r (t)σ z ) [2]. Here, the density matrices are ρ l (t) = |L⟩⟨L| and ρ r (t) = |R⟩⟨R|. In Figure 9, the dynamics of states are represented on the Bloch sphere, where the red and blue colors indicate the trajectories of the initial states |01⟩ and |00⟩, respectively. We note that if there is no inter-qubit interaction, the state |L⟩ precesses around the x-axis of the Bloch sphere, because it is an exact solution of the Hamiltonian in Equations (7) and (8) with J 1 = J 2 = J 3 = 0 eV. The inter-qubit interaction makes |L⟩ precess around some axis lying in the xz plane, and the radius of the precession depends on the axis and the initial state.

Figure 8. The solution of the composite system is written as a product state |Ψ⟩ ps = |L⟩ ⊗ |R⟩. Red and blue colors are the trajectories of the initial states |01⟩ and |00⟩, respectively. The right qubit state |R⟩ is fixed in either |0⟩ r or |1⟩ r as a control qubit.
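The sketch below shows how such Bloch-sphere coordinates can be extracted from a single-qubit state via x = Tr(ρσ x ), y = Tr(ρσ y ) and z = Tr(ρσ z ); the example amplitudes are arbitrary.

```python
# Sketch: extracting Bloch-sphere coordinates x = Tr(rho sigma_x),
# y = Tr(rho sigma_y), z = Tr(rho sigma_z) from a single-qubit state
# |L> = a|0> + b|1>, as used for the trajectories in Figure 9.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_coords(a, b):
    psi = np.array([a, b], dtype=complex)
    psi = psi / np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())            # rho = |psi><psi|
    return tuple(np.real(np.trace(rho @ s)) for s in (sx, sy, sz))

# Example: an equal superposition lies on the equator of the Bloch sphere
print(bloch_coords(1.0, 1.0))   # -> approximately (1.0, 0.0, 0.0)
```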
The matrix model can be utilized to simulate the same situation in significantly less time than the electronic potential model (by a factor of 10 −3 or better). The comparison of simulated results from the two models is demonstrated in Figure 10 and in Section S5 of the Supplementary Materials. Owing to its increased effectiveness and reduced computational time compared to the electronic potential model, the matrix model is used to investigate the dynamics of the two interacting DQDs, in particular the performance of the CNOT gate operation. It is worth noting that the matrix model contains only energy variables, which can be calculated with high precision by methods other than finite differences. If the energy parameters are accurately determined, the matrix model prediction will improve.
CNOT Gate Efficiency
The two DQDs with the Coulomb interaction of electrons are used to construct a CNOT gate of two charge qubits, where the right qubit (DQD) is used to control the left one. For a successful CNOT operation, the initial state |01⟩ has to flip to the state |11⟩, while the initial state |00⟩ remains unchanged under the operation. The results are shown in Figure 8. Evidently, the operation is not perfect. We define the parameters P + and P − as the maximum change in the flipping probability for the initial state |01⟩ and for the initial state |00⟩, respectively. Ideally, P + should tend to 1 and P − to 0 for a high-efficiency CNOT operation. Hence, the efficiency of the CNOT operation is defined as ∆P = P + − P − , which takes values between 0 and 1, with ∆P = 1 for a perfect CNOT operation.
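A minimal sketch of this bookkeeping is shown below; the probability traces are hypothetical arrays standing in for the simulated flip probabilities of the initial states |01⟩ and |00⟩.

```python
# Sketch: evaluating the CNOT efficiency Delta_P = P_plus - P_minus from
# simulated flip-probability traces. p_flip_01(t) and p_flip_00(t) are assumed
# to be the probabilities of finding the left qubit flipped when starting from
# |01> and |00>, respectively (hypothetical arrays below).
import numpy as np

p_flip_01 = np.array([0.0, 0.3, 0.7, 0.9, 0.6])      # initial state |01>
p_flip_00 = np.array([0.0, 0.05, 0.1, 0.08, 0.02])   # initial state |00>

P_plus = p_flip_01.max() - p_flip_01.min()    # maximum change for |01>
P_minus = p_flip_00.max() - p_flip_00.min()   # maximum change for |00>
delta_P = P_plus - P_minus                    # equals 1 for a perfect CNOT
print(f"Delta_P = {delta_P:.2f}")
```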
The CNOT operation efficiency ∆P is studied by varying parameters such as the QD base length b, the inter-QD distance d, the potential V, and the inter-DQD distance a in both models. As mentioned earlier, the effective mass and potential parameters of MoS 2 in Ref. [21] are used, but they can be changed to other artificial potential values V (see Section S6 in the Supplementary Materials). In Figures 11 and 12, the inter-DQD distance a is varied along the x-axis; the separate panels correspond to different values of the QD base length b, and the inter-QD distance d is varied with different symbols (colors).
The two models are consistent, especially when the inter-qubit interaction is weak, e.g., when the inter-DQD distance a and the inter-QD distance d are large. Additionally, the agreement between the two models is further improved in the regime of higher potential V (see Section S6 in the Supplementary Materials). The quantity ∆P is sensitive to the DQD parameters, and it exhibits a local maximum when the DQD parameters are varied. Each curve in Figures 11 and 12 is labeled by the corresponding energy gap ∆, which is calculated from the aforementioned DQD parameters. For each curve, the peak of ∆P depends on the inter-DQD distance a, which, in turn, corresponds to a set of inter-qubit coupling energies {J 1 , J 2 , J 3 }. The maximum values of ∆P are then extracted as a function of ∆, and their relationship is plotted in Figure 13. Thus, for a selected curve of ∆P, there is a set of {J 1 , J 2 , J 3 } which gives the maximum ∆P. As mentioned earlier, J 1 , J 2 , and J 3 are related, and J 2 lies between J 1 and J 3 . We choose J 2 to represent the set of inter-qubit coupling energies when investigating the peak of ∆P with respect to the strength of the interaction.
In principle, J 1 or J 3 could have been chosen, but it is more reasonable to use the parameter near the average, which also corresponds to the distance between the centers of the two DQDs (although there is no charge at the centers). The parameter J 2 that yields the maximum ∆P is plotted as a function of the energy gap ∆ in Figure 14. Both J 2 and ∆ affect ∆P. At the maximum ∆P, J 2 and ∆ exhibit a linear relationship. A high energy gap requires a high inter-qubit coupling energy, but the inter-qubit coupling energy reaches its maximum when the two DQDs join at the base of the supercell (a = 0). Beyond this point, J 2 cannot increase even as the energy gap increases, as shown by the saturation of the curves in Figure 14. Consequently, the maximum ∆P decreases at higher energy gaps, as shown in Figure 13. In the matrix model, ∆P is quite sensitive to the relation among the set of inter-qubit coupling energies. Hypothetically, if we change the parameters J 1 , J 2 , and J 3 (while conforming to the aforementioned relationship), say, increasing J 1 by 5% to 15% while keeping the others unchanged, then ∆P can increase by almost two- to three-fold. In extreme cases in the matrix model, such as small J 2 ≈ J 3 and J 1 very strong compared with ∆, the CNOT operation shows high efficiency, with ∆P reaching nearly 1 (see Section S7 in the Supplementary Materials). However, in a realistic model, J 1 , J 2 , and J 3 cannot change independently. Moreover, the inter-qubit interaction in this simulation is not precisely determined, since other effects, such as the screening of the Coulomb interaction when the DQDs are placed in a permittivity-dependent environment [55][56][57][58], are not considered.
In Section 3.2, we have shown that the electronic potential and matrix models yield consistent dynamics of states, and likewise consistent CNOT gate efficiency as indicated by ∆P. However, in the literature, the CNOT gate efficiency is often reported as the average fidelity F av comparing the ideal CNOT gate with the experimental or simulated one. Let M ideal CNOT and M sim CNOT denote the matrices of the CNOT operators for the ideal gate and the one constructed from simulation, respectively. According to Ref. [59], the average fidelity F av can be calculated from M = M ideal CNOT † M sim CNOT , with n = 4 for the dimension of the Hilbert space of two qubits. In dynamical simulation by the matrix model, the operator M sim CNOT can be constructed, and so the average fidelity F av as a function of the inter-DQD distance a can be computed along with ∆P, as shown in Figure 15. In such cases, the peaks of the average fidelity are around 54% to 57%. To put this in perspective, the fidelity obtained in this work is slightly lower than previously reported in DQD experiments [11,12]. It should be emphasized that such a comparison is not strictly meaningful, since the materials and methodology are different, but it indicates that a sensible figure is obtained. We remark that ∆P in Figure 8 is the change of probability from the highest to the lowest value, which may occur at different times depending on the initial state. For the constructed M sim CNOT , the operation time which yields the maximum flipping probability of the initial state |01⟩ is also used as the operation time for the other initial states. If ∆P is computed at a fixed operation time (e.g., that of the M sim CNOT operation), then ∆P also shows behaviors similar to the average fidelity (e.g., discontinuity), shown with dashed lines in Figure 15. Thus, both F av and ∆P can indicate the efficiency of the CNOT gate operation. We additionally remark that the discontinuity of the fidelity is sensitive to the inter-qubit interaction parameters {J 1 , J 2 , J 3 }, in the sense that arbitrarily increasing J 1 by a constant multiple, while keeping the other parameters fixed, makes the discontinuity disappear. In summary, two interacting DQDs with the Coulomb interaction in a heterostructure of materials such as MoS 2 can be constructed and optimized for CNOT gate operation.
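For illustration, the sketch below evaluates an average gate fidelity of this kind. The expression used, F av = [Tr(MM†) + |Tr M|²]/[n(n + 1)] with M = M ideal CNOT † M sim CNOT and n = 4, is the widely used average-fidelity formula for comparing a simulated gate with an ideal unitary, which we assume is the expression intended by Ref. [59]; the "simulated" CNOT matrix below is only a placeholder with a small artificial phase error.

```python
# Sketch: average gate fidelity between the ideal CNOT and a simulated one.
# The formula F_av = [Tr(M M^dagger) + |Tr M|^2] / [n(n+1)], with
# M = M_ideal^dagger M_sim and n = 4, is the standard average-fidelity
# expression for comparing two gates; we assume it is the one referred to in
# Ref. [59]. The "simulated" matrix is a placeholder (ideal CNOT with a small
# phase error).
import numpy as np

M_ideal = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=complex)

phase_error = 0.1
M_sim = M_ideal @ np.diag(np.exp(1j * phase_error * np.arange(4)))

def average_fidelity(M_ideal, M_sim):
    n = M_ideal.shape[0]
    M = M_ideal.conj().T @ M_sim
    return (np.trace(M @ M.conj().T).real + abs(np.trace(M)) ** 2) / (n * (n + 1))

print(f"F_av = {average_fidelity(M_ideal, M_sim):.4f}")
```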
Conclusions
The DQDs are modeled by a heterostructure of two-dimensional MoS 2 consisting of triangular 1T-phase regions embedded in a rectangular 2H-phase supercell. The two interacting DQDs are investigated for the feasibility of CNOT gate operation.
The Hamiltonian of the system is modeled by the 2D electronic potential and the 4 × 4 matrix models. In the electronic potential model with a spatially dependent effective mass, the DQDs are studied by finite differences for energy tuning, where the energy difference between the bonding and anti-bonding electronic eigenstates is analyzed as a function of the electronic potential V and the QD base length b. The energy gaps can be explained well with the WKB approximation for double quantum wells, showing an exponential decay with the inter-QD distance d that becomes more rapid when V and b increase. This information can be used to examine other QDs of different sizes or material make-up.
The two DQDs with the inter-DQD Coulomb interaction of electrons can be used to construct two interacting charge qubits with possible CNOT gate operation. The performance of the CNOT operation via the dynamics of the two charge qubits is simulated by the Crank-Nicolson method in the potential model and by the fourth-order Runge-Kutta method in the matrix model. Comparing the computational techniques, the matrix model has a lower computational cost than the potential model, leading to faster calculations. The results of the two models are in excellent agreement, and both show low CNOT operation efficiency ∆P with the pure Coulomb interaction. When the DQD parameters are varied, the CNOT operation efficiency ∆P exhibits a local maximum, which suggests that the engineering parameters can be tuned to optimize it. Additionally, the CNOT operation efficiency is reported with the average fidelity F av , which exhibits the same trend as ∆P.
Finally, we believe that two interacting double quantum dots can be viable candidates for CNOT gate operation after selecting optimized DQD parameters, and our work sheds some light on how the behaviors of two interacting DQDs used for CNOT operation depend on these parameters. This computational study can be beneficial in designing experiments on DQDs.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: | 7,183.4 | 2022-10-01T00:00:00.000 | [
"Physics"
] |
Riociguat Reduces Infarct Size and Post-Infarct Heart Failure in Mouse Hearts: Insights from MRI/PET Imaging
Aim Stimulation of the nitric oxide (NO) - soluble guanylate cyclase (sGC) - protein kinase G (PKG) pathway confers protection against acute ischaemia/reperfusion injury, but its more chronic effects in reducing post-myocardial infarction (MI) heart failure are less well defined. The aim of this study was to determine not only whether the sGC stimulator riociguat reduces infarct size but also whether it protects against the development of post-MI heart failure. Methods and Results Mice were subjected to 30 min ischaemia via ligation of the left main coronary artery to induce MI, and either placebo or riociguat (1.2 µmol/l) was given as a bolus 5 min before and 5 min after the onset of reperfusion. After 24 hours, both late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) and 18F-FDG positron emission tomography (PET) were performed to determine infarct size. In the riociguat-treated mice, the resulting infarct size was smaller (8.5±2.5% of total LV mass vs. 21.8±1.7% in controls, p = 0.005) and LV systolic function analysed by MRI was better preserved (60.1±3.4% of pre-ischaemic vs. 44.2±3.1% in controls, p = 0.005). After 28 days, LV systolic function assessed by echocardiography in the treated group was still better preserved (63.5±3.2% vs. 48.2±2.2% in controls, p = 0.004). Conclusion Taken together, mice treated acutely at the onset of reperfusion with the sGC stimulator riociguat have smaller infarct sizes and better long-term preservation of LV systolic function. These findings suggest that sGC stimulation during reperfusion therapy may be a powerful therapeutic strategy for preventing post-MI heart failure.
Introduction
Chronic heart failure (CHF) is a serious consequence of myocardial infarction (MI) and is associated with high mortality and morbidity. Reducing infarct size acutely after an ischemic event is assumed to reduce the risk of detrimental post-MI remodelling [1] and ensuing CHF. While much effort has gone into the identification of acutely protective strategies to reduce infarct size, relatively little has been directed to actually documenting their long-term post-MI effects.
The NO - sGC - PKG pathway is known to play an important role in acute protection against cardiac reperfusion injury. We and others have shown that an increase of cyclic GMP via phosphodiesterase 5 inhibition can afford powerful cardioprotection when applied at the onset of reperfusion. [2,3] The same is seen with the stimulation of soluble or particulate guanylate cyclase. [4,5] All these interventions seem to require PKG as their downstream target. [6,7] NO not only can have a trigger role in this pathway at the level of sGC but also appears to be a downstream effector by directly S-nitrosylating protective proteins. [8,9].
Recently, a new class of drugs, the so-called sGC stimulators, entered clinical development for the treatment of pulmonary hypertension. Stimulators of sGC have a dual mode of action: they sensitise sGC to endogenous NO by stabilizing NO-sGC binding and also directly stimulate sGC independently of NO via a second binding site. The sGC stimulator riociguat has recently undergone Phase III clinical trials in patients with several subforms of pulmonary arterial hypertension (PAH) and with chronic thromboembolic pulmonary hypertension (CTEPH). Exercise capacity was the primary endpoint for these studies, and in all cases riociguat increased the patients' 6-minute walking distance. [10][11][12] Additionally, improvements were observed across secondary endpoints, including pulmonary hemodynamics, functional class and time to clinical worsening. Remarkably, riociguat is the first drug that has consistently demonstrated efficacy in both CTEPH and PAH.
Riociguat also produced positive clinical effects in smaller proof-of-concept studies in patients with pulmonary hypertension secondary to left heart failure, interstitial lung disease and chronic obstructive pulmonary disease. [13] In animal models, riociguat has been shown to protect against cardiac and renal damage from hypertension, chronic renal failure and volume overload. [13][14][15].
In this study we tested the effects of riociguat on MI size and post-MI CHF development in a mouse model of ischaemia/reperfusion. Gadolinium-enhanced magnetic resonance imaging (MRI) as well as FDG-PET was used to determine the resulting infarct size post-MI, while echocardiography was used to measure LV function over a 28-day recovery period.
Methods
All procedures were conducted in accordance with the Animals (Scientific Procedures) Act 1986 (PPL 80/2393) and the University of Cambridge Policy on the Use of Animals in Scientific Research and approved by the Home Office (UK) Animals Scientific Procedures Department (ASPD).
In vivo mouse model of myocardial infarction
Infarct size following ischaemia/reperfusion was measured in an in situ open-chest mouse model as previously described. [16] Briefly, male C57/BL6 mice were anaesthetised with either pentobarbital (2 h reperfusion model) or gaseous isoflurane (24 h reperfusion model) and subjected to 30 min occlusion via a snare around the left anterior descending branch of the left coronary artery, followed by either 2 h or 24 h reperfusion. Mice received either intravenous saline or 1.2 mmol/l riociguat 5 min before the onset of reperfusion via tail vein injection. L-NAME or KT5823 was given 10 min prior to the riociguat treatment. Cardiac troponin I was measured in blood serum taken prior to heart removal at the end of each experiment. More details are described in the online supplement.
Blood pressure measurement
Blood pressure was either measured non-invasively using a tail cuff apparatus or through left ventricular catheterization via the right carotid artery using a 1.2 F pressure-catheter (Scisense Inc., London, Canada).
Echocardiography assessment of cardiac function
Transthoracic echocardiography was performed on mice anesthetized with 2% isoflurane, using a Vevo 770 ultrasound system (Visual Sonics, Toronto, Canada). Hearts were visualised in the two-dimensional short-axis plane and analysis was performed in M-mode in a consistent plane at the level of the papillary muscles. Ejection fraction (EF) was calculated from end-systole and end-diastole measurements in at least three repeated cardiac cycles. Fractional shortening was calculated by Simpson's method.
MRI and PET imaging
Animals were anaesthetised with gaseous isoflurane both for induction (3% in 1 l/min O 2 ) and maintenance (1.5-2% in 1 l/min O 2 ). A pressure sensor for respiration rate was used to monitor anaesthesia depth; the rate was maintained in the range 30-45 breaths per minute. Prospective gating of the MRI sequences was achieved with ECG monitoring. Body temperature was monitored using a rectal thermometer and a flowing-water heating blanket was used to maintain animal temperature at 37 °C throughout the experiment.
Animals were transferred on the same bed to the Cambridge PET-MRI scanner. [20] Injection of 25 MBq 18 F-FDG was performed in situ, and list-mode gated PET acquired continuously for 45 min.
Image Analysis
Delineation of the LV at each phase of the cardiac cycle excluded the papillary muscles and trabeculae throughout. The regions from each slice were combined using Simpson's rule to provide LV mass, end diastolic volume (EDV), end systolic volume (ESV), stroke volume (SV) and ejection fraction (EF) using Segment v1.9. [21] The infarcted regions were manually delineated on the IR images. Values are expressed as ratios of the LV mass as measured from the cine images.
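As a small illustration of this slice-summation step, the sketch below combines hypothetical per-slice endocardial areas into EDV, ESV, stroke volume and ejection fraction; the areas and slice thickness are placeholder values, not measurements from this study.

```python
# Sketch: combining per-slice endocardial areas into LV volumes and ejection
# fraction by slice summation (Simpson's rule as commonly applied in cardiac
# MRI). The areas and slice thickness below are hypothetical values.
import numpy as np

slice_thickness_mm = 1.0
area_ed_mm2 = np.array([4.0, 7.0, 9.0, 10.0, 9.0, 6.0, 3.0])   # end-diastole
area_es_mm2 = np.array([2.0, 3.5, 4.5, 5.0, 4.5, 3.0, 1.5])    # end-systole

EDV = area_ed_mm2.sum() * slice_thickness_mm    # mm^3 == microlitres
ESV = area_es_mm2.sum() * slice_thickness_mm
SV = EDV - ESV
EF = 100.0 * SV / EDV
print(f"EDV = {EDV:.1f} uL, ESV = {ESV:.1f} uL, SV = {SV:.1f} uL, EF = {EF:.1f}%")
```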
DENSE MRI images were analysed with in-house code using Matlab: phase images were extracted and unwrapped [22] to obtain displacement maps. The Green strain tensor (E) was calculated from the Jacobian matrix (F) by means of E = (FᵀF − I)/2. The tensor was then decomposed into radial and circumferential strain components. Global strains were obtained by integrating values over the LV.
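A minimal Python sketch of this strain computation is given below (the original analysis used in-house Matlab code). It builds the deformation gradient from a synthetic displacement field, forms the Green strain tensor E = (FᵀF − I)/2, and projects it onto radial and circumferential directions; the displacement field is an artificial uniform contraction used purely for illustration.

```python
# Sketch: computing the Green strain tensor E = (F^T F - I)/2 from a 2D
# displacement field and projecting it onto radial/circumferential directions
# about the centre. The synthetic displacement field below is illustrative.
import numpy as np

nx = ny = 64
x = np.linspace(-1.0, 1.0, nx)
y = np.linspace(-1.0, 1.0, ny)
X, Y = np.meshgrid(x, y, indexing="ij")

# Synthetic displacement: uniform 5% contraction towards the centre
ux, uy = -0.05 * X, -0.05 * Y

# Deformation gradient F = I + grad(u); gradients via finite differences
dux_dx, dux_dy = np.gradient(ux, x, y, edge_order=2)
duy_dx, duy_dy = np.gradient(uy, x, y, edge_order=2)

E_rr = np.zeros_like(X)
E_cc = np.zeros_like(X)
for i in range(nx):
    for j in range(ny):
        F = np.array([[1.0 + dux_dx[i, j], dux_dy[i, j]],
                      [duy_dx[i, j], 1.0 + duy_dy[i, j]]])
        E = 0.5 * (F.T @ F - np.eye(2))                   # Green strain tensor
        theta = np.arctan2(Y[i, j], X[i, j])
        e_r = np.array([np.cos(theta), np.sin(theta)])    # radial direction
        e_c = np.array([-np.sin(theta), np.cos(theta)])   # circumferential
        E_rr[i, j] = e_r @ E @ e_r
        E_cc[i, j] = e_c @ E @ e_c

print("mean radial strain:", E_rr.mean(),
      "mean circumferential strain:", E_cc.mean())
```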
FDG-PET images were reconstructed using a 3D filtered back-projection algorithm in 8 cardiac phases and temporal frames of 15 min. The cardiac phase corresponding to end-diastole was selected for each subject. Images from PET and MRI were co-registered manually with a rigid transformation using the SPM-Mouse bulk registration tool. [23].
Infarct size was assessed in the FDG-PET images by manual delineation in the final frame. Voxels where intensity was below 50% of the maximum heart uptake were considered non-viable, following Stegger et al. [24].
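The thresholding rule can be expressed compactly as in the sketch below, where the uptake volume and the heart mask are hypothetical placeholders.

```python
# Sketch: classifying voxels as non-viable when their FDG uptake falls below
# 50% of the maximum uptake within the heart, following the rule described in
# the text. The uptake array and heart mask are hypothetical placeholders.
import numpy as np

uptake = np.random.default_rng(0).uniform(0.0, 1.0, size=(32, 32, 16))
heart_mask = np.zeros_like(uptake, dtype=bool)
heart_mask[8:24, 8:24, 4:12] = True           # placeholder LV segmentation

threshold = 0.5 * uptake[heart_mask].max()
non_viable = heart_mask & (uptake < threshold)
infarct_fraction = non_viable.sum() / heart_mask.sum()
print(f"Infarct size: {100.0 * infarct_fraction:.1f}% of the LV mask")
```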
Relative mRNA quantification by RT-PCR
Details can be found in the online supplement.
Data analysis
All data are presented as mean ± standard error of the mean (SEM). The infarct size is shown as a percentage of the area at risk. The blood pressure values are plotted as percentages of the respective baselines. Differences among groups were compared by one-way ANOVA with Tukey's post hoc test. A value of p < 0.05 was considered significant.
Effects of riociguat on infarct size
Mice treated with riociguat at the onset of reperfusion showed a marked reduction of infarct size after 30 min LCA occlusion followed by 2 h reperfusion. The protection was still present when the NOS inhibitor L-NAME was given prior to the riociguat treatment, while the PKG blocker KT5823 blocked the riociguat effect (Fig 1A), suggesting a NO-independent effect through PKG signalling. Infarct size was determined with tetrazolium staining (Fig 1A) and blood serum levels of cardiac Troponin I (Fig 1B).
The profound protection afforded by riociguat was still present after 24 h, when infarct size was determined by late gadolinium enhancement (LGE-MRI) (Fig 2A, supplemental Table 1 in File S1). Fig 2B shows representative images of LGE-MRI and standard-protocol TTC staining for comparison from the same heart 24 h after infarction. Both techniques for assessment of infarct size show a good correlation in our hands. [19].
Furthermore, FDG-PET was used as an additional technique to assess functional infarct size. As shown in Figure 3A, the direct comparison of infarct size measurements between LGE-MRI and PET reveals a good correlation between these two techniques. Fig 3B depicts example images of LGE-MRI and PET as well as the overlay of both techniques. Representative films of merged LGE-MRI/PET can be seen for control and riociguat-treated mice in the online supplement (Videos S1 and S2).

Effects of riociguat on cardiac function and heart failure development

Left ventricular ejection fraction (LVEF) was assessed with cardiac MRI 24 h post-MI and with ultrasound 28 days later (for a technical comparison of both techniques see supplemental Figs 1 and 2 in File S1). LVEF in the riociguat-treated mice was much greater than in controls at this early time-point (Fig 2C). After 28 days, echocardiography still revealed a markedly greater LVEF in the riociguat-treated mice (Fig 4, supplemental Table 2 in File S1), suggesting a long-lasting beneficial effect of a single dose of riociguat administered at the time of reperfusion. The preserved LVEF at 28 days was highly correlated with infarct size 24 h post-MI in the same animals (R 2 = 0.85, supplemental Fig 2 in File S1).
MRI also allowed us to measure the ventricular wall's radial and circumferential strains one day post-MI. Radial strain is an indicator of the change in wall thickness during contraction. Hearts treated with riociguat had greater radial strain, indicating more wall thickening during systole when compared to the control group (Fig 5A). Circumferential strain reflects circumferential shortening during systole, and healthy myocardium should have negative values; Fig 5B shows the corresponding circumferential strain results for the two groups.
Effects of riociguat on hemodynamics
Blood pressure measurements during the open-chest phase of myocardial infarction via an LV catheter showed a mild drop and a slightly slower recovery in mean blood pressure after the infusion of riociguat (Fig 6A). However, these differences did not reach significance. While this trend was still seen at 24 h post-MI, it was not seen at any of the later stages between the groups (Fig 6B). Heart rate was not significantly different in the control animals compared to riociguat-treated mice (Supplemental Fig 3 in File S1).
Effects of riociguat on cardiac fibrotic tissue remodelling
After sacrifice at 28 days, mRNA gene expression profiles in the left ventricles of the mice were quantified by RT-PCR in order to characterize tissue remodelling processes. Although not significant, the average expression of collagens (Col1a1, Col3a1, Col4a1, Col6a1) was slightly lower in the myocardium of riociguat-treated mice compared to that of the untreated controls. This may indicate an attenuated fibrotic tissue remodelling process in the ventricle when sGC activity is stimulated at the onset of myocardial reperfusion. Consistent with the reduced expression of extracellular matrix molecules after riociguat treatment, other pro-fibrotic genes also tended to be expressed to a lesser extent, i.e. MCP-1 (Ccl2), tenascin C, ST2 (Il1rl1), galectin-3 and lipocalin-2 (Fig 7). None of the values in Fig 7 reached significance, however.
Discussion
In this study we present clear evidence for a strong and long-lasting cardioprotective effect of riociguat, a novel sGC stimulator, in an in vivo mouse model of MI and post-MI CHF. Riociguat not only acutely reduced infarct size and improved LV function, but the effect persisted over the 28 days of the study. Infarct size was evaluated not only histologically with TTC staining but also with novel state-of-the-art imaging techniques. LGE-MRI and FDG-PET were used 24 h after reperfusion and revealed beneficial effects of sGC stimulation on infarct size, cardiac function and morphology. The follow-up over 28 days showed sustained preservation of mechanical function from the single riociguat treatment at reperfusion, indicating that the entire benefit was derived from preventing acute loss of ventricular muscle.
NO activates PKG by causing sGC to increase cGMP in the cell. Impairments of the NO-sGC-PKG pathway have been implicated in various cardiac pathologies, including ischaemia/reperfusion injury and CHF development. [5] Riociguat is a potent sGC stimulator, which is currently being investigated in phase III clinical trials for the treatment of pulmonary hypertension. Riociguat acts directly on sGC and sensitizes sGC to endogenous NO. [13,25] In animal models of hypertension, riociguat afforded protection against cardiac and renal end-organ damage [15], and it reduced left ventricular weight and cardiac interstitial fibrosis in low- and high-renin rats. Furthermore, in a model of chronic cardiac volume and pressure overload in salt-sensitive rats, riociguat reduced systemic hypertension and cardiac fibrosis as well as increased systolic heart function and survival. [14,15] In the present study, we observed a non-significant, non-sustained trend towards a mild reduction in systolic blood pressure in the riociguat-treated animals. We cannot rule out any effect of the blood pressure changes on infarct size or CHF development, although it is known that blood pressure reduction per se does not afford cardioprotection against ischaemia-reperfusion injury [26,27].

Figure 6. Effects of riociguat on hemodynamics. (A) Systolic blood pressure data were obtained using an LV catheter during infarct size measurement in the acute model with 30 min ischaemia followed by 2 h reperfusion. There was a slight blood pressure drop after riociguat injection compared to the vehicle-treated mice in the control group, although the changes are not significant. n = 3. (B) Long-term systolic blood pressure data were obtained using a tail-cuff system. One day after injection of riociguat, systolic blood pressure still showed a mild trend towards lower values, which fully disappeared later. n = 7. doi:10.1371/journal.pone.0083910.g006
In recent years, imaging techniques such as MRI and PET have become available for the functional and morphological assessment of mouse hearts and allow non-invasive follow-up studies of MI and CHF. Here we used state-of-the-art techniques to assess myocardial function and morphology with the consecutive combination of LGE-MRI and FDG-PET. While cardiac MRI provides accurate functional data on the basis of detailed morphology, PET is a highly specific complementary technique providing functional data on a molecular level. The combination of both techniques overcomes the clear shortcomings of each one alone and represents the current gold standard in in vivo cardiac imaging. When we used LGE-MRI to assess infarct size after 24 h of reperfusion, there was still a clear benefit in the riociguat-treated animals, with smaller infarct sizes and better-preserved LV function. All the techniques used were highly congruent.
Furthermore, DENSE MRI imaging allowed us to determine displacement maps and calculate radial and circumferential strain. [17,22] Both parameters showed a clear benefit of early riociguat treatment post-MI, with an increase in radial strain, suggesting greater systolic wall thickening, and a reduced circumferential strain, indicating preserved myocyte contraction. Global values from tissue deformation imaging have shown high prognostic value for remodelling in infarct patients, and it has been suggested that these might more closely reflect myocardial contractility than traditional measures of systolic function [28].
These non-invasive imaging techniques allowed us to perform a follow-up of these animals and determine functional parameters after 28 days via echocardiography. Human data suggest that remodelling and CHF development are directly correlated with the extent of the infarcted area after an acute MI. Here we could show that the infarct size reduction due to early riociguat treatment was associated with improved LV function after 4 weeks, suggesting less detrimental post-MI remodelling. This is supported by our observation that many pro-fibrotic genes show a strong trend towards lower expression in the treated animals' hearts.
Taken together, the present results indicate that a single dose of the sGC stimulator riociguat at the end of a 30 min period of coronary arterial branch occlusion causes an immediate reduction of infarct size and preservation of cardiac function in mice. Furthermore, the beneficial effects on cardiac function and morphology can still be seen 28 days after the ischaemic event, leading to a reduction of CHF. The consecutive combination of cardiac LGE-MRI and FDG-PET allowed us to accurately assess infarct size non-invasively in a myocardial infarction model that is very close to the clinical scenario. We conclude that sGC stimulation with riociguat is a promising candidate for preventing post-MI heart failure in acute coronary syndrome patients undergoing reperfusion therapy.
Supporting Information
File S1 Supplemental methods and results.
(DOCX)
Video S1 Merged LGE-MRI/PET video 24 h after infarction of representative heart of control-treated mouse in 2-chamber view.
(WMV)
Video S2 Merged LGE-MRI/PET video 24 h after infarction of representative heart of riociguat-treated mouse in 2-chamber view. (WMV) | 4,088 | 2013-12-31T00:00:00.000 | [
"Medicine",
"Biology"
] |
Optical-pumping enantio-conversion of chiral mixtures in the presence of tunneling between chiral states
Enantio-conversion of chiral mixtures, converting the mixtures composed of left- and right-handed chiral molecules into homochiral ensembles, has become an important research topic in chemical and biological fields. In previous studies on enantio-conversion, the tunneling interaction between the left- and right-handed chiral states was often neglected. However, for certain chiral molecules, this tunneling interaction is significant and cannot be ignored. Here we propose a scheme for enantio-conversion of chiral mixtures through optical pumping based on a four-level model of chiral molecules, comprising two chiral ground states and two achiral excited states, with a tunneling interaction between the chiral states. Under one-photon large-detuning and two-photon resonance conditions, one of the achiral excited states is eliminated adiabatically. By appropriately designing the detuning and coupling strengths of the electromagnetic fields, the tunneling interaction between the two chiral states and the interaction between one of the chiral states and the remaining achiral excited state can be eliminated. Consequently, one chiral state remains unchanged, while the other can be excited to an achiral excited state, establishing chiral-state-selective excitations. By numerically calculating the populations of the two chiral ground states and the enantiomeric excess, we observe that high-efficiency enantio-conversion is achieved under the combined effects of system dissipation and chiral-state-selective excitations.
Note that in previous enantio-conversion works [37][38][39][40][41][42][43][44][45], the tunneling interaction between left- and right-handed chiral states was often neglected, since the tunneling effect is usually considered to be sufficiently weak. However, for certain chiral molecules, the magnitude of the tunneling interaction strength may be comparable to the coupling strength; e.g., the tunneling can occur within 33 ms − 3.3 µs for small chiral molecules (like D_2S_2) [41,58], namely, the tunneling interaction between the two chiral ground states cannot be ignored. It is therefore natural to ask: how can high-efficiency enantio-conversion of chiral mixtures be achieved in this case?
In this paper, we propose achieving high-efficiency enantio-conversion of chiral mixtures via optical pumping based on a four-level model of chiral molecules, composed of two chiral ground states and two achiral excited states. Compared with the previous four-level double-∆ model [37][38][39][40][41][42][43][44], an additional tunneling interaction between the two chiral ground states is introduced here [59][60][61][62][63][64][65][66]. Under the condition of large detuning between the two chiral ground states and the symmetric achiral excited state, as well as two-photon resonance between the two chiral ground states and the asymmetric achiral excited state, the symmetric achiral excited state can be eliminated adiabatically. In this case, by appropriately designing the detuning and coupling strengths of the electromagnetic fields, the tunneling interaction between the two chiral ground states and the interaction between the left-handed ground state and the asymmetric achiral excited state can be counteracted. Therefore, the left-handed chiral ground state remains undisturbed, while the right-handed one is excited to the asymmetric achiral excited state, establishing chiral-state-selective excitations. Meanwhile, this achiral excited state relaxes to the two chiral ground states due to the system dissipation, and thus the enantio-conversion of chiral mixtures can be realized in the steady state. To assess the efficiency of the enantio-conversion of chiral mixtures, we numerically calculate the populations of the two chiral ground states and the enantiomeric excess of the chiral ground state. In addition, we analyze the effect of the system parameters (e.g. the detuning and the coupling strength) on the enantio-conversion of chiral mixtures.
Model and Hamiltonian
As shown in Fig. 1(a), we consider a four-level model of chiral molecules consisting of two degenerate chiral ground states (the left- and right-handed chiral ground states |L⟩ and |R⟩) and two achiral excited states (the symmetric and asymmetric achiral excited states |S⟩ and |A⟩). The energies of the four states satisfy ℏω_A > ℏω_S > ℏω_L = ℏω_R = 0. Here, the reason for the degeneracy of the two chiral ground states is that the tiny parity-violating energy difference caused by the fundamental weak force is negligible [67].
In this work, we mainly focus on the case where all four states of the chiral molecules under consideration are in the electronic ground state (with different vibrational states) and the barrier of the double-well potential is relatively low. For instance, HSOH molecules [68] can be considered as an example to realize the four-level model of chiral molecules under consideration. Here we take the vibrational sublevels of |L⟩ and |R⟩ as the vibrational ground states in the left and right wells of the double-well potential, respectively. Obviously, |L⟩ and |R⟩ are chiral states. The vibrational sublevels of |S⟩ and |A⟩ are the symmetric and asymmetric vibrational excited states (e.g. near or beyond the barrier of the double-well potential) with a distinguishable vibrational energy difference. When the rotational sublevels of the four states as well as the polarizations and frequencies of the three electromagnetic fields are well designed, the transitions out of our working four-level model can be ignored. Specifically, we choose the rotational sublevels of |L⟩, |R⟩, |S⟩, and |A⟩ as |J_{k_a k_c M} = 0_{000}⟩, |0_{000}⟩, |1_{010}⟩, and (|1_{101}⟩ + |1_{10−1}⟩)/√2 [28,69], respectively. Here |J_{k_a k_c M}⟩ are eigenstates of asymmetric-top molecules [28,69]. The three electromagnetic fields associated with the electric-dipole transitions |Q⟩ ↔ |S⟩ ↔ |A⟩ ↔ |Q⟩ are, respectively, Z-polarized, X-polarized, and Y-polarized fields. Since the wave vectors (k_A, k_0, and k_S) of the three electromagnetic fields cannot be parallel, the finite size of the sample inevitably produces a phase-mismatching problem. To ensure that the effect of the phase mismatching is negligible, the characteristic length of the sample l and the phase-mismatching wave vector must satisfy an appropriate smallness condition [7,11,70]. In addition, for simplicity, we have ignored the couplings among electronic, vibrational, and rotational degrees of freedom [37][38][39][42]. Note that enantio-discrimination [46][47][48][49][50][51][52] and enantio-specific state transfer [53][54][55] have been experimentally achieved by using the three-level ∆-type model of chiral molecules. This three-level model offers possible experimental techniques for implementing the current four-level chiral-molecule systems.
Under the dipole approximation and rotating-wave approximation, the Hamiltonian of the system (ℏ = 1) contains a tunneling term, with η denoting the tunneling strength between the two chiral ground states |L⟩ and |R⟩. The parameters Ω_0, Ω_S, and Ω_A (ω_0, ω_1, and ω_2) are, respectively, the coupling strengths (frequencies) of the three electromagnetic fields applied to the electric-dipole transitions |S⟩ ↔ |A⟩, |Q⟩ ↔ |S⟩, and |Q⟩ ↔ |A⟩ with Q = L, R. We are interested in the case of one-photon resonance of |Q⟩ ↔ |A⟩ and three-photon resonance. In the interaction picture with respect to Ĥ_0 = ω_1|S⟩⟨S| + ω_2|A⟩⟨A|, the Hamiltonian (1) becomes the working Hamiltonian of Eq. (3), where ∆ = ω_S − ω_1 is the detuning between the transition |S⟩ ↔ |Q⟩ and the applied driving field of frequency ω_1. For simplicity, we have assumed that η, Ω_0, Ω_S, and Ω_A are real. Under the condition of large detuning |∆| ≫ Ω_S ∼ Ω_0 ≫ Ω_A ∼ η, the effective Hamiltonian can be obtained by using the Fröhlich-Nakajima transformation [71,72] to adiabatically eliminate the symmetric achiral excited state |S⟩. For that, we introduce an anti-Hermitian operator with suitably defined coefficients. It can be seen from Eq. (5) that the symmetric achiral excited state |S⟩ is decoupled from the other three states (|L⟩, |R⟩, and |A⟩), so the evolution of the two chiral ground states will not be affected by |S⟩. Hence, the dynamics of the system can be described by a reduced three-level Hamiltonian Ĥ_re, whose diagonal part is Λ̃|A⟩⟨A| + Λ(|L⟩⟨L| + |R⟩⟨R|). In order to establish the chiral-state-selective excitations, with the right-handed chiral ground state being excited to the asymmetric achiral excited state and the left-handed one being undisturbed, the appropriate detuning and coupling strengths of the electromagnetic fields should be chosen to eliminate the tunneling interaction between the two chiral ground states and the interaction between |L⟩ and |A⟩. For this purpose, we design the system parameters to satisfy ∆ = Ω_S²/η and Ω_A = Ω_SΩ_0/∆ (with ϕ_L = 0 and ϕ_R = π).
Chiral-state-selective excitations
In this section, we will demonstrate the chiral-state-selective excitations in the absence of the system dissipation by calculating the populations of two chiral ground states.
We consider an initial racemic mixture, namely, the initial state of each molecule can be described by the density operator ρ(0) = (|L⟩⟨L| + |R⟩⟨R|)/2. By numerically solving the Liouville equation dρ/dt = −i[Ĥ, ρ], with Ĥ given in Eq. (3), we can obtain the density operator ρ(t) of the system at time t and the evolved populations of the left- and right-handed chiral ground states, P_Q(t) = ⟨Q|ρ(t)|Q⟩ with Q = L, R.
The results showing the time evolution of the populations P_Q (Q = L, R) of the left- and right-handed chiral ground states at the coupling strength Ω_0/Ω_S = 1 are presented in Fig. 2(a). The other parameters are ϕ_L = 0, η/2π = 0.02 MHz [41], Ω_S/2π = 1 MHz (which means ∆_0 ≡ Ω_S²/η = 2π × 50 MHz), ∆/2π = 50 MHz, and Ω_A = Ω_SΩ_0/∆ = 2π × 0.02 MHz, which are experimentally feasible [46,47,53-55]. We can see that the evolved population P_L is almost unchanged, while the population P_R exhibits a periodic oscillation. This indicates that the left-handed chiral ground state |L⟩ is almost undisturbed and the right-handed one |R⟩ can be excited to an achiral excited state, i.e., the chiral-state-selective excitations are established. Figure 2(b) depicts the evolved populations at the coupling strength Ω_0/Ω_S = 5. Similarly, it is shown that the population P_L is almost unchanged and the population P_R exhibits a periodic oscillation. In particular, we find that, compared with Fig. 2(a), the period of oscillation for the population P_R becomes shorter and its amplitude becomes smaller when the coupling strength increases to Ω_0/Ω_S = 5 in Fig. 2(b). The reasons are the following: (i) The period of oscillation for the population P_R depends on the coupling strength 2Ω_A between |R⟩ and |A⟩. When Ω_0/Ω_S = 1 (Ω_0/Ω_S = 5), the coupling strength is 2Ω_A = 2π × 0.04 MHz (2Ω_A = 2π × 0.2 MHz). Hence, compared with Ω_0/Ω_S = 1, the period of oscillation becomes shorter when Ω_0/Ω_S = 5. (ii) The amplitude of oscillation for the population P_R depends on the detuning δ ≡ Λ̃ − Λ of the interaction term −2Ω_A[|R⟩⟨A| exp(iδt) + H.c.] of Hamiltonian (9) in the interaction picture. At Ω_0/Ω_S = 1, we find δ ≡ Λ̃ − Λ = 0, i.e., the resonant coupling between |R⟩ and |A⟩ occurs. However, the detuning δ ≠ 0 when Ω_0/Ω_S = 5, and thus the amplitude of oscillation is decreased compared with the case Ω_0/Ω_S = 1.
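To make the dependence of the oscillation period and amplitude on 2Ω_A and δ concrete, the following minimal Python sketch propagates the effective two-level dynamics in the {|R⟩, |A⟩} subspace quoted above. The interaction term −2Ω_A(|R⟩⟨A| + H.c.) and the parameter values (Ω_A = 2π × 0.02 MHz, δ = 0 for Ω_0/Ω_S = 1) follow the text; treating the subspace as an isolated two-level system is a simplifying assumption.

```python
import numpy as np
from scipy.linalg import expm

# Effective |R> <-> |A> coupling and detuning for the Omega_0/Omega_S = 1 case
# (angular frequencies in rad/us, i.e. 2*pi times the value in MHz).
Omega_A = 2 * np.pi * 0.02     # from the text
delta = 0.0                    # delta = Lambda~ - Lambda = 0 at Omega_0/Omega_S = 1

# Two-level Hamiltonian in the {|R>, |A>} basis with the quoted coupling -2*Omega_A.
H = np.array([[0.0, -2 * Omega_A],
              [-2 * Omega_A, delta]], dtype=complex)

rho0 = np.diag([1.0, 0.0]).astype(complex)        # start in |R>
times = np.linspace(0.0, 50.0, 1001)              # microseconds

P_R = []
for t in times:
    U = expm(-1j * H * t)                          # unitary propagator
    P_R.append((U @ rho0 @ U.conj().T)[0, 0].real)

# At delta = 0 the population oscillates fully between |R> and |A> with
# period pi / (2*Omega_A); a nonzero delta shortens the period and reduces
# the oscillation amplitude, as described in the text.
print("oscillation period (us):", np.pi / (2 * Omega_A))
```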
Enantio-conversion via optical pumping
In Sec. 3, we discussed the chiral-state-selective excitations in the absence of the system dissipation. However, in realistic situations, the system dissipation is inevitable and crucial for implementing enantio-conversion of chiral mixtures. In the following, we will demonstrate that high-efficiency enantio-conversion of chiral mixtures via optical pumping can be realized under the combined effect of the system dissipation and the chiral-state-selective excitations.
The dynamics of the system is governed by the quantum master equation, where Ĥ is given in Eq. (3) and Lρ ≡ L_dc ρ + L_dp ρ is the Lindblad superoperator that describes the dissipation of the system. In the decay term L_dc ρ [74], γ_S (γ_A) is the chirality-independent [42] decay rate of the chiral molecules from state |S⟩ (|A⟩) to |Q⟩ for Q = L, R, and γ_SA is the decay rate from state |A⟩ to |S⟩.
In order to analyze the efficiency of the enantio-conversion of the chiral mixtures, we show the time evolution of the populations of the left- and right-handed chiral ground states in the presence of the system dissipation. Specifically, we choose the experimentally feasible decay rates [46,47]: γ_S/2π = γ_A/2π = 0.1 MHz, γ_SA/2π = 0.5 MHz, and γ_ϕ/2π = 0.01 MHz. In Fig. 3(a), we show the evolved populations P_Q(t) for different detunings ∆ at the coupling strength Ω_0/Ω_S = 1. It can be seen that the population of the left-handed (right-handed) chiral ground state is approximately equal to 1 (0) at the detuning ∆/∆_0 = 1 when t > 150 µs, which indicates that the conversion of the chiral mixtures from |R⟩ to |L⟩ is achieved under the combined effect of the system dissipation and the chiral-state-selective excitations. In particular, we calculate the enantiomeric excess ε ≡ (P_L − P_R)/(P_L + P_R) of the chiral ground state to estimate the efficiency of the enantio-conversion of the chiral mixtures. From Fig. 3(a), we find that the steady-state enantiomeric excess is ε ≈ 98.3% at ∆/∆_0 = 1 when t > 150 µs, i.e., high-efficiency enantio-conversion of the chiral mixtures is achieved. In addition, we find that the population P_L (P_R) decreases (increases) when the detuning ∆ deviates from ∆_0, meaning that the enantiomeric excess, and hence the efficiency of the enantio-conversion, decreases as ∆ deviates from ∆_0. The reason is that the tunneling interaction between the two chiral ground states can be eliminated only when ∆ = ∆_0. In Fig. 3(b), we depict the evolved populations P_Q for different detunings at Ω_0/Ω_S = 5. It is shown that the populations are P_L ≈ 0.998 and P_R ≈ 0.001 (i.e., the steady-state enantiomeric excess ε ≈ 99.8%) at ∆/∆_0 = 1 when t > 20 µs, which indicates that high-efficiency enantio-conversion of the chiral mixtures can be achieved and the time required to complete the enantio-conversion is shorter. In addition, we find that the population P_L (P_R) decreases (increases) slowly as the detuning ∆ deviates from ∆_0. Note that once the three electromagnetic fields are turned off, the system will begin to oscillate between |L⟩ and |R⟩ as a result of tunneling.
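As a rough cross-check of this steady-state behaviour, the following QuTiP-based sketch integrates a Lindblad master equation for the four-level model. The coupling pattern (ϕ_L = 0, ϕ_R = π), the conditions ∆ = Ω_S²/η and Ω_A = Ω_SΩ_0/∆, and the decay rates are taken from the text and Fig. 1; the explicit matrix form of the Hamiltonian and the choice of dephasing operator are assumptions, so this is an illustrative sketch rather than the exact model of Eq. (3).

```python
import numpy as np
from qutip import basis, mesolve

# Basis ordering: |L>, |R>, |S>, |A>
L, R, S, A = (basis(4, i) for i in range(4))

# Parameters from the text (2*pi x MHz, i.e. rad/us), Omega_0/Omega_S = 1 case.
eta     = 2 * np.pi * 0.02
Omega_S = 2 * np.pi * 1.0
Omega_0 = 2 * np.pi * 1.0
Delta   = Omega_S**2 / eta                 # condition Delta = Omega_S^2 / eta
Omega_A = Omega_S * Omega_0 / Delta        # condition Omega_A = Omega_S*Omega_0/Delta

# Assumed rotating-frame Hamiltonian consistent with Fig. 1 (phi_L = 0, phi_R = pi).
H = (Delta * S * S.dag()
     + eta * (L * R.dag() + R * L.dag())
     + Omega_S * ((L + R) * S.dag() + S * (L + R).dag())
     + Omega_0 * (S * A.dag() + A * S.dag())
     + Omega_A * (L * A.dag() + A * L.dag())
     - Omega_A * (R * A.dag() + A * R.dag()))

# Decay rates from the text; the dephasing operator below is an assumed form.
g_S, g_A, g_SA, g_phi = (2 * np.pi * x for x in (0.1, 0.1, 0.5, 0.01))
c_ops = [np.sqrt(g_S) * L * S.dag(), np.sqrt(g_S) * R * S.dag(),
         np.sqrt(g_A) * L * A.dag(), np.sqrt(g_A) * R * A.dag(),
         np.sqrt(g_SA) * S * A.dag(),
         np.sqrt(g_phi) * (L * L.dag() - R * R.dag())]

rho0 = 0.5 * (L * L.dag() + R * R.dag())           # racemic initial mixture
tlist = np.linspace(0.0, 200.0, 400)               # microseconds
res = mesolve(H, rho0, tlist, c_ops, e_ops=[L * L.dag(), R * R.dag()])

P_L, P_R = res.expect[0][-1], res.expect[1][-1]
print("steady-state enantiomeric excess:", (P_L - P_R) / (P_L + P_R))
```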
To demonstrate the effect of the detuning ∆ on enantio-conversion of chiral mixtures, we show the steady-state enantiomeric excess ε of the chiral ground state versus the detuning ∆ for the coupling strengths Ω_0/Ω_S = 1 and 5 in Fig. 4(a). It can be seen that a high steady-state enantiomeric excess (ε ≈ 98.3% or 99.8%) is obtained at ∆/∆_0 = 1 when Ω_0/Ω_S = 1 or 5, namely, high-efficiency enantio-conversion of chiral mixtures is realized. In addition, we find that, at Ω_0/Ω_S = 1 (Ω_0/Ω_S = 5), the enantiomeric excess decreases quickly (slowly) as the detuning ∆ deviates from ∆_0. In Fig. 4(b), the steady-state enantiomeric excess ε is plotted versus the coupling strength Ω_0 at ∆/∆_0 = 1. We observe that the enantiomeric excess ε increases (decreases) with increasing coupling strength Ω_0 when Ω_0/Ω_S < 1 (Ω_0/Ω_S > 20). In the middle region 1 < Ω_0/Ω_S < 20, the enantiomeric excess ε reaches its maximum, which means that high-efficiency enantio-conversion of chiral mixtures can be realized in this region at ∆/∆_0 = 1. The reason is that in this middle region the system parameters are closer to the condition |∆| ≫ Ω_S ∼ Ω_0 ≫ Ω_A ∼ η required for the adiabatic elimination.
We also analyze how the efficiency of the enantio-conversion of the chiral mixtures depends on the dephasing rate. Specifically, we show the steady-state enantiomeric excess ε of the chiral ground state as a function of the dephasing rate γ_ϕ for different coupling strengths Ω_0 in Fig. 5. It can be seen that the steady-state enantiomeric excess ε decreases with increasing dephasing rate γ_ϕ. This means that the dephasing evidently affects the enantio-conversion of chiral mixtures. In addition, we find that as the coupling strength Ω_0 increases, it becomes possible to achieve high-efficiency enantio-conversion of chiral mixtures even when the dephasing rate is larger than the other decay rates. For example, for the large-dephasing case of γ_ϕ/γ_S = 30, the steady-state enantiomeric excess ε is about 91.6% when Ω_0/Ω_S = 20 (purple solid curve).
In the above discussions, we have mainly focused on the case where the initial chiral mixture is racemic, i.e., the initial state of each molecule is ρ(0) = (|L⟩⟨L| + |R⟩⟨R|)/2. Below, we discuss the influence of different initial states of the system on the enantiomeric excess of the chiral ground state. In Fig. 6(a), we show the time evolution of the enantiomeric excess when the initial state of each molecule is ρ(0) = x|L⟩⟨L| + (1 − x)|R⟩⟨R| with x = 0.3, 0.5, 0.7. It can be seen that the steady-state enantiomeric excess is the same for the different initial conditions.
Conclusion
Based on the four-level model of the chiral molecules, we have demonstrated that high-efficiency enantio-conversion of chiral mixtures can be achieved via optical pumping when the tunneling interaction between the two chiral ground states cannot be ignored. In this four-level model, the chiral-state-selective excitations can be established by choosing the appropriate detuning and coupling strengths of the electromagnetic fields. The numerical results show that, under the combined effect of the system dissipation and the chiral-state-selective excitations, high-efficiency enantio-conversion of chiral mixtures can be achieved when ∆ = Ω_S²/η and Ω_A = Ω_SΩ_0/∆ in the large-detuning region. In addition, by analyzing the dependence of the enantiomeric excess on the detuning and coupling strengths of the electromagnetic fields, the optimal parameters for achieving high-efficiency enantio-conversion of chiral mixtures have been identified. Our work opens up a route to achieve high-efficiency enantio-conversion of chiral mixtures in the presence of the tunneling interaction between two chiral states.
Figure 1. (a) Schematic diagram of the four-level model of chiral molecules. Here |L⟩ and |R⟩ are, respectively, the degenerate left- and right-handed chiral ground states, while |S⟩ and |A⟩ are, respectively, the symmetric and asymmetric achiral excited states. Three electromagnetic fields with frequencies ω_2, ω_0, and ω_1 are applied to couple the four-level model in ∆-type substructures |Q⟩ ↔ |A⟩ ↔ |S⟩ ↔ |Q⟩ with Q = L, R under three-photon resonance conditions, where the corresponding coupling strengths are Ω_A, Ω_0, and Ω_S. ϕ_L and ϕ_R are the overall phases of the two ∆-type substructures with ϕ_R = ϕ_L + π. In addition, there is a tunneling interaction between |L⟩ and |R⟩, with η being the tunneling strength. (b) The four-level model is simplified to the effective three-level model by adiabatically eliminating the symmetric achiral excited state |S⟩ in the large-detuning region. Here the blue dashed lines represent the indirect interaction between |Q⟩ and |A⟩, with a coupling strength of −Ω_SΩ_0/∆, induced by the two-photon processes |Q⟩ ↔ |S⟩ ↔ |A⟩. Similarly, the orange dashed line indicates the indirect tunneling interaction between |L⟩ and |R⟩, with coupling strength −Ω_S²/∆, induced by the process |L⟩ ↔ |S⟩ ↔ |R⟩. (c) Under the conditions ϕ_L = 0 (which means ϕ_R = π), ∆ = Ω_S²/η, and Ω_A = Ω_SΩ_0/∆, the left-handed chiral ground state |L⟩ is decoupled from |R⟩ and |A⟩.
Figure 1(b) shows this effective three-level model. Here the blue solid and dashed lines express the single-photon processes |Q⟩ ↔ |A⟩ and the two-photon processes |Q⟩ ↔ |S⟩ ↔ |A⟩, respectively. The orange solid (dashed) line corresponds to the direct (indirect) tunneling interaction between the two chiral ground states. This indirect tunneling interaction, with coupling strength Λ ≡ −Ω_S²/∆, is induced by the process |L⟩ ↔ |S⟩ ↔ |R⟩.
Figure 4. (a) The steady-state enantiomeric excess ε of the chiral ground state as a function of the detuning ∆ when the coupling strength Ω_0/Ω_S = 1 and 5. (b) The steady-state enantiomeric excess ε as a function of the coupling strength Ω_0 at ∆/∆_0 = 1. Other parameters are the same as those in Fig. 3.
"Physics"
] |
Robust Indoor Localization Methods Using Random Forest-Based Filter against MAC Spoofing Attack
With the development of wireless networks and mobile devices, interest in indoor localization systems (ILSs) has increased. In particular, Wi-Fi-based ILSs are widely used because of their good prediction accuracy without additional hardware. However, as the prediction accuracy decreases in environments with natural noise, some studies were conducted to remove it. So far, two representative methods, i.e., the filtering-based method and the deep learning-based method, have shown a significant effect in removing natural noise. However, the prediction accuracy of these methods decreases severely under artificial noise caused by adversaries. In this paper, we introduce a new media access control (MAC) spoofing attack scenario injecting artificial noise, in which the prediction accuracy of Wi-Fi-based indoor localization systems significantly decreases. We also propose a new deep learning-based indoor localization method using a random forest (RF)-based filter to provide good prediction accuracy under the new MAC spoofing attack scenario. From the experimental results, we show that the proposed indoor localization method provides much higher prediction accuracy than the previous methods in environments with artificial noise.
Introduction
As wireless networks and smart phones have become widespread, indoor localization systems (ILSs) started receiving much attention. While tracking the location of the user's devices, indoor localization systems can provide the location information to the service client and the service providers in an indoor environment. For example, in an art gallery, indoor localization systems can provide the location of devices so that visitors can obtain a description of the artwork they are currently viewing. In addition, gallery operators can place artworks based on statistical information of the user's location.
Especially, Wi-Fi is commonly used for indoor localization because the wireless access point (WAP) information can be used without additional hardware [1]. To predict the location of a user, Wi-Fi-based indoor localization systems generally use the received signal strength (RSS) values of the user's device captured by multiple WAPs. Here, RSS is a measurement of the power present in a received radio signal. However, their usage is limited because Wi-Fi-based indoor localization systems show low performance in environments with natural noise such as shading and multipath fading [2].
As a representative method to overcome the performance degradation of Wi-Fi-based indoor localization systems under the environment with natural noise, some studies proposed a method using channel state information (CSI) that contained more location-related information than RSS. Even though CSI can improve the localization accuracy of the Wi-Fi-based indoor localization systems, the usage in practical applications is limited because it requires the modification of the device [3,4].
To address such practical issues, studies on removing natural noise using traditional filters such as the moving average filter and the particle filter have been proposed [5][6][7]. As shown in Figure 1A, after removing the natural noise using filters, the user's location is estimated through a heuristic classification algorithm such as a decision tree (DT) or random forest (RF). Moreover, most recent studies try to apply deep learning techniques to estimate the user's location by learning the characteristics of the RSS values and natural noise, as shown in Figure 1B [8][9][10]. Let us note that the performance of both methods significantly decreases in environments with artificial noise caused by a media access control (MAC) spoofing attack [11]. Here, a MAC spoofing attack is an attack in which attackers spoof their MAC address into the MAC address of the user's device to perform a man-in-the-middle (MITM) attack. In this paper, we introduce a specific MAC spoofing attack scenario where the localization accuracy of the state-of-the-art indoor localization methods decreases. In this attack scenario, after spoofing the user's MAC address, an adversary sends his signal to wireless access points (WAPs) distant from the actual location of the user as if his signal came from the normal user's device. As a result, RSS values with the same user device ID are captured even at locations where the user device is not actually located. Such results can cause severe budget losses for discount stores, which are sensitive to small rearrangements of the interior.
To deal with this attack scenario using the artificial noise, we propose a new deep learning-based indoor localization method whose overall operation is shown in Figure 1C. Different from the previous deep learning-based methods in Figure 1B, an RF-based filter is applied to remove artificial noise before feeding the data into the deep learning model. The RF-based filter learns the noise patterns of the MAC spoofing attack. After identifying whether an RSS value includes artificial noise generated by the MAC spoofing attack or not, the RF-based filter removes the artificial noise.
Main contributions of this paper can be summarized as follows: (1) After analyzing the problem of the previous indoor localization systems, we introduce a possible attack scenario decreasing their localization accuracy; (2) We propose a new deep learning-based indoor localization method using RF-based filter to show the good localization accuracy under the environment with artificial noise; (3) From the experimental results with multi-building, multi-floor dataset [12], we show that the deep learning-based indoor localization method shows better localization accuracy against MAC spoofing attack than the state-of-the-art deep learning-based indoor localization method.
Since we apply a random forest filter to remove artificial noise generated by the MAC spoofing attack, this paper is similar to the work of Alotaibi et al. [13]. However, Alotaibi et al. did not consider RSS time-series information. Different from their work, by considering RSS time-series information, we handle the case where the fake user's signals are captured by APs in a space different from the actual user during the time the indoor localization system estimates the user's indoor location.
The rest of this paper is organized as follows. In Section 2, we introduce the existing works related to Wi-Fi-based indoor localization systems. In Section 3, we show some preliminary experimental results for designing the proposed indoor localization method. After describing the details of the proposed MAC spoofing attack scenario and deep learning-based indoor localization method in Section 4, we evaluate the performance of the proposed deep learning-based indoor localization method from the experimental results using a multi-building and multi-floor indoor localization dataset in Section 5. Finally, we conclude the paper in Section 6.
Related Work
According to the data collected from users' mobile devices, Wi-Fi-based indoor localization systems are generally categorized into those using CSI and those using raw RSS. In 2017, Hao Chen et al. proposed a convolutional neural network (CNN)-based Wi-Fi localization algorithm using a time-frequency matrix organized from CSI. After converting the complex-number information of CSI into a feature image, they showed about 91% accuracy through a CNN model consisting of three convolutional layers and two fully connected layers [3]. Shangqing Liu et al. used a CNN to extract the relationship between the channel information of CSI and the number of people in a multi-human environment. They also used a long short-term memory (LSTM) model to analyze the dependence between the number of people and CSI. This method showed an average accuracy of as much as 86.4% in an environment with five or more people [4].
However, Wi-Fi-based indoor localization systems using CSI have the limitation that the existing device driver should be modified. As an alternative, Wi-Fi-based indoor localization systems using raw RSS from WAPs have been widely studied [14]. The indoor localization systems using raw RSS are mainly categorized into two groups: (1) filtering-based approaches [5][6][7]15,16]; (2) deep learning-based approaches [8,9].
The filtering-based methods remove noise before estimating the user's location by using a classifier. Henri Nurminen et al. proposed running a lightweight fallback filter in the background of a real-time particle filter and a forward-backward recursion-based smoother for 2D and 3D indoor positioning [5]. Bodhibrata Mukhoopadhyay et al. suggested estimating a particular location using mode values of RSS and removing high-frequency noise using a moving average filter to improve indoor positioning accuracy [16]. Zhu Nan et al. proposed a new particle filter based on the Rao-Blackwellized particle filter (RBPF) to address the issue that WAPs can be sparse and short-range [6]. In addition to improving performance using filters, the classification system for indoor positioning has been advanced through machine learning techniques. Rafał GÓRAK et al. proposed a modified random forest algorithm for indoor localization systems [15]. They also showed that the indoor localization system worked without errors even in situations where some WAPs were turned off. Sunmin Lee et al. proposed a system that estimated the indoor location of smartwatch devices using random forest, and used the basic service set identifier (BSSID) as well as RSS to address the problem of similar signal strength [7]. A location-based radio map and a BSSID list-based radio map are also used to reduce the number of comparisons.
The deep learning-based methods train on the noise itself, such as shading and fading. Kim et al. proposed a hierarchical deep neural network (DNN) architecture consisting of a stacked autoencoder for multi-label classification of building, floor, and location [9]. By appending autoencoder models according to the number of buildings and floors, their DNN architecture for multi-building and multi-floor indoor localization can cover a large-scale complex of many buildings. To address the interference of moving objects and co-channel interference, Qiwu Zhu et al. proposed an ensemble model consisting of a fuzzy classifier and a multilayer perceptron (MLP) for indoor parking localization [17]. They showed high accuracy through experiments using indoor parking lots in a real shopping mall. Mai Ibrahim et al. presented a CNN-based method for indoor localization from a multi-building and multi-floor dataset [8]. Their method showed 100% accuracy for building and floor prediction by using RSS time-series information. However, the performance of the deep learning-based methods decreases when fake RSS tuples artificially generated from active attacks such as the MAC spoofing attack are injected into the fingerprint DB.
Preliminary
In this section, we introduce the experimental environment used in Mai Ibrahim et al.'s method [8], which is a state-of-the-art CNN-based indoor localization method using RSS time-series data, and show the measured prediction accuracy using a public RSS dataset, called UjiIndoorLoc [12]. We also introduce the limitation of the deep learning-based method under a threat model with active attacks such as MAC spoofing. Such a limitation motivated us to design the proposed deep learning-based indoor localization method using the RF filter.
In Figure 2, we show an example of the overall operation of predicting a user's indoor location using RSS values in a photo exhibition. Let us consider a user in front of the eagle photo, whose position ID is one. First, the user's mobile device sends RSS values together with the phone ID (step 1). Then, nearby WAPs, i.e., WAP (A) and WAP (B), capture the RSS. Second, WAPs (A) and (B) send the captured RSS and phone ID to a server (step 2). Third, after collecting RSS values from the two WAPs and storing them in the fingerprint DB, the server predicts position ID 1, matched with the user's location (step 3). Fourth, the server sends the narrative information related to position ID 1 to the user's mobile device (steps 4 and 5). As described in [8], we generate a feature image using T RSS tuples with the same phone ID stored in the fingerprint DB for S seconds. For example, let us assume that T is set to 3 and RSS tuples are collected from 6 WAPs. As shown in Figure 3a, three raw time-series RSS tuples with phone ID 1 are used to generate a feature image, whose size is 3 × 6. Using the feature image, we train and test the deep learning-based method. In practice, after implementing the experimental environment used in Mai Ibrahim et al.'s method [8], we measured the prediction accuracy of indoor localization using the UjiIndoorLoc dataset [12], which contains RSS values collected from 520 WAPs. With S and T set to 60 and 3, respectively, we create feature images of size 3 × 520 × 1, i.e., T × (# of WAPs) × 1. Also, we train and test the deep learning-based method using 6453 training samples and 138 test samples, respectively. When training the CNN model composed of two convolutional layers, we set the parameters as follows: ReLU, Softmax, Adam and MaxPooling for the activation function, output layer activation function, optimizer, and pooling method, respectively [18,19]. Also, both the kernel size and the stride are set to 2. As a result of training with 30 epochs, the CNN model showed an indoor location prediction accuracy of 94.93%. When T was set to 4, the accuracy was improved up to 100%.
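To make the feature-image construction and CNN configuration above more concrete, the following Keras sketch is one possible realization. The image shape (T × 520 × 1), the two convolutional layers, kernel size 2, stride 2, ReLU, max pooling, softmax output, and Adam optimizer follow the text; the filter counts, pooling size, padding, and the number of output positions are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

T, N_WAPS, N_POSITIONS = 3, 520, 118   # N_POSITIONS is a placeholder for the label count

def build_feature_images(rss_tuples, T=3):
    """Stack T consecutive RSS tuples (same phone ID, within S seconds) into T x N_WAPS images."""
    images = [rss_tuples[i:i + T] for i in range(len(rss_tuples) - T + 1)]
    return np.asarray(images, dtype=np.float32)[..., np.newaxis]   # shape (n, T, N_WAPS, 1)

model = models.Sequential([
    layers.Conv2D(32, kernel_size=2, strides=2, activation="relu",
                  input_shape=(T, N_WAPS, 1)),                     # assumed 32 filters
    layers.MaxPooling2D(pool_size=(1, 2)),                          # assumed pooling size
    layers.Conv2D(64, kernel_size=2, strides=2, padding="same",
                  activation="relu"),                               # assumed 64 filters
    layers.Flatten(),
    layers.Dense(N_POSITIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=30, validation_data=(val_images, val_labels))
```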
Let us note that, by targeting the environment shown in Figure 2, an adversary can generate a manipulated RSS tuple including artificial noise with the same phone ID, as shown in Figure 3b. By injecting a fake RSS tuple (red-colored tuple) into the fingerprint DB through an active attack such as MAC spoofing, the adversary can cause a wrong prediction from a deep learning-based indoor localization method. That is, such manipulation results in poor indoor localization prediction accuracy. As shown in Figure 4, under the MAC spoofing attack, a user who requests location information receives the wrong information from the server. To prevent such a threat, refining the RSS tuples stored in the fingerprint DB is necessary before generating the feature image. As an efficient method to eliminate artificial noise generated from such a threat, we propose a deep learning-based indoor localization method using the RF filter.
Proposed Method
In this section, we describe the proposed deep learning-based indoor localization method using the RF filter to eliminate artificial noise. After introducing MAC spoofing attack scenario to generate a fake RSS tuple with the artificial noise, we show a novel defense method, which is a deep learning-based indoor localization method using RF filter.
MAC Spoofing Attack Scenario
As the user moves from the eagle photo to the falcon photo, the user's mobile device sends RSSs to the surrounding WAPs (D), (E), and (F). What if the adversary sends fake RSS data with the disguised user's device ID to WAPs (A), (B), and (C)? As a result, the user receives information about the eagle, not the falcon, because the indoor localization system on the server returns an abnormal prediction result. To understand the targeted MAC spoofing attack scenario, let us consider the artificial noise injection example through MAC spoofing in Figure 5. First, after spoofing the MAC address of the targeted user's mobile device at position 2, the adversary sniffs and analyzes the RSS data from the user's mobile device (step 1). Second, the adversary sends the fake RSS data with the phone ID of the user's mobile device (step 2). Third, WAPs (A), (B), and (C) near the adversary capture the RSS and send the captured RSS and phone ID to a server (step 3). Fourth, after collecting RSS values from the three WAPs and storing them in the fingerprint DB, the server predicts position ID 1 as the user's location (step 4). Fifth, the server sends the narrative information related to position ID 1, i.e., the eagle information, not the falcon information, to the user's mobile device (steps 5 and 6).
The overall control flow for the artificial noise generation is shown in Figure 6. First, an adversary sorts the collected raw RSS data in ascending order by timestamp, building ID, floor ID, and phone ID. Second, the adversary generates group data, each of which has the same phone ID, building ID, and floor ID, from the sorted data. Third, the adversary sets the start time, the end time and the targeted phone ID. Here, S is set to the end time minus the start time. Finally, the adversary may generate from 0 to 50 fake RSS tuples with the targeted phone ID but different building and floor IDs. This is because the artificial noise can also be generated from multiple locations within the specific range of the signal. As a result, as shown in Figure 6, the fake data are injected into the fingerprint DB together with the original group data. Therefore, the prediction accuracy of Wi-Fi-based indoor localization systems decreases severely.
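The noise-injection flow above can be sketched as follows. The column names ('timestamp', 'phone_id', 'building_id', 'floor_id', 'WAP...') and the way fake RSS values are drawn are hypothetical placeholders rather than the dataset's actual schema; only the overall flow (sort, group, pick a target window, append 0-50 fake tuples carrying the victim's phone ID with different building/floor IDs) follows the text.

```python
import numpy as np
import pandas as pd

def inject_fake_tuples(db, target_phone, start, end, max_fakes=50, seed=0):
    """Append fake RSS tuples carrying the victim's phone ID, as in the attack scenario."""
    rng = np.random.default_rng(seed)
    db = db.sort_values(["timestamp", "building_id", "floor_id", "phone_id"])
    window = db[(db["timestamp"] >= start) & (db["timestamp"] <= end)]
    victim = window[window["phone_id"] == target_phone]          # victim's genuine tuples
    wap_cols = [c for c in db.columns if c.startswith("WAP")]

    fakes = []
    for _ in range(int(rng.integers(0, max_fakes + 1))):
        fake = victim.sample(1).copy()                            # reuse victim's ID/time
        # ideally a building/floor different from the victim's actual one
        fake["building_id"] = rng.choice(db["building_id"].unique())
        fake["floor_id"] = rng.choice(db["floor_id"].unique())
        # RSS values as captured by WAPs near the adversary, not near the victim
        fake.loc[:, wap_cols] = rng.integers(-104, 1, size=(1, len(wap_cols)))
        fakes.append(fake)

    return pd.concat([db] + fakes, ignore_index=True) if fakes else db
```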
Deep Learning-Based Indoor Localization Method Using RF Filter
Due to the existence of the fake RSS data, it is ineffective to use the data in the fingerprint DB for indoor localization without refining it. Indeed, these fake data make a highly performing indoor localization system less accurate. To address this issue, we propose a new deep learning-based localization method, which applies the RF filter to remove the artificial noise from the RSS data.
In Figure 7, we show the overall operational procedure of the proposed indoor localization method with RF filtering. In contrast to deep learning-based methods such as Mai Ibrahim et al.'s method, where the original fingerprint DB is used, a modified fingerprint DB from which the artificial noise is removed using the RF filter is used. When the RSS tuples in the original fingerprint DB are given as the input data of the RF, n independent trees are used to classify the given input data. By conducting a majority vote over the classification results from all trees, the final class, i.e., 'fake' or not 'fake', is determined. If the final class for an RSS tuple is 'fake', the RSS tuple is removed; otherwise, the RSS tuple is kept.
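A minimal scikit-learn sketch of this filtering step is shown below (100 trees, matching the configuration reported in Section 5). The training arrays and feature columns are synthetic placeholders that would in practice be prepared from labelled genuine/fake RSS tuples.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data: rows are RSS tuples, label 1 = fake, 0 = genuine.
# In practice these would come from labelled genuine/attack fingerprint records.
rng = np.random.default_rng(0)
X_train = rng.integers(-104, 1, size=(1000, 520)).astype(float)
y_train = rng.integers(0, 2, size=1000)

rf_filter = RandomForestClassifier(n_estimators=100, random_state=0)  # 100 trees as in Section 5
rf_filter.fit(X_train, y_train)

def refine_fingerprint_db(db, feature_cols):
    """Drop RSS tuples that the majority vote of the trees labels as 'fake'."""
    is_fake = rf_filter.predict(db[feature_cols].values) == 1
    return db.loc[~is_fake]
```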
Evaluation Result
In this section, we show how much the accuracy of the previous deep learning-based indoor localization method decreases under the introduced MAC spoofing attack scenario. Then, we show the performance evaluation result of the proposed deep learning-based indoor localization method for the artificial noise data. We also show the comparison results with the state-of-the-art indoor localization methods, such as noise training method and moving average filtering-based method, under artificial noise injected dataset.
Experimental Environment
To measure the performance of deep learning-based indoor localization methods, a GPU server is used. The GPU server consists of an Intel(R) Xeon(R) CPU E5-2630v3 @2.40GHz with 8 cores, 62 GB RAM and an NVIDIA(R) GeForce RTX 2080 Ti. The UjiIndoorLoc dataset [12] is used as the input dataset to evaluate the performance of the indoor localization method in a multi-building and multi-floor environment. It contains RSS data collected from 520 WAPs using 25 Android devices in three buildings, each of which has four floors. The RSS data have negative integer values between −104 and 0, while +100 means that the signal is not detected by a specific WAP.
Influence of Artificial Noise Injected Data on Indoor Localization System without Countermeasure
To observe the influence of the introduced MAC spoofing attack scenario on indoor localization methods without refining the artificial noise, we measured the accuracy of Mai Ibrahim et al.'s method described in Section 3. Before generating the feature images from the fingerprint DB, we replaced the RSS value of +100 with −110 and then performed normalization before feeding the data into the CNN model.
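A small sketch of this preprocessing step is given below; the exact normalization used is not specified in the text, so min-max scaling is assumed here.

```python
import numpy as np

def preprocess_rss(rss, not_detected=100, floor_value=-110.0):
    """Replace the 'not detected' marker (+100) with -110, then min-max scale to [0, 1]."""
    rss = np.where(rss == not_detected, floor_value, rss).astype(np.float32)
    return (rss - floor_value) / (0.0 - floor_value)   # assumed scaling choice
```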
In Figure 8a,b, we show the validation accuracy of the CNN model for the original validation dataset and the artificial-noise-injected dataset, respectively. While the validation results using the original dataset show a prediction accuracy of 94.93% on average, the validation results using the artificial-noise-injected data show a prediction accuracy of only 2.27% on average.
Indoor Localization Accuracy of the Proposed Method
To evaluate the proposed deep learning-based indoor localization method using RF filter, we measured the prediction accuracy of the following three indoor localization methods: (1) proposed deep learning-based method with RF filter; (2) deep learning method using artificial noise training [20]; and (3) filtering-based method using moving average filter [21].
Accuracy of the Proposed Indoor Localization Method with RF Filter
To evaluate the performance of the proposed indoor localization method, we trained the RF model to find the artificial-noise-injected data generated through the MAC spoofing attack. From the experiments where the RF consists of 100 estimators (decision trees), the proposed indoor localization method using the RF filter showed an indoor localization prediction accuracy of 95.31% at maximum, as shown in Table 1, and 94.81% on average, as shown in Figure 9a. This result implies that the proposed indoor localization method using the RF filter can successfully remove the artificial noise with high accuracy.
Figure 9. Prediction accuracy of (a) the proposed deep learning-based method with RF filter, (b) the deep learning method using artificial noise training [20], and (c) the filtering-based method using moving average filter [21].
In Table 2, we also measured the performance of the proposed indoor localization method under various signal-to-noise ratio (SNR) situations where more fake data are added by the noise injection attack. In this situation, the performance of the indoor localization system can be further worsened since the number of fake data is much larger than the number of original real data. For the high-SNR situation, where from 0 to 50 fake tuples were generated, the proposed indoor localization method using the RF filter showed an indoor localization prediction accuracy of 95.31% at maximum. For the low-SNR situation, where from 50 to 100 fake tuples were generated, the proposed indoor localization method using the RF filter showed an indoor localization prediction accuracy of 94.68% at maximum. These results imply that the proposed indoor localization method using the RF filter is effective in both high-SNR and low-SNR environments.
To measure the performance of the deep learning-based method using artificial noise training, we trained a CNN model using the artificial-noise-injected dataset instead of the original dataset. Similar to adversarial training [22], after adding artificial noise to the original RSS data, we train the CNN model. From Table 1 and Figure 9b, we observe that the deep learning-based method trained with the artificial-noise-injected dataset provides a prediction accuracy of 54.46% at maximum and 53.77% on average, which is far lower than the proposed method, under the artificial-noise-injected dataset. This is because, even though the deep learning-based method was trained using the artificial-noise-injected dataset, the prediction accuracy decreased because RSS data appeared in multiple locations at the same time. For example, if the user is in location A, only WAPs around location A should have an RSS value between −104 and 0, while the others should not. However, if WAPs around another location B have a value between −104 and 0 due to the artificially injected noise, the CNN model will be confused by such conflicting RSS values collected from WAPs located in different locations at the same time. This result implies that while the deep learning-based indoor localization method using artificial noise training can cope with natural noise such as shading and fading, the artificial noise generated from the MAC spoofing attack cannot be identified.
Accuracy of Filtering Method Using the Moving Average Filter
As one of the most common signal filters in digital signal processing (DSP), the moving average filter has often been used in many practical applications. For successive data, the moving average filter computes the average value over a given window size. After setting the window size to 5, a dataset was generated with the average value of every five consecutive RSS tuples and fed into a CNN model. As shown in Table 1 and Figure 9c, the filtering-based method using the moving average filter showed the lowest prediction accuracy, as low as 0.57% at maximum and 0.55% on average, under the artificial-noise-injected dataset. This result implies that, like the deep learning-based method using artificial noise training, the filtering-based method using the moving average filter is effective in reducing natural noise but not in reducing artificial noise generated by the MAC spoofing attack.
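For reference, the moving-average baseline with window size 5 can be sketched as follows; averaging column by column over consecutive RSS tuples is an assumption about how the filter was applied.

```python
import numpy as np

def moving_average(rss_series, window=5):
    """Average every `window` consecutive RSS tuples (rows), column by column."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="valid"),
        axis=0, arr=np.asarray(rss_series, dtype=float)
    )
```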
Conclusions
Indoor localization systems have been implemented using various technologies such as Wi-Fi, RFID, and Bluetooth. Among them, Wi-Fi technology is commonly used for indoor localization systems because it does not need any additional hardware. The existing Wi-Fi-based indoor localization systems use filtering or deep learning techniques to remove natural noise such as shading and fading. However, the previous methods are vulnerable to artificial noise generated from active attacks such as the MAC spoofing attack. In this paper, we introduced a MAC spoofing scenario which generates artificial-noise-injected data. In our MAC spoofing scenario, an adversary sends his signal to WAPs distant from the actual location of the user as if the signal came from the user's device. The evaluation results show that the prediction accuracy of the previous Wi-Fi-based indoor localization system without filtering decreased from 94.93% to 2.27%. To address the performance degradation problem due to the artificial noise, we also proposed a new deep learning-based indoor localization method using the RF filter. The RF filter in the proposed deep learning-based indoor localization method learns the noise patterns of the MAC spoofing attack to identify and remove artificial noise. From the experimental results using a public dataset, we showed that the proposed indoor localization method increased the prediction accuracy from 2.27% to 95.31% under the artificial noise injection attack.
Conflicts of Interest:
All the authors confirm that there is no conflict of interest. | 5,790.4 | 2020-11-26T00:00:00.000 | [
"Computer Science"
] |
Chitosan-Coated 5-Fluorouracil Incorporated Emulsions as Transdermal Drug Delivery Matrices
The purpose of the present study was to develop emulsions encapsulated by chitosan on the outer surface of a nano droplet containing 5-fluorouracil (5-FU) as a model drug. The emulsions were characterized in terms of size, pH and viscosity and were evaluated for their physicochemical properties such as drug release and skin permeation in vitro. The emulsions containing tween 80 (T80), sodium lauryl sulfate, span 20, and a combination of polyethylene glycol (PEG) and T80 exhibited releases of 88%, 86%, 90% and 92%, respectively. Chitosan-modified emulsions considerably controlled the release of 5-FU compared to a 5-FU solution (p < 0.05). All the formulations enabled transportation of 5-FU through rat skin. The combination (T80, PEG) formulation showed a good penetration profile. Different surfactants showed variable degrees of skin drug retention. The ATR-FTIR spectrograms revealed that the emulsions mainly affected the fluidization of lipids and proteins of the stratum corneum (SC), which leads to enhanced drug permeation and retention across the skin. The present study concludes that the emulsions containing a combination of a surfactant (Tween) and a co-surfactant (PEG) exhibited the best penetration profile, prevented the premature release of the drug from the nano droplet, enhanced the permeation and retention of the drug across the skin and have great potential for transdermal drug delivery. Therefore, chitosan-coated 5-FU emulsions represent an excellent possibility to deliver a model drug as a transdermal delivery system.
Introduction
The transdermal drug delivery system (TDDS) offers several advantages over other conventional drug delivery systems [1]. The TDDS is known to avoid first-pass metabolism, offer stable drug delivery, exhibit decreased systemic drug interaction, improve patient compliance, reduce the frequency of drug administration, and offer higher therapeutic efficacy and safety [2,3]. The infiltration of drugs into the skin and their flexible expansion are constrained by the barrier function of the highly structured stratum corneum (SC) components [4]. Various permeation enhancers have been used to enhance the topical delivery of drugs, i.e., dimethyl sulfoxide (DMSO), dimethylacetamide (DMAC), dimethylformamide (DMF) [5,6], pyrrolidones [7], cyclodextrins [8], and azones [9]. However, permeation enhancers are associated with various problems; for example, an increase in the concentration of DMSO can trigger erythema or SC swelling. It can also cause the denaturing of skin proteins, which may lead to scaling, stinging, erythema, irreversible membrane damage, contact urticaria, and a burning sensation as well [10,11].
Chitosan is one of the commonly used polymers in transdermal drug delivery systems due to the fact that it is biocompatible, non-toxic and improves the drug absorption through the skin epithelial layers [12]. It helps to increase the permeation of hydrophobic drugs through the skin and drug retention in the dermal epidermis via an interaction with the skin surface that leads to a change in the SC morphology and the disruption of the tight junctions of the corneocyte layers [13].
The strong interaction of chitosan with the skin surface allows for a long retention time and the enhancement of the permeation/absorption of drugs across the skin [14,15]. This can be attributed to a combination of two factors: (1) mucoadhesive properties and (2) the transient opening of cellular tight junctions for the passage of hydrophilic macromolecules [16,17]. Chitosan improves the permeability of 5-fluorouracil (5-FU) across the SC by altering the arrangement of phospholipids in the epithelial cell membrane, which enhances the fluidity of the lipid bilayers in the skin membrane. As a result, it may lead to the transportation of 5-FU via the transcellular pathway. 5-FU is a highly polar drug molecule that has been commonly prescribed for cancer treatment since the 1930s. However, in the USA the topical use of 5-FU for superficial cancer lesions was approved in the 1970s [18].
Owing to the low drug permeation through the SC barrier, the conventional topical formulation is limited to superficial dermal layers and requires a 5-FU dose of about 5% to achieve the desired drug effect. Hence, there is a high risk of undesired side effects and toxicity that may cause poor patient adherence to such treatment [19]. Some recently approved commercial topical formulations of 5-FU (0.5%) include Carac® (Sanofi, Gentilly, France) [20], the Fluoroplex 1% 5-FU solution (Allergan, Inc., Irvine, CA, USA), creams (Efudex®, Valeant Pharmaceuticals, Bridgewater, NJ, USA) and a 0.5% microsphere-based cream (Carac®, Valeant Pharmaceuticals) [21]. A report has suggested that commercially available topical products have the demerit of a low retention time at the delivery site, which results in inadequate skin permeation along with skin irritation reactions, such as dryness, redness, swelling, and burning pain of the upper layer of skin [22,23].
To overcome the problem of inadequate skin permeation, the incorporation of 5-FU into a transdermal drug delivery system using emulsions may enhance the 5-FU permeation effectively into the deeper layers of skin with fewer adverse effects. Therefore, in the present study chitosan was used as a coating material to prepare emulsions of 5-fluorouracil. The formulated emulsions of 5-FU were further investigated for the influence of various surfactants on their physicochemical properties. Moreover, diverse surfactants, for example span 20, SLS, T80, and PEG 4000, were investigated in terms of their influence on the physicochemical characteristics that affect the drug release and skin permeation of 5-FU across the skin.
The present study highlights the formulation of modified chitosan (α-type chitosan)-coated 5-FU emulsions for the first time, with enhanced permeability and retention across the skin. The findings of the present study will support the scientific community in the development of emulsions of 5-FU as a transdermal drug delivery system to deliver the model drug efficiently for cancer treatment. This study is not only expected to offer better drug delivery options in comparison to conventional drug therapies, but it also shows how dosing-related side effects and toxicity can be overcome.
Preparation of the Emulsions
Oil-in-water (O/W) emulsions were formulated by mixing different ratios of oil, surfactant, and aqueous solutions. The aqueous phase was prepared by dissolving the chitosan derivatives (0.25 g) in distilled water (44.4 mL), followed by dropwise addition to olive oil (5 mL) and 0.25 g of T80, SLS, Span 20 and PEG, respectively. The mixture was continuously homogenized for 2 min at 10,000 rpm using a homogenizer (Daihan Scientific Co. Ltd., Seoul 136120, Korea). The prepared emulsions were finally stored at 25 °C for further experiments.
Size and Zeta-Potential
The size and zeta-potential of the emulsions were determined using the Malvern Zetasizer Nano ZS90 (Malvern Instruments Ltd., Malvern, Worcestershire, WR14 1AT, United Kingdom) as per the standard procedure with a minor modification [24]. Briefly, the test sample was diluted with ultrapure water at a 1:10 ratio. It was measured at a 90° angle using a disposable electrode cuvette after rinsing with ethanol and ultrapure water.
Morphology of Emulsions
A light microscope (CX41RF, Olympus, Shinjuku-Ku, 163-0914 Tokyo, Japan) was used to observe the microscopic morphology of the emulsions. It was equipped with a digital eyepiece connected with a camera. A total of 1 mL of the emulsions was dropped on the glass slide and a thin smear was formed under the microscope to observe the shape and size of the emulsions.
pH of the Emulsions
The pH of a dermal emulsion is an important factor to be considered for skin compatibility. The electrode of the pH meter was immersed in 10% of each emulsion to detect the pH [25].
Viscosity Determination
A Brookfield viscometer (RVTD, Middleboro, Stoughton, 02072-MA, USA) equipped with a UN-adapter was used to measure the viscosity of the emulsions at a temperature of 25 °C. All the experiments were carried out in triplicate.
Emulsification and Phase Separation Study
The emulsions were diluted in series and any change in phase was observed optically. Briefly, oil was added to the surfactants in a series of ratios ranging from 1:1 to 1:9 and added to 50 mL of distilled water. The mixture was kept for 2 h, and the UV absorbance was then measured at 260 nm by a UV-vis spectrophotometer (Shimadzu 1601, Shimadzu, Kyoto 604-8511, Japan). In the phase separation study, 1 mL of each emulsion was taken into three different 10 mL volumetric flasks, and distilled water was added up to the mark to dilute it. Each flask was inverted several times until a homogeneous mixture was formed and stored for 2 h. Visual inspection was performed to determine the phase separation of the emulsions according to the previous method [26].
Drug Loading
Different quantities of the drug, in increasing order, were dissolved in the emulsions. Next, centrifugation was performed at 5000 rpm for 30 min to collect the supernatant, which was then diluted with a suitable solvent. The absorbance was measured at 260 nm (Shimadzu 1601, Shimadzu, Kyoto 604-8511, Japan) [27]. The percent drug entrapment efficiency and loading capacity were estimated according to Equation (1):
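Equation (1) itself is not reproduced in the extracted text. A commonly used form of the entrapment efficiency and loading capacity calculation, consistent with the supernatant (free-drug) measurement described above, is sketched here; the symbols W_total, W_free and W_carrier are introduced only for illustration and are not taken from the original.

    \%EE = \frac{W_{total} - W_{free}}{W_{total}} \times 100, \qquad \%LC = \frac{W_{total} - W_{free}}{W_{carrier}} \times 100    (1)

where W_total is the total amount of drug added, W_free is the unentrapped drug quantified in the supernatant, and W_carrier is the total mass of the carrier material.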
Stability Study of the Emulsions
To determine the stability of the emulsions, they were stored at various temperatures (5 °C, 25 °C and 40 °C) for up to 30 days and visually inspected for any cracking, creaming or phase separation [28].
Skin Irritancy Test
The skin irritancy test was performed on male Sprague Dawley rats according to the standard procedure [29]. A single dose (1 mL) of each emulsion was applied to the left ear (treatment) and the right ear (control) of the rat, and any development of erythema was monitored over a period of 24 h.
In Vitro Release of the Emulsions
The in vitro drug release profile of the emulsions was determined using Franz diffusion cells (K-C type, Pakistan). The donor and receiver chambers were separated by a cellophane membrane (pore size: 0.45 µm). For the study, 1 mL of the emulsion was placed on the surface of the prepared cellophane membrane. Phosphate buffered saline (PBS, pH 5.5) was used as the dissolution medium. The temperature of the cells (32 ± 1 °C) was maintained by the surrounding water jacket (to simulate the skin surface temperature) and the dissolution medium was constantly stirred at 100 rpm. Samples (2 mL) were taken at specific time intervals (0, 0.5, 1, 2, 4, 8, 12, 16, 20 and 24 h) and diluted to 5 mL with fresh dissolution medium. The samples were scanned at 260 nm with a UV spectrophotometer (Shimadzu 1601, Shimadzu, Kyoto 604-8511, Japan), and the percentage (%) of cumulative drug release was estimated [30,31].
Another method used to study the release of 5-FU from the emulsions was the centrifugation method, using a PharmTest dissolution apparatus (Pharma Test Apparatebau AG, Siemensstrasse 5, D-63512 Hainburg, Germany). A total of 5 mL of the emulsion was added to USP apparatus II (paddle) with 500 mL of release medium (phosphate buffer, pH 5.5). The temperature of the release medium was maintained at 32 ± 1 °C and the paddle rotation rate was maintained at 100 rpm. Samples of 5 mL were taken at specific time intervals (0, 0.5, 1, 2, 4, 8, 12, 16, 20 and 24 h) and centrifuged at 1000× g for 5 min. The supernatant was collected and analyzed on a UV spectrophotometer [32]. The dissolution profile of the 5-FU solution was obtained in the same way.
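As a worked illustration of the cumulative-release calculation described above (sample withdrawal with medium replacement), the short Python sketch below applies the standard volume-correction formula; the concentration values, volumes and dose are placeholders for the example, not data from this study.

    # Cumulative drug release (%) with correction for the volume sampled at each time point.
    # Assumed setup: receptor/vessel volume V, sample volume v withdrawn and replaced
    # with fresh medium; conc[i] is the measured concentration (mg/mL) at sample i.
    def cumulative_release(conc, V=500.0, v=5.0, dose_mg=250.0):
        released = []
        correction = 0.0
        for c in conc:
            amount = c * V + correction          # mg accounted for at this time point
            released.append(100.0 * amount / dose_mg)
            correction += c * v                  # drug removed with each withdrawn sample
        return released

    # Example with made-up concentrations (mg/mL) at successive sampling times
    print(cumulative_release([0.05, 0.12, 0.20, 0.31, 0.40]))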
Drug Release Kinetics
The Weibull equation was considered to determine the drug release kinetics in this study [33,34]. The obtained data were fitted according to Equation (2).
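Equation (2) is missing from the extracted text. A common parameterization of the Weibull release model, consistent with the scale parameter a and shape parameter b defined below, is assumed here:

    \frac{M_t}{M_\infty} = 1 - \exp\left( -\frac{t^{b}}{a} \right)    (2)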
where M_t is the accrued mass dissolved at time t, M_∞ is the mass dissolved at infinite time, a is the scale parameter and b is the shape parameter.
Ex Vivo Skin Permeation of the Drug
Franz diffusion cells (K-C type, locally made, Pakistan) were used to determine skin permeation in the ex vivo studies with freshly collected rat skin. Sprague Dawley rats (200-250 g body weight) were humanely sacrificed by cervical dislocation. The abdominal section was marked and carefully shaved using a sharp razor blade, and excess fat was removed from the subcutaneous side of the entire abdominal skin using surgical scissors. The skin was then gently washed with normal saline (0.9% NaCl), wrapped in aluminum foil and stored at −20 °C until further use. Before the experiment, the skin was pre-hydrated for 2 h to soften it. The stratum corneum (epidermal) side was placed facing the donor chamber while the dermal side faced the receiver chamber, and the skin was carefully mounted in the Franz diffusion cell. The temperature was maintained at 37 ± 1 °C, and the receiver compartment was filled with PBS (pH 7.4) and stirred by a magnetic rotor at 100 rpm. The donor compartment was filled with 1 mL of the emulsion and sealed with parafilm to maintain occlusive conditions. Samples (2 mL) were withdrawn into a tube at regular intervals (0, 0.5, 1, 2, 4, 8, 12, 16, 20 and 24 h), filtered through a membrane filter (0.2 µm), and the absorbance was measured with a UV spectrophotometer [35].
Skin Drug Retention
Following the permeation test, the skin was washed with phosphate buffered saline (PBS, pH 7.4) to remove residual formulation from the surface. The diffusion area was then cut into small sections and dispersed in PBS (pH 7.4), sonicated for 10 min and homogenized for 5 min. The homogenized sample was centrifuged for 15 min to collect the supernatant, which was finally filtered through an HPLC filter (0.2 µm), and the absorbance was measured at 260 nm (Shimadzu 1601, Shimadzu, Kyoto 604-8511, Japan) to determine the amount of drug retained in the skin.
Physicochemical Characterization of the Skin
The mechanism of skin permeation of the emulsions was investigated through physicochemical characterization of the tested samples using ATR-FTIR analysis. Following the drug permeation experiment, the skin was removed, washed gently with PBS (pH 7.4) to eliminate the emulsion from the surface, and placed on a zinc selenide crystal. ATR-FTIR spectra were recorded over 675-4000 cm−1 at a resolution of 16 cm−1 with an acquisition time of 1.5 min [36].
Statistical Analysis
The obtained data were statistically analyzed using one-way analysis of variance (ANOVA) and t-tests (IBM® SPSS® Statistics version 19, Armonk, NY 10504-1722, United States; Minitab® version 20, Minitab LLC, State College, PA 16801, USA). Differences were considered statistically significant at p < 0.05. All data are reported as mean ± standard deviation (S.D.) of triplicate measurements (n = 3).
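For illustration only, a minimal Python sketch of the kind of comparison described above (one-way ANOVA followed by a pairwise t-test at a 0.05 significance level); the arrays hold placeholder values, not data from this study, and SciPy is assumed here in place of the SPSS/Minitab packages actually used.

    # Minimal sketch of the statistical comparison described in the text,
    # using SciPy instead of SPSS/Minitab (placeholder data, n = 3 per group).
    from scipy import stats

    f1 = [74.3, 72.5, 76.0]   # e.g., %EE replicates for formulation F1 (illustrative)
    f2 = [69.7, 66.1, 73.0]
    f4 = [80.4, 77.5, 83.1]

    f_stat, p_anova = stats.f_oneway(f1, f2, f4)   # one-way ANOVA across groups
    t_stat, p_ttest = stats.ttest_ind(f4, f1)      # pairwise t-test, F4 vs F1

    print(f"ANOVA p = {p_anova:.3f}; F4 vs F1 t-test p = {p_ttest:.3f}")
    print("significant" if p_anova < 0.05 else "not significant")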
Physicochemical Characterization of the Emulsions
The physicochemical properties provide a better insight into the formulation dynamics and a better understanding of the product concerning its application.
Droplet Size and Zeta-Potential
The overall particle size distribution indicates the quality of the formulation. The formulated emulsions exhibited a mean droplet size ranging from 109.6 ± 7.23 to 141.3 ± 9.31 nm (Table 1), with a narrow polydispersity index indicating a homogeneous system, as shown in Figure 1a. The combination of the surfactant (T80) and the co-surfactant (PEG) resulted in the smallest droplet size, whereas SLS resulted in the largest (Table 1). The zeta-potential is another important parameter that determines the stability of emulsions, as well as their interaction with biological tissues. The prepared emulsions exhibited zeta-potentials ranging from +3.7 ± 0.61 mV to +5.5 ± 0.52 mV, attributed to the presence of chitosan, as shown in Figure 1b and Table 1. The use of a mixed surfactant/co-surfactant system resulted in a higher positive zeta-potential by keeping chitosan at the surface of the nanodroplets.
Drug Loading and % Entrapment Efficiency
Entrapment efficiency (EE) is an important parameter as it determines the dose and packaging of the emulsions. The percent EE data of the formulated emulsions shown in Table 1 reveal no significant difference (p > 0.05) among F1 (74.3 ± 2.1), F2 (69.7 ± 3.5) and F3 (76.9 ± 2.7). However, the %EE of the F4 emulsion (80.4 ± 3.2) was significantly different from that of F1, F2 and F3. These data clearly suggest that the combined use of a surfactant and co-surfactant plays a vital role in enhancing the %EE of 5-FU. The results of the current investigation are also supported by the studies of Artiga-Artigas et al. and Sarheed et al., which reported that the low surface tension between nanodroplets covered with a surfactant improves drug solubility and prevents droplet coalescence, ensuring drug retention [37].
Morphology of Emulsions
The morphology has an important influence on the stability of formulations. A microscopic image of the formulated emulsions is shown in Figure 1c. The shape of the nano droplets in the emulsion was found to be spherical. Different compositions of nano formulation did not show any difference in droplet shape.
pH of Emulsions
The acid-base balance plays an important role in development of emulsions as it reflects the suitability of the emulsions on the skin. Table 2 indicates the pH of the formulated emulsions. All formulations showed a pH in the range of 5-6 (that is close to the pH of the skin), which justifies their suitability for topical application [38]. Data were expressed as mean ± S.D., n = 3.
Viscosity of Emulsions
Viscosity plays a key role in emulsion stability and spreadability. The viscosities of all the emulsions were found to be lower than 20 cps (Table 2), except for the F2 formulation that contained SLS. Formulations F1, F3 and F4 exhibited low viscosity values, which is an ideal property of emulsions [39].
Ease of Emulsification and the Phase Separation Study
Transmittance and phase dilution studies were conducted to estimate the emulsification and phase separation of the emulsions. The ease of emulsification indicates the high quality of emulsions as well as the therapeutic efficacy. During the phase separation study, as all the emulsions exhibited no phase separation, they were subjected to an additional assessment. Among the different formulations, F4 showed the maximum transmittance (Table 3), indicating better emulsification properties in comparison to the others.
Skin Irritancy Test
The compatibility of the emulsions was also tested in terms of skin irritancy. The test was performed on the rat ear; the presence of erythema was related to the irritancy potential of the emulsions to the skin. The resultant data given in Table 3 reveal that emulsions F1, F3 and F4 were well tolerated for 24 h, whereas the F2 emulsion exhibited skin irritancy, which is presumed to be due to the presence of SLS in emulsions [40].
Stability Study
The normality of the data distribution was assessed with a suitable test, such as the Ryan-Joiner (equivalent to the Shapiro-Wilk) test or the Kolmogorov-Smirnov (K-S) test, before applying one-way ANOVA. A p-value greater than 0.05 was taken to indicate no significant departure from the null hypothesis.
A statistically significant result (p ≤ 0.05) was taken as grounds to reject the null hypothesis, whereas p > 0.05 indicated no detectable effect or change. The stability of the chitosan-coated emulsions was monitored for 30 days. During this period there was no phase separation, cracking or sedimentation in the emulsions; however, a small variation in droplet size, ranging from 10-25 nm, was observed (Figure 2a), and the normality test for the size data gave p-values greater than 0.15. The F4 formulation was the most stable of the nanoemulsions with respect to size, which is attributed to its small droplet size enhancing emulsion stability [41,42]. During storage, the formulations were also subjected to pH measurement, as shown in Figure 2b. The data show that there were no effects on the pH of formulations F1 and F4 at days 10, 20 and 30 (p > 0.15), whereas significant (p < 0.01) changes were recorded for formulations F2 (days 20 and 30) and F3 (days 10 and 20). The appropriate pH for the topical application of emulsions usually ranges from 4 to 6, and the pH of the stored emulsions remained within this range, indicating the stability of the formulations. The viscosity of the emulsions was also measured during storage (Figure 2c). No significant (p > 0.15) changes in viscosity occurred for formulations F1, F3 and F4 (days 10, 20 and 30) during the stability period, except for the F2 formulation (p < 0.01, days 20 and 30), further supporting the stability of the pharmaceutical formulations. The SLS-containing emulsion exhibited a higher viscosity than the other surfactant-containing emulsions. The durability of the product was assessed by observing changes in droplet size, pH and viscosity of the four formulations (F1 to F4); no change occurred in any of the three parameters for the F4 formulation. Therefore, this study suggests that F4 is the most stable formulation based on the obtained data.
In Vitro Drug Release Study
Franz diffusion cells were employed to determine any premature drug release from the developed emulsions on the skin surface. A chitosan coating was applied to the emulsions to control the release of the drug. The release from the chitosan-coated 5-FU emulsions was compared with that from uncoated 5-FU emulsions in buffer medium at pH 5.5; the chitosan-coated emulsions showed significantly different (p < 0.05) 5-FU release compared with the uncoated emulsions. It is noteworthy that 5-FU release was retarded from all the formulations, with less than 35% of 5-FU released within the first 500 min, whereas more than 80% of 5-FU was released from the drug solution (Figure 3a). The variation in 5-FU release among the formulations was attributed to the types and concentrations of surfactant used. The release of 5-FU from the emulsions was also evaluated using the centrifugation method. The uncoated 5-FU emulsions released the drug completely (100%) within 2 h, whereas the coated formulations retarded the release of 5-FU. According to the centrifugation method, the release of the drug from the F1 emulsion was significantly higher (p < 0.05) than from the other formulations, with more than 50% of the drug released between 300 and 600 min, as shown in Figure 3b.
Drug Release Kinetics
The release kinetic mechanism was determined using the Weibull equation, as shown in Table 4. The value of b varied from 0.643 ± 0.54 to 0.897 ± 0.16 and R2 ranged from 0.6951 ± 0.47 to 0.8860 ± 0.96. The release kinetic data indicated that diffusion through normal Euclidean space governed the release from F1, whereas formulations F2, F3 and F4 also followed a diffusion mechanism within a normal Euclidean substrate but with release behavior differing from that of F1. The drug release mechanism model allows for a better understanding of the delivery system and helps elucidate the carrier mechanism [43]; it allows the drug release rate from the matrix to be predicted, which provides a preliminary basis for designing the formulation.

Figure 4 shows the amount of 5-FU that permeated across the rat skin over 24 h into the receptor compartment (PBS, pH 7.4) of the Franz diffusion cell. The combination of the surfactant (T80) and the co-surfactant (PEG) showed the best drug penetration profile in the present study. In comparison to the control group (5-FU solution), the formulated 5-FU emulsions were able to penetrate through the skin. A higher penetration rate was recorded initially, from 0 to 4 h, which gradually reached a plateau; this is attributed to the small droplet size of the emulsions, the concentration gradient, the chitosan and the surfactants used, which directly enhanced penetration through the stratum corneum (SC) of the skin surface [39]. It can be concluded from the results that a reduction in droplet size increases the permeability of emulsions through the skin: the smaller the droplet size, the better the spreadability over a large surface area, allowing the incorporated drug to be transported deeper through the skin [44]. The formulation facilitates the skin diffusion process under the influence of the concentration gradient. Viscosity is another important factor in the penetration of drug molecules across the skin; an optimum viscosity is needed for the emulsions to pass through the SC, as emulsions with too low or too high a viscosity will flow off or become sticky. Other factors such as droplet size and high spreadability also contribute to easier penetration across the skin [45], and increasing the viscosity of the emulsions shifts the formulation from transdermal towards topical drug delivery [46,47]. All formulations coated with chitosan exhibited greater skin permeation of 5-FU. This may be due to the cationic nature of the chitosan polysaccharide, which interacts with the negatively charged keratin in the skin (lipids and protein) and promotes drug permeation across the skin. However, the SLS-containing formulation showed the lowest permeation across the skin, as shown in Figure 4, possibly because SLS simply strips the protective lipid layer above its critical micelle concentration (CMC) [38]. Formulation F3 contained Span 20 as a surfactant, which may have affected the intercellular lipids by enhancing fluidity and thus diffusivity [39]. Formulations F1 and F4 incorporated T80, which increased drug solubilization and penetration deeper into the intercellular lipids of the SC owing to the non-ionic nature of this surfactant; T80 tends to interact and bind with skin keratin filaments, disrupting the corneocytes. The combination of T80 and PEG 4000 in F4 exhibited the highest permeation across the skin, as shown in Figure 4.
The surfactants may have a dual effect on the skin components in enhancing drug permeation. PEG 4000 also acts as a permeation enhancer because of its potential interaction with the lipid constituents of the skin layer. The diverse effects of the surfactants on drug infiltration depend largely on their ability to disrupt or fluidize the lipid composition of the SC [44]. Hence, F4 can be considered a potential topical formulation because of its cutaneous retention of the drug within the skin; the small droplet size and the synergistic effect of the surfactant combination could be the main factors driving drug accumulation in the skin [41]. The results also revealed that differences in particle size influence drug uptake within the skin layers, although the exact mechanism behind this increased uptake remains unclear and requires further investigation. In the corresponding figure, F4 exhibited significant differences from the rest of the formulations; data are expressed as mean ± S.D., n = 3, with * denoting p < 0.05 and ** denoting p < 0.01.
The Physicochemical Characterization of the Skin
After the permeation study, the tested skin was subjected to ATR-FTIR analysis to determine the mechanism of drug permeation across the skin membrane. The ATR-FTIR data reveal that all formulations except F2 were effective in disturbing or fluidizing the lipids and proteins of the skin. The fluidization of skin lipids and proteins intensified the permeation and retention of the drug in the skin. Peaks in the epidermis at 3300 cm−1, 2920 cm−1 and 2850 cm−1 shifted to higher frequencies in the emulsion-treated samples (F1, F3, F4), as shown in Figure 6. The shift of the peak from 3300 cm−1 to 3330 cm−1 corresponds to the O-H and N-H groups of keratin, ceramide and additional lipophilic components in the SC [18,37]. This implies that the emulsions primarily affect the lipids and proteins of the SC, resulting in a high level of drug permeation across the skin. In the dermis, the spectra likewise showed a fluidized skin structure, with peaks shifting from 3280 cm−1 to 3330 cm−1, corresponding to the O-H and N-H groups [47]. The peaks shifting from 2850 cm−1 to 2930 cm−1 correspond to the asymmetric CH2 stretching of skin keratin, lipids and ceramide in formulations F1, F3 and F4, as shown in Figure 7.
Conclusions
The current study set out to develop modified chitosan-coated emulsions containing 5-fluorouracil for transdermal delivery. The chitosan-coated 5-FU emulsions, incorporating olive oil and a combination of surfactants, exhibited a uniform droplet size indicating potential stability, a pH within the acceptable range, suitably controlled drug release, increased skin permeability and deeper penetration across the skin. The drug permeation data suggested that formulation F4 exhibited the highest permeation across the skin over 1500 min, which could be attributed to the combination of T80 and PEG 4000 overcoming the obstacles to drug solubilization and penetration through interaction and binding with skin keratin filaments. The modified chitosan-based 5-FU emulsions developed in this study can be considered promising topical carriers for the controlled-release delivery of 5-FU owing to their cutaneous retention of the drug within the skin. These emulsions are not only expected to offer better drug delivery than conventional drug therapies but are also predicted to overcome dosing-related side effects and toxicity. However, further optimization studies, including stabilization and targeting, should be performed both in vitro and in vivo, and the formulated 5-FU emulsions should be taken forward into further clinical studies.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 7,001.4 | 2021-09-29T00:00:00.000 | [
"Medicine",
"Materials Science",
"Chemistry"
] |
Spherical-Wave Source-Scattering Matrix Analysis of Coupled Antennas; A General System Two-Port Solution
Manuscript received May 1982; revised May 26, 1987. The author is with the Electromagnetic Fields Division, National Bureau of Standards, Boulder, CO 80303. IEEE Log Number 8717294.
Spherical-Wave Source-Scattering Matrix Analysis of Coupled Antennas; A General System Two-Port Solution Abstract-Expressions are given for the coupling between two antennas in terms of each antenna's spherical-wave source-scattering matrix. A comparison with the "classical" scattering matrix representation is given in sufficient detail to permit conversion back and forth between the source-scattering matrix and the classical scattering matrix. Expressions for the transmission formulas, showing two different expressions corresponding to reversing the direction of propagation, are given. However, if both antennas are reciprocal with equal characteristic waveguide impedances, then the two-port scattering matrix is a symmetric matrix.
EXPRESSIONS FOR THE coupling between two antennas when each antenna is described by a spherical-wave source-scattering matrix representation are presented. Our expressions are derived using matrix algebra, thus avoiding the cumbersome modal-subscript and summation notation otherwise required. We explicitly account for multiple reflections between the antennas, and exhibit formal differences between transmission formulas when the propagation directions are reversed. Thus, we present a complete analytical picture from which we extract the simplified coupling equation commonly used as a starting point for analyzing spherical near-field scanning. Previously, Jensen [1] expressed the transmission from an arbitrary antenna to a probe, using the Lorentz reciprocity relation, in order to formulate a spherical near-field to far-field transformation. Subsequently, Wacker [2] reexpressed Jensen's transmission formula from a scattering matrix approach, neglecting multiple reflections. In the general context of describing antenna coupling using scattering-matrix analysis, the earliest work was the plane-wave scattering-matrix formulation of Kerns and Dayhoff [3]. Wasylkiwskyj and Kahn [4] used spherical-wave scattering matrices to express the coupling between minimum-scattering antennas, which by the definition of minimum scattering ignores multiple reflections between the antennas. Yaghjian [5] gave a complete cylindrical-wave source-scattering matrix analysis of two-antenna coupling presenting the first-order multiple-reflection term. Yaghjian also coined the term source-scattering matrix to distinguish a scattering matrix formulation, using cylindrical (or spherical) waves, in which the exciting spatial modes contain Bessel functions of the first kind as the radial-distance function. This particular radial-variable function enters into the formulation in expressing the effect, on the cylindrical or spherical modes, of radially translating the coordinate origin. A source-scattering matrix formalism is also introduced by Appel-Hansen [6, ch. 8], in which he reexpresses Yaghjian's cylindrical analysis [5] and then goes on to reobtain Wacker's result [2]. Finally, a summary version of the present analysis is given in the 1981 Antennas and Propagation Society Symposium Digest [7]. The spherical-wave source-scattering matrix representation of an antenna may be expressed as in (1). This constitutes the source-scattering matrix representation for the "exterior" region, which consists of the region exterior to a spherical surface enclosing the antenna. Here, b0 and a0
represent the emergent and incident mode coefficients at the waveguide leads to the antenna, while Q and P are infinite column matrices representing the emergent and exciting spatial-mode coefficients at the hypothetical spherical boundary enclosing the antenna. For simplicity, only a single waveguide feed mode is assumed to propagate. The normalization of the modal coefficient a0 is such that (1/2)|a0|^2/Z0 represents the incident power at a hypothetical terminal surface in the waveguide feed, where Z0 is the characteristic impedance of the waveguide. The elements of the P matrix are coefficients of vector spherical wave functions whose product sums up to equal the electric field incident on the enclosing spherical boundary. Also, Γ is the waveguide reflection coefficient of the antenna, T is an infinite column matrix representing the antenna's transmission properties, R is an infinite row matrix representing the antenna's receiving properties, while S is an infinite square matrix representing the antenna's scattering properties. The spatial modes associated with the Q matrix each contain a spherical Hankel function of the first kind, which represents an outgoing-wave mode when an e^(-iwt) time dependence is assumed. The spatial modes associated with the P matrix, on the other hand, each contain a spherical Bessel function as the radial coordinate function.
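The explicit matrix relation (1) is lost in the extraction. Based on the element definitions just given (Γ, T, R and S relating the feed coefficients a0, b0 to the spatial-mode coefficient matrices P and Q), the source-scattering matrix relation presumably takes the standard form sketched below; the exact notation is an assumption, not a quotation from the paper.

    b_0 = \Gamma a_0 + R P, \qquad Q = T a_0 + S P    (1)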
We can contrast the source-scattering matrix (1) with the classical spherical-wave scattering matrix [8], for which the incident or exciting spatial-wave modes contain spherical Hankel functions of the second kind. Thus, the classical scattering matrix for the "exterior" region may be expressed in terms of the identity matrix I, U = (1/2)P + Q, and V = (1/2)P. The attraction of this representation is that each mode clearly exhibits the characteristics of an incoming or outgoing wave.
In Fig. 1 we show two antennas; the antenna on the left is associated with the unprimed coordinate system, while the antenna on the right is associated with the doubly primed coordinate system. In Fig. 1, the Euler angles φ, θ, χ describe the orientation of the singly primed coordinate system with respect to the unprimed coordinate system. Here, z' and z'' are collinear while the remaining singly primed coordinates are, respectively, parallel to their doubly primed coordinate system counterparts. Also, r is the separation distance between the unprimed and doubly primed coordinate systems. Equation (1) gives the source-scattering matrix for the antenna on the left, while the exterior-region source-scattering matrix for the antenna on the right is given by the corresponding relation (2). Here, the carets on the scattering matrix elements designate the right-hand antenna, while the primes designate the coordinate system. Similarly, the absence of carets on the scattering matrix elements in (1) designates that the left-hand antenna is represented, while the absence of primes designates that the scattering matrix elements are expressed with respect to the unprimed coordinate system.
We now wish to determine the effect, on the scattering matrix elements Γ̂'', T̂'', R̂'' and Ŝ'', of transforming the doubly primed coordinate system into the unprimed coordinate system. That is, although the two antennas shown in Fig. 1 remain fixed in space, we can employ a coordinate-system transformation to obtain a scattering-matrix representation, referenced to the unprimed coordinate system, for the antenna on the right in Fig. 1. Then, with the scattering matrices of both antennas referenced to the same coordinate system, the mutual coupling equations can readily be obtained. The resulting "interior-region" source-scattering matrix representation for the antenna on the right, referenced to the unprimed coordinate system, would relate the same spatial-mode coefficient matrices, P and Q, that are related by (1). Thus, we seek the transformed source-scattering matrix elements in relation (3). This "interior-region" representation is valid for the region interior to a sphere, centered about the unprimed coordinate system origin, whose volume just excludes the antenna represented (i.e., the antenna on the right in Fig. 1). Thus we have a common region between the regions of validity of (1) and (3) in which both representations describe the field. In Fig. 1, the solid-line circles each denote the boundary of the circumscribed antenna's "exterior" region representation, while the dotted-line circle denotes the boundary of the right-hand antenna's "interior" region representation. The common region is represented as the space between the two concentric circles. We still require a determination of the scattering matrix elements of (3) in terms of the known scattering matrix elements of (2). These results follow upon applying the coordinate-system transformation operations to the matrix elements in (2) and are given by (4). Here, D is a known [10] coordinate-system rotation matrix corresponding to a rotation through the Euler angles φ, θ, χ.
D† denotes the transposed complex conjugate of the D matrix, which means that it corresponds to the inverse rotation. Also, C is a coordinate-system translation matrix corresponding to a rigid coordinate-system translation a distance r along the z' axis. Upon substituting (5) into (6) we are able to evaluate the equivalent system two-port scattering matrix, which is defined as in [12] and is formed by considering just the waveguide leads to the two antennas. There results
Γ + R Ŝ (I − S Ŝ)^(-1) T        R (I − Ŝ S)^(-1) T̂        (7)
It may be noted that this result is similar to that obtained by Kerns [12] using plane-wave scattering-matrix analysis. Here, we have used the matrix identity (I − Ŝ S)^(-1) Ŝ = Ŝ (I − S Ŝ)^(-1). As special cases of (7), we have the transmission formulas,
neglecting multiple reflections,
M = R̂″ C D† T        (8)
and the corresponding formula (9) for the opposite direction of propagation. The principal diagonal elements of (7) just reduce to the waveguide reflection coefficients when multiple reflections are neglected. At this point, we present expressions showing the angular dependence of the two oppositely directed transmission formulas. The spherical angles θ and φ are defined in Fig. 1. The d^n_{μp}(θ) functions are closely related [10] to the Jacobi polynomials.
As special cases of (10), when the antenna on the right in Fig. 1 is an ideal x-directed dipole, it has been verified for χ = 0 and χ = π/2 that our expression (10) is proportional to the θ and φ components, respectively, of the vector-spherical-wave-function expansion of the electric field radiated by the antenna on the left. We conclude by noting the reciprocity relation that exists between (10) and (11) when both antennas in Fig. 1 are reciprocal. For this case, it is shown in Appendix II that the preceding expressions are related through a reciprocity relation in which Z0 and Z0″, the characteristic impedances of the waveguide leads to the two antennas, appear.
APPENDIX I: COORDINATE-SYSTEM TRANSLATION AND ROTATION TRANSFORMATIONS OF THE SOURCE-SCATTERING MATRIX ELEMENTS
Following Stratton [14], we introduce vector spherical wave functions M_mn and N_mn, which satisfy the vector Helmholtz equation, ∇²F + k²F = 0, k = 2π/λ, where λ is the wavelength. Here, N_mn = (1/k)∇ × M_mn and M_mn = −r̂ × ∇ψ_mn, where r̂ is the radial vector and ψ_mn satisfies the scalar wave equation (∇² + k²)ψ_mn = 0. In particular, ψ_mn contains the associated Legendre function of degree n and order m as well as the spherical Bessel function of the first kind of order n. We also introduce the vector spherical wave functions N(1)_mn = (1/k)∇ × M(1)_mn and M(1)_mn = −r̂ × ∇ψ(1)_mn, where ψ(1)_mn contains a spherical Hankel function of the first kind as the function of the radial coordinate. Thus, the first set of vector basis functions would be associated with the coefficients in the P matrix, while the second set of vector basis functions would be associated with the coefficients in the Q matrix, where the P and Q matrices are introduced by (1). We have yet to specify the normalization and the functional dependence on the equatorial angle φ for the scalar spherical wave functions ψ_mn and ψ(1)_mn, as constraints on these choices are imposed by the rotation-of-coordinates transformation. Rather than carry two separate notations for the vector spherical wave functions, we define F_smn, with similar definitions for F(1)_smn, s = 1, 2. We now complete the determination of F_smn and F(1)_smn by requiring that the normalization and choice of the angular-variable functions be such that, under a rotation-of-coordinates transformation, the vector spherical wave functions transform according to (14), with the sum running from m = −n to n, and similarly for F(1)_smn. Here, the primes on the vector spherical wave functions indicate that these functions are defined with respect to rotated coordinates, where the singly primed coordinate system's axes are parallel to the axes of the doubly primed coordinate system as shown in Fig. 1. It should be obvious, for corresponding matrix elements in M(1) and N(1), that one or the other of these matrix elements will be equal to zero.
We can now write (15) as the matrix equation (16). With the vector-spherical-wave-function row matrices defined, we can express the electric field vector E, with respect to the unprimed coordinate system, as (21). Similarly, the electric field vector can be expressed with respect to the doubly primed coordinate system as E = F″ P″ + F(1)″ Q″. Similarly, upon applying (18) and (19) to the second term on the right in (22) and comparing that result with (21), we see that P = D α Q″. (25) Now from (2) we have Q″ = T̂″ a0″ + Ŝ″ P″. If we substitute this expression into (25) and use (24) for P″, we obtain P = D α T̂″ a0″ + D α Ŝ″ C D† Q. (26)
[11]. The α matrix corresponds to a coordinate-system translation in the opposite direction. The construction of the α and C matrices is presented in the Appendices, where it is shown that the elements of α and C are related by α_ij = (−)^(i+j) C_ij. The source-scattering matrix transformations (4) are derived in Appendix I. The coupling between the two antennas shown in Fig. 1 is determined by solving (1) and (3) to obtain b0″ = Γ̂ a0″ + R̂ Q and b0 = Γ a0 + R P.
, while the rotation of the right-hand antenna about the z″-axis is characterized by the angle χ. Let us define the translated receiving and transmitting characteristic matrices R′ = R̂″ C and T′ = α T̂″. Then (8) and (9) can be written, respectively, as (10) and (11). Similar expressions have also been obtained in [13]. In the above, the index s characterizes either transverse electric (TE) or transverse magnetic (TM) modes, while m and n characterize the order and degree of the spherical wave functions; the indexes μ and p serve as the corresponding mode indices. Also in the above, R_smn and T_smn are receiving and transmitting matrix elements from (1), while R′_sμn and T′_sμn are translated receiving and transmitting matrix elements characterizing the right-hand antenna. The asterisk in (10) denotes the complex conjugate, while the elements of the D matrix are defined [10, ch. 4] as D^n_{μp}(φ, θ, χ) = e^(−iμφ) d^n_{μp}(θ) e^(−ipχ). (12)
The constraints imposed by (14) on the angular functions used in the vector spherical wave functions are similar to the constraints noted by Griffin [15] for specifying the spherical harmonics. Our resulting expressions for M_mn, N_mn and M(1)_mn, N(1)_mn will correspond with those definitions of the vector spherical wave functions given in [16], except that our expressions will require the additional factor sqrt((n−m)!/(n+m)!). Noting that d^n_{0m}(θ) is equal to this square-root factor multiplied by the associated Legendre function P^m_n(cos θ), we can express the scalar spherical wave functions as ψ_mn = j_n(kr) d^n_{0m}(θ) e^(imφ) and ψ(1)_mn = h_n^(1)(kr) d^n_{0m}(θ) e^(imφ), where r, θ, φ denote the spherical coordinates of the observation point, j_n(x) is the spherical Bessel function of the first kind of order n and argument x, while h_n^(1)(x) is the spherical Hankel function of the first kind [17]. The translation-of-coordinates transformation, from the singly primed to the doubly primed coordinate system, can be expressed through the addition theorem (17), in which the lower summation limit is the larger of 1 or |μ|. The fact that an addition-theorem expansion of this type must exist is self-evident, since M_μν and N_μν constitute a complete set of basis functions for describing fields (or field components) at and in the vicinity of the origin O″. The above expansion applies to a rigid coordinate-system translation from O = O′ to O″ a distance r along the z″-axis (see Fig. 1). As stated in [16], when translating from O″ to O′ the coefficients A and B are preceded by the factors (−)^(n+ν) and (−)^(n+ν+1), respectively. The translation coefficients corresponding to this latter case are given in [16], from which we define A and B, except that our definitions include the additional factor sqrt((n−μ)!(ν+μ)!/((n+μ)!(ν−μ)!)) to account for replacing P^μ_n(cos θ) with d^n_{0μ}(θ). Let us now form an infinite row matrix of the vector spherical wave functions F(1)_smn, which we shall call F(1). For our purposes, the matrix element F(1)_smn is located at position I(s, m, n) in F(1), where I(s, m, n) = 2[n² + (s − 1)n − 1] + s + m + n. That is, for a given value of n, all of the s = 1 elements would be grouped together, starting with m = −n and running through m = +n; these in turn would be followed by all of the s = 2 elements and then by elements corresponding to the next value of n. We also define the row matrices M(1) and N(1) with suitably interspaced zero elements such that
suitably interspaced zero elements. For translation in the opposite direction we have the matrix equation F(1)″ = F′ α. (18) From (31) of Appendix II, it is apparent that the elements of the α and C matrices are related by α_ij = (−)^(i+j) C_ij. Next, we can rewrite (14) in the form F′ = F D. (19) Moreover, since D is a unitary matrix, D^(−1) = D†, so that F = F′ D†. (20)
Let us consider the first term on the right in (21). From (16) and (20) we obtain (23). Now the right-hand side of (23) represents the exciting electric field with respect to the doubly primed coordinate system. Consequently, upon comparing (23) with (22) we obtain (24). | 4,849.6 | 1987-12-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Spin Valve Behavior in Current-Perpendicular-to-Plane Crossover-Structural Fe3Si/FeSi2/Fe3Si Trilayered Junctions
Yuki Asai, Ken-ichiro Sakai*, Kazuya Ishibashi, Kaoru Takeda, and Tsuyoshi Yoshitake**
1 Department of Applied Science for Electronics and Materials, Kyushu University, Kasuga, Fukuoka 816-8580, Japan; 2 Department of Control and Information Systems Engineering, Kurume National College of Technology, Kurume, Fukuoka 830-8555, Japan; 3 Department of Electrical Engineering, Fukuoka Institute of Technology, Fukuoka 811-0295, Japan
We have studied Fe3Si/FeSi2 artificial lattices, in which ferromagnetic Fe3Si and semiconducting FeSi2 layers were alternately stacked on the nanometer scale by facing-targets direct-current sputtering (FTDCS), together with their spin-dependent electrical properties, which are accompanied by a change in the interlayer coupling induced between the Fe3Si layers across the FeSi2 layer. The combination of Fe3Si and FeSi2 has the following merits [10][11][12][13][14][15][16]: (i) magnetoresistance effects in current-perpendicular-to-plane (CPP) structures are easily detectable, since the electrical resistivity of the FeSi2 spacer layers is distinctly larger than that of the Fe3Si layers; (ii) the spin injection efficiency might be higher than that of TMR films; (iii) the epitaxial growth of the Fe3Si layers on Si(111) substrates is maintained up to the top Fe3Si layer across the FeSi2 spacer layers, which is beneficial for the coherent transport of spin-polarized electrons; (iv) Fe3Si is feasible for practical use since it has a high Curie temperature of 840 K and a large saturation magnetization, which is half that of Fe.
Spin-dependent scattering of polarized carriers has been studied through GMR and TMR effects thus far. A switch between the parallel and antiparallel magnetization alignments of ferromagnetic layers in multilayered films, the so-called spin valve, is a key operation for modulating spin-polarized current [17][18][19][20]. Spin valves are classified into two types. One is a multilayered film wherein antiferromagnetic interlayer coupling is induced between ferromagnetic layers at zero magnetic field; the antiparallel alignment is switched to a parallel alignment by applying magnetic fields. The other is a multilayered film comprising ferromagnetic layers with different coercive forces; the antiparallel/parallel alignments, which depend on the applied magnetic field, are induced owing to the different coercive forces. In practice, a difference in coercive force is produced by: (i) combining different materials for the ferromagnetic layers; (ii) employing pinning layers; or (iii) purposely changing the crystalline quality (polycrystalline or epitaxial) and film thickness between the ferromagnetic layers, for instance between the top and bottom layers of trilayered films.
In this work, method (iii) was employed. In addition, in order to produce antiparallel alignments over a wide range of applied magnetic fields, Fe3Si/FeSi2/Fe3Si trilayered films were prepared in a crossover structure by employing a mask method. It was experimentally demonstrated that the magnetic field range over which antiparallel alignments are produced can be extended owing to magnetic shape anisotropy.
Experimental procedure
Fe3Si (700 nm)/FeSi2 (0.75 nm)/Fe3Si (100 nm) trilayered films were deposited by FTDCS combined with a mask method, following the procedure shown schematically in Fig. 1(a). First, a p-type Si(111) substrate with a resistivity of 1000-3000 Ω cm, produced by the floating zone (FZ) method, was cleaned with 1% hydrofluoric acid and rinsed in deionized water before being set into the FTDCS apparatus together with a mask. An Fe3Si bottom layer (100 nm) was deposited on the Si(111) substrate using the first mask, with line widths of X (X = 0.4, 0.6, and 0.8) mm. After the deposition, the sample was temporarily taken out of the FTDCS apparatus to replace the mask. After replacing the mask, the FeSi2 (0.75 nm) and Fe3Si (700 nm) layers were successively deposited. All the depositions were carried out at a substrate temperature of 300 °C. The base pressure was lower than 3×10^-5 Pa and the film deposition was carried out at 1.33×10^-1 Pa. The crystalline structure of the films was characterized by X-ray diffraction (XRD) using Cu Kα radiation. The magnetization curves were measured at room temperature using a vibrating sample magnetometer (VSM). The external magnetic field was applied parallel to the line of the bottom Fe3Si layer (thickness 100 nm), which corresponds to the horizontal direction in Fig. 1(b).
Results and discussion
The 2θ-θ XRD patterns of a Si(111) substrate (as a background) and of the CPP film deposited on Si(111) were measured. The Fe3Si-220 peak is attributable to non-oriented crystallites. Figure 3 shows a pole figure for the Fe3Si-422 plane with a rotation axis of Fe3Si [222]. It was confirmed that the oriented crystallites are also in-plane ordered. Considering all the results, including those of our previous research wherein Fe3Si thin films were epitaxially grown on Si(111) substrates even at room temperature, the bottom Fe3Si layer should be epitaxially grown on the Si(111) substrate. On the other hand, although the top Fe3Si thick layer deposited on the FeSi2 layer might be partially oriented with the same orientation relationship as the bottom layer, it is presumably predominantly polycrystalline owing to the temporary exposure to air during replacement of the masks.
Figure 4 shows the top view of trilayered films prepared with no masks and three types of masks.It is confirmed that the trilayered films prepared with the masks have crossover structures, as expected.
Figure 5 shows the hysteresis loops of the trilayered films.As shown in Fig. 5(a), the trilayered film with no crossover structure exhibits the hysteresis loop with a weak step, which indicated that a difference in the coercive force between the top and bottom Fe3Si layers is not so large.
The hysteresis loops of the crossover-structural films exhibit clear steps, as shown in Figs. 5(b), 5(c), and 5(d), compared with that of the non-crossover-structural film. The clearness of the step is evidently enhanced with decreasing line width, which implies that the effective magnetic field is strongly affected by the geometry, specifically the line width. The effective magnetic field is defined as H_eff = H_ext + H_d, where H_eff is the effective magnetic field, H_ext is the external magnetic field, and H_d is the demagnetizing field.
The demagnetizing field depends strongly on the shape of the ferromagnetic material. With decreasing line width, the difference in the demagnetizing field between the top and bottom Fe3Si layers increases, so the effective magnetic fields applied to the top and bottom Fe3Si layers differ further. Since the magnetic field is applied parallel to the line of the bottom Fe3Si layer, the demagnetizing field in the bottom layer is weak. For the line of the top Fe3Si layer, on the other hand, the magnetic field is applied perpendicular to the line, so the demagnetizing field becomes large. With decreasing line width, the difference in the demagnetizing field between the top and bottom Fe3Si layers becomes extreme; in other words, the effective magnetic field is remarkably reduced for the top Fe3Si layer and hardly changed for the bottom Fe3Si layer.
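As a brief illustration of the shape-anisotropy argument above (standard magnetostatics for a uniformly magnetized body, not an expression taken from the paper), the demagnetizing field can be written in terms of a shape-dependent demagnetizing factor:

    H_d = -N M, \qquad 0 \le N \le 1

where M is the magnetization and N is the demagnetizing factor along the field direction. N is small along the long axis of a narrow line (the bottom layer, with the field applied parallel to its line) and larger across the line width (the top layer, with the field applied perpendicular to its line), which is why narrower lines widen the field range over which the antiparallel alignment survives.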
Considering the above-mentioned magnetic shape anisotropy, the change in the magnetization alignment can be qualitatively understood as follows. Starting from the parallel alignment, the magnetization of the bottom Fe3Si layer follows the reversed applied magnetic field first. As a result, the magnetization of the bottom Fe3Si layer becomes parallel to the reverse magnetic field while the magnetization of the top Fe3Si layer remains antiparallel to it. In this range of reverse magnetic fields, an antiparallel alignment is produced between the magnetizations of the top and bottom Fe3Si layers. With increasing reverse magnetic field, the magnetization of the top Fe3Si layer is eventually reversed and both magnetizations become parallel. The reversal of the top-layer magnetization from the antiparallel to the parallel alignment is further delayed by the enhanced demagnetizing field. Since the demagnetizing field is enhanced with decreasing line width, the crossover-structural films with small line widths exhibit clear steps in their hysteresis loops.
Conclusion
CPP-structural Fe3Si/FeSi2/Fe3Si trilayered junctions with crossover structures were prepared on Si(111) by FTDCS combined with a mask method. The hysteresis loops clearly exhibited steps that originate from the antiparallel alignment between the magnetizations of the top and bottom Fe3Si layers, owing to a difference in the coercive force of the Fe3Si layers. The difference in coercive force is caused by differences in layer thickness and crystalline quality (epitaxial or polycrystalline). In addition, it was experimentally demonstrated that the magnetic field range over which the antiparallel alignment is produced can be extended by forming crossover structures. This is attributable to magnetic shape anisotropy, and the line-width-dependent change in the hysteresis loop profile with applied magnetic field was explained well qualitatively.
"Physics",
"Engineering",
"Materials Science"
] |
Neural Network Modeling Based on the Bayesian Method for Evaluating Shipping Mitigation Measures
: Climate change caused by greenhouse gas emissions is of critical concern to international shipping. A large portfolio of mitigation measures has been developed to mitigate ship gas emissions by reducing ship energy consumption but is constrained by practical considerations, especially cost. There are difficulties in ranking the priority of mitigation measures, due to the uncertainty of ship information and data gathered from onboard instruments and other sources. In response, a neural network model is proposed to evaluate the cost-effectiveness of mitigation measures based on decarbonization. The neural network is further enhanced with a Bayesian method to consider the uncertainties of model parameters. Three of the key advantages of the proposed approach are (i) its ability to simultaneously consider a wide range of sources of information and data that can help improve the robustness of the modeling results; (ii) the ability to take into account the input uncertainties in ranking and selection; (iii) the ability to include marginal costs in evaluating the cost-effectiveness of mitigation measures to facilitate decision making. In brief, a negative "marginal cost-effectiveness" would indicate a priority consideration for a given mitigation measure. In the case study, it was found that weather routing and draft optimization could have negative marginal cost-effectiveness, signaling the importance of prioritizing these measures.
Introduction
With the global economy's continuous growth, global seaborne trade is forecasted to grow by 3.8% between 2018 and 2023 by the United Nations Conference on Trade and Development [1]. Shipping is becoming a significant contributor to global carbon emissions that affect climate change. The Marine Environment Protection Committee (MEPC 72), meeting at the International Maritime Organization (IMO), has published a preliminary strategy to reduce carbon emissions by half by 2050 [2,3]. To help achieve this goal, the IMO has proposed some indicators, such as the Energy Efficiency Design Index (EEDI), to improve ship energy efficiency [4]. For instance, newly built ships need to comply with the hull design index, including the EEDI. IMO has also proposed some emission reduction measures to reduce ship gas emissions, such as the vessel speed reduction imposed on ships [5]. Some mitigation measures have been implemented for most existing ships to save energy and reduce emissions [4].
To reduce ship gas emissions, the IMO has proposed more than 50 mitigation measures, which can be grouped into technical measures and operational measures [6]. The technical measures are related to the marine vessel and equipment for reducing ship fuel consumption. The operational measures include route optimization, speed reduction, and others that do not require engineering improvements to the ship. In order to evaluate the performance of these mitigation measures, it is necessary to evaluate the impact of the corresponding factors for these mitigation measures on ship energy consumption. For this reason, most studies are focused on predicting and estimating the impact of these factors on energy consumption either from a top-down approach or a bottom-up approach. The top-down approach generally relies on the statistics of fuel consumption data, such as those published in [7], to estimate carbon emissions. The bottom-up approach employs detailed contributing factors, such as ship speed, ship size, and other metered data from different data sources, such as the Automatic Identification System (AIS) and the Noon Report, to estimate carbon emissions [8].
The bottom-up approach is usually preferred due to its accuracy and the availability of various data sources. Several models have been developed in the bottom-up approach, such as dynamic regression [9], the LASSO (Least Absolute Shrinkage and Selection Operator) [10], the linear regression model [11], the artificial neural network (ANN) [12], and the Gaussian process (GP) [13]. Studies in [12,14] show that the ANN has better prediction performance when dealing with nonlinear relationships compared with the dynamic regression, linear regression, and LASSO models. The GP and ANN models have comparable performance in prediction and have been applied to evaluate ships' mitigation measures [15].
GP models are generally flexible and can account for various uncertainties associated with input data and variables used in the model [16]. However, the GP's performance deteriorates when the input data dimension or the data size is increased [15,16]. In comparison, ANN models are more suitable for handling high dimensional problems and a large amount of data [15,16]. Due to the high volume of shipping data from various sources, an ANN model is proposed in this paper to evaluate mitigation measures.
ANNs have been widely used to predict energy consumption, such as natural gas consumption [17], thermal energy consumption [18], and ship energy consumption [12]. Particularly in shipping, several ANN models have been developed, such as the recurrent neural network [19], and backpropagation neural network (BPNN) [20,21]. Among the various models, BPNN has demonstrated a better performance in predicting ship energy consumptions. With more detailed environmental data becoming available from many sources [9], more factors (such as weather and marine factors) can be selected to evaluate mitigation measures based on the BPNN.
Despite the fact that the information and data are recorded, very often in a non-duplicative manner, in different databases or sources, most of the studies in the literature tend to focus on the use of a single data source, mainly due to concern over data consistency. The data used for model development may inevitably contain noise or errors. For instance, Aldous (2013) highlighted that the Noon Report is a low-resolution database with data recorded by the crew at 24-h intervals and a high risk of incorrect 24-h average values being recorded [22]. Such errors and noise in the data may cause uncertainty when developing the model. Some erroneous and missing data in the Noon Report can be processed by data mining methods, such as k-means and outlier-score-based methods [23]. It is also possible to combine AIS and Noon Report data (i.e., multiple data sources) to improve the quality of shipping data [24]. Nevertheless, the uncertainties associated with erroneous and noisy data cannot be eliminated and should be considered in the modeling. Some uncertainty quantification techniques can be used to consider the impact of uncertainty [25]. One of the most commonly used techniques for quantifying uncertainty in data-driven regression models is the Bayesian framework. Wright proposed to consider the input uncertainty in the BPNN using a Bayesian method [26]. However, that work only provided the general framework to account for the input uncertainty when using the BPNN. The evaluation of ship mitigation measures usually involves different data sources and heterogeneous uncertainties. The use of the BPNN to evaluate ship mitigation measures, taking into account the uncertainties of different data sources and parameters, still needs a comprehensive study.
In this paper, a Bayesian neural network approach is proposed to predict ship energy savings and mitigation potentials of operational measures through a combined BPNN and Bayesian method. Furthermore, the predicted results are incorporated into the mitigation measures' ranking and selection, based on the marginal cost-effectiveness. The advantages of the proposed approach over existing methods are threefold. First, multiple available data sources are combined to develop the proposed neural network model to improve the robustness of the model when handling different data sources. Next, the proposed approach is able to consider the heterogeneous uncertainties in the model variables and input data based on the Bayesian framework. The evaluation results are more reliable by accounting for these input uncertainties. Last, the proposed approach is able to evaluate the "marginal cost-effectiveness" of mitigation measures based on cost and emissions, which can provide an important reference for decision making.
Ship Energy System
Using a typical chemical tanker as an example, a ship can be represented as a complex energy system, as shown in Figure 1. In this energy system, the ultimate source of energy is from the chemical energy released through the combustion of fuel. The main engine, auxiliary engine, and boiler are three fuel-related devices that interact with the other shipboard equipment to provide various energies for the entire ship. The daily driving of the ship mainly depends on the power provided by the main engine. It transmits the power to the propeller through a gearbox. The power generated by the auxiliary engine drives the generator set; the electricity generated by the generator set is used throughout the ship. The boiler is mainly responsible for providing thermal energy [27].
The complexity of the energy system lies in the many devices involved in maintaining the energy required by the ship. The equipment may be affected by certain factors during operation to affect fuel consumption. Mitigation measures are usually proposed for some marine equipment based on these influencing factors to reduce energy consumption. Sometimes there are also interactions between ship equipment, which may affect the potential of mitigation measures [28]. From a system perspective, selecting more influencing factors can help evaluate the impact of mitigation measures on fuel consumption more comprehensively.
According to the analysis of IMO and the availability of the factors, four operational mitigation measures are considered in this article, including speed reduction (10%), draft optimization, trim optimization, and weather routing. Among these measures, the main engine's fuel consumption is directly related to vessel speed, which is considered an influencing factor. The draft is defined as the ship's depth entering the water, which affects the ship's resistance and thus, the energy consumption. The difference between the ship's aft and forward drafts leads to the existence of trim. The trim has different effects on the maneuverability and the vessel speed under different ship states. Therefore, the drafts and trim are considered as two influencing factors. In addition to the above factors, the conditions of wind speed, wind direction, wave height, and wave direction are considered as four influencing factors on the energy consumption of weather routing. This paper aims to develop a model to study the impact of the seven influencing factors on ship fuel consumption under uncertainty, so as to estimate the potential of corresponding mitigation measures.
Data Sources
A chemical tanker was used as the reference ship for this study. The tanker had two main engines and two auxiliary engines. The length and width of the tanker were 181 and 31.3 m, respectively. The maximum capacity was 51,000 m³, and the maximum draft was 12.4 m. The data on the seven factors and the fuel consumption used for estimating mitigation measures were recorded in multiple data sources. The AIS, the Noon Report, the weather report, and onboard measurement data of this tanker were collected from January 2017 to March 2018. Different data sources record data in different ways, and some data sources may have a certain degree of uncertainty. For instance, the Noon Report contains records of ship working conditions entered manually by the crew during navigation; human error is inevitable in the Noon Report [22]. The AIS provides a real-time record of static and dynamic data during ship navigation through a global positioning system. The accuracy of AIS data is slightly higher than that of Noon Report data [29]. However, both have data uncertainty, as do the weather reports and onboard measurements.
This paper combines shipping data of the same ship and the same period from multiple data sources and extracts the parameters required by the neural networks to evaluate mitigation measures. On the one hand, the quality of the data can be improved; on the other hand, a neural network model trained on multiple data sources is more effective. However, uncertainty remains in the merged data, and parameters not required by the model have to be filtered out. For the combined data, each data source recorded the data at different intervals, so the average of the daily data from the multiple data sources was used for the model parameters.
BPNN Modeling Without Uncertainty Analysis
BPNN is a multi-layer feedforward neural network trained according to the error backpropagation algorithm, which includes forward and backward propagation, and whose simple topology can adapt to almost any nonlinear relationship [30,31]. It has good parallelism and can map the relationship between multiple inputs and outputs at the same time. Neural networks are also suitable for processing large amounts of data. For the evaluation of mitigation measures, there is usually a large amount of data from different data sources. The goal was to develop a BPNN to represent the relationship between mitigation measures and ship energy consumption. Therefore, the corresponding factors of the four mitigation measures were taken as inputs to the model, and the ship fuel consumption was the output of interest. Let x = {x_1, x_2, ..., x_d} denote the d-dimensional corresponding factors of the considered mitigation measures, such as the vessel speed for the speed-reduction measure, and let y denote the output of interest.
BPNN consists of an input layer, some hidden layers, and an output layer. Every layer has some neurons. Here, the input layer has d neurons that represent the d-dimensional input factors, and the output layer has one neuron that represents the output of interest. The number of neurons in the hidden layers has to be determined. There are weights between neurons in adjacent layers, and each layer has a bias to improve the network's fitting capabilities. During the forward propagation of the neural network, each hidden layer has inputs and outputs. The weighted sum of the previous layer's outputs is taken as the input of the next layer. For the input layer, the weighted sum of the input factors is taken as the first hidden layer's inputs. Let z_i^l denote the output of the i-th neuron in the l-th layer, and a_j^{l+1} denote the input of the j-th neuron in the (l+1)-th layer. Then, the relationship between the output of the l-th layer and the input of the (l+1)-th layer can be represented as a_j^{l+1} = sum_{i=1}^{h_l} w_{ij}^{l+1} z_i^l + b_j^{l+1} (Equation (1)). Here, i ∈ {1, 2, ..., h_l} and j ∈ {1, 2, ..., h_{l+1}}, where h_l denotes the number of neurons in the l-th layer; b_j^{l+1} denotes the bias of the j-th neuron of the (l+1)-th layer, which can improve the fit of the data [32]; and w_{ij}^{l+1} is the weight connecting the i-th neuron in the l-th layer and the j-th neuron in the (l+1)-th layer [33]. The weights represent the importance of different factors. The weighted sum of z_i^l is taken as the input a_j^{l+1} of the next layer. When l = 1, z_i^1 = x_i, which denotes the i-th input factor of the input layer.
Each neuron in the hidden layers contains an activation function to ensure that the BPNN can approximate nonlinear relationships. The input a_j^l to the j-th neuron in the l-th layer is passed through the activation function, giving f(a_j^l), which is the output z_j^l of the j-th neuron in the l-th layer. Three activation functions are currently the most commonly used in neural networks: sigmoid, rectified linear unit (ReLU), and tansig [21]. The ReLU is suitable for processing big data and deep neural networks, whereas the tansig and sigmoid functions may suffer from vanishing gradients. However, the study in [21] found that when using the BPNN to predict the relationship between seven influencing factors and energy consumption, a simple three-layer neural network model is enough to ensure prediction performance. Moreover, the sigmoid and tansig activation functions gave higher prediction performance than the ReLU. Since tansig converges faster than sigmoid, it was selected as the activation function in this study to save computing time. The tansig function is given by tansig(a) = 2 / (1 + e^(−2a)) − 1 (Equation (2)).
It should be noted that the output z^l of the l-th layer can be taken as the input of the next layer. However, there is no activation function in the output layer of the neural network for the regression problem: when l is the output layer, y_predict = a^l, which denotes the predicted output value. The forward propagation of the BPNN thus yields the output value of the output layer, that is, the predicted ship energy consumption. Here, a three-layer neural network topology (Figure 2) was drawn with MATLAB software to show the process of neural network forward propagation. The number of hidden layer neurons n_h is determined from the number of input layer neurons n_i and output layer neurons n_o, usually as n_h = sqrt(n_i + n_o) + α (Equation (3)), where the constant α is an integer between 1 and 10.
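To make the forward pass concrete, the sketch below implements Equations (1)-(3) for a single hidden layer with the tansig activation; all variable names (`tansig`, `hidden_neurons`, `forward`) and the random initialization are illustrative assumptions, not the authors' MATLAB code.

```python
import numpy as np

def tansig(a):
    # Hyperbolic-tangent sigmoid, Equation (2): 2 / (1 + exp(-2a)) - 1, identical to tanh(a)
    return 2.0 / (1.0 + np.exp(-2.0 * a)) - 1.0

def hidden_neurons(n_i, n_o, alpha=1):
    # Heuristic of Equation (3): n_h = sqrt(n_i + n_o) + alpha, with alpha between 1 and 10
    return int(round(np.sqrt(n_i + n_o) + alpha))

def forward(x, W1, b1, W2, b2):
    # Equation (1): each layer input is the weighted sum of the previous layer's outputs plus a bias
    a_hidden = W1 @ x + b1            # inputs to the hidden-layer neurons
    z_hidden = tansig(a_hidden)       # outputs of the hidden-layer neurons
    return W2 @ z_hidden + b2         # linear output layer (no activation for regression)

# Example: 7 input factors (speed, draft, trim, wind and wave conditions) -> 1 output
rng = np.random.default_rng(0)
n_i, n_o = 7, 1
n_h = hidden_neurons(n_i, n_o)
W1, b1 = rng.normal(0, 0.1, (n_h, n_i)), np.zeros(n_h)
W2, b2 = rng.normal(0, 0.1, (n_o, n_h)), np.zeros(n_o)
print(forward(rng.normal(size=n_i), W1, b1, W2, b2))
```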
With the predicted value y_predict obtained from the output layer, a loss function g(w, b, x, y) is defined to calculate the error between the predicted output y_predict and the observed output y. If the error is not within the expected range, the BPNN performs the backpropagation algorithm with the gradient descent method to readjust the weights and biases until the minimum error is obtained [34]. Various loss functions, such as the mean square error (MSE) and the root mean square error, have been used as error indicators of the network [20]. Here, the most widely used loss function, the MSE, was used: g(w, b, x, y) = (1/n) Σ (y_predict − y)² (Equation (4)), where n denotes the number of observations. The backpropagation algorithm corrects the weights from the output layer to the input layer until it finds a weight vector that minimizes the loss function g(w, b, x, y). Sometimes multiple adjustments are needed to obtain the minimum error, or the minimum error may not be found. When creating a neural network in a program, an iteration parameter is added as a stopping criterion for the backpropagation algorithm; it indicates the number of weight adjustments from the output layer to the input layer. The goal error is another stopping criterion. Generally, the goal error value and the maximum number of iterations are set in the program.
If the value of g(w, b, x, y) is found within the goal error range, or the maximum number of iterations is reached, the program will stop, and running information, such as iterations, error, and gradient, can be shown. In a neural network model, gradient and error can indicate the model's prediction performance, and iterations can display the model's computing performance.
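A minimal training loop with the two stopping criteria just described (goal error and maximum number of iterations) is sketched below. The authors optimized the weights with the conjugate gradient method in MATLAB, so plain gradient descent is used here only as a schematic stand-in; shapes and learning rate are illustrative assumptions.

```python
import numpy as np

def tansig(a):
    return np.tanh(a)  # equivalent to 2 / (1 + exp(-2a)) - 1

def mse(y_pred, y_obs):
    # Equation (4): mean squared error over the n observations
    return np.mean((y_pred - y_obs) ** 2)

def train(X, y, W1, b1, W2, b2, lr=0.01, goal_error=0.01, max_iter=1000):
    """X: (n, d) input factors; y: (n, 1) observed fuel consumption.
    W1: (n_h, d), b1: (n_h,), W2: (1, n_h), b2: (1,)."""
    err = np.inf
    for it in range(max_iter):                       # maximum-iterations stopping criterion
        A1 = X @ W1.T + b1                           # forward pass
        Z1 = tansig(A1)
        y_pred = Z1 @ W2.T + b2
        err = mse(y_pred, y)
        if err <= goal_error:                        # goal-error stopping criterion
            break
        d_out = 2.0 * (y_pred - y) / len(y)          # dMSE/dy_pred
        dW2, db2 = d_out.T @ Z1, d_out.sum(axis=0)
        d_hid = (d_out @ W2) * (1.0 - Z1 ** 2)       # tansig'(a) = 1 - tansig(a)^2
        dW1, db1 = d_hid.T @ X, d_hid.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1               # gradient-descent weight update
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2, err
```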
Bayesian Neural Network Modeling with Uncertainty Analysis
The Bayesian neural network (BNN) is a Bayesian method for neural network modeling. Compared with the BPNN, the BNN takes into account the uncertainty in the model. Wright [35] proposed a Bayesian approach that considered the uncertainty of the model and input data by analyzing the posterior of the prediction. For the evaluation of mitigation measures, there was uncertainty in both inputs and outputs. For instance, the vessel speed was taken as an input factor to the model. The average speed is usually taken as the input value for a specific voyage corresponding to the observed fuel consumption. However, the speed during this voyage changed frequently, and there may be uncertainty for the average speed. Besides, the observed fuel consumption can also be uncertain due to the observation error. The uncertainty of both inputs and outputs may influence the prediction accuracy. Therefore, it was essential to consider these uncertainties in the evaluation of mitigation measures. Furthermore, the estimated parameters in the BPNN have uncertainties, such as weights and biases. These uncertainties may also have an impact on prediction performance. Therefore, the uncertainty of parameters also had to be considered. Here, the BNN model was proposed to take into account the uncertainty in the developed model. Let x_D and y_D denote the observed input values (e.g., vessel speed) and the observed output values (e.g., fuel consumption). The total observed data set is D = {x_D, y_D}. Due to the uncertainty of inputs and outputs, the observed data can be further expressed as Equation (5).
where y_D(x) represents the actual fuel consumption for specific input sets, and ε_y represents the observation error for the outputs; x_D represents the expected input values, and e_x represents the noise of the inputs. The distributions of ε_y and e_x can be assessed from the observed data. In most cases, it is reasonable to assume that ε_y and e_x follow normal distributions with zero means [22]. Specifically, ε_y ∼ N(0, σ_y²) and e_x ∼ N(0, σ_x²).
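Equation (5) itself is not reproduced in this extraction. Based on the description above, a plausible reconstruction of the observation model is

y_D = y_D(x) + \varepsilon_y, \qquad x = x_D + e_x, \qquad \varepsilon_y \sim \mathcal{N}(0, \sigma_y^2), \quad e_x \sim \mathcal{N}(0, \sigma_x^2),

i.e., the observed output is the true response corrupted by observation error, and the actual input deviates from its expected value by additive Gaussian noise.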
The developed BPNN was used to predict fuel consumption, given the input sets (Section 3.2.1). Let y(x, w) denote the predicted output at input x with weight w. Then, the expected prediction concerning x can be obtained by Equation (6).
The expectation of the output is characterized by the prediction distribution of the BPNN, p(y | x). For noisy inputs it is also required to obtain the distribution of the noise process, p(x̃ | x), and the prior over the input, p(x). With this, the expected prediction y* at any new noisy input x̃* can be expressed as Equation (7).
P(y* | x̃*, D) is the prediction distribution of the output given a new noisy input x̃*. Let x* denote the corresponding expected input. Then, the posterior of the output can be rewritten as Equation (8).
The prediction distribution of the output given a new expected input is P(y* | x*, D) = ∫ P(y* | x*, w) P(w | D) dw. Using Bayes' rule, the posterior of the weights is P(w | D) = ∫ P(w, x_D | y_D, x_D) dx_D = ∫ P(x_D, w | D) dx_D. Therefore, the posterior distribution can be denoted as Equation (9).
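Equations (7)-(9) are not reproduced here. Following Wright's noisy-input formulation as summarized in the next paragraph, a plausible reconstruction is

P(y^* \mid \tilde{x}^*, D) = \iint P(y^* \mid w, x^*)\, p(x^* \mid \tilde{x}^*)\, P(w \mid D)\, \mathrm{d}x^*\, \mathrm{d}w, \qquad P(w \mid D) = \int P(x_D, w \mid D)\, \mathrm{d}x_D,

so that

P(y^* \mid \tilde{x}^*, D) = \iiint P(y^* \mid w, x^*)\, p(x^* \mid \tilde{x}^*)\, P(x_D, w \mid D)\, \mathrm{d}x_D\, \mathrm{d}w\, \mathrm{d}x^*,

where \tilde{x}^* denotes the observed (noisy) new input and x^* the underlying expected input; this matches the three integrations (over w, x_D, and x^*) described below.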
It can be seen that three items are integrated out in Equation (9), including w, x D , and x * . The integration over w is to consider the uncertainty in the weights. The integration over x D is to consider the training of the network using uncertain input data. The integration over x * indicates that the model allows new input data to be noisy [26].
In case the input data are noiseless, the expected output can be obtained directly by calculating the expectation of the output: E(y* | x*, D) = ∫ y* P(y* | x*, D) dy* = ∫∫∫ y* P(y* | w, x*) P(x_D, w | D) dw dx_D dy*. Therefore, the expected output can be obtained whether the input data are noisy or noiseless. However, it is difficult to carry out all of these integrations in closed form. Numerical integration methods are often used to solve such complicated computational problems [36]. In this paper, the Markov Chain Monte Carlo method was used for the numerical integration [37].
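The paper states that the integrals are evaluated by Markov Chain Monte Carlo but gives no implementation details. The sketch below shows one crude Monte Carlo approximation of the predictive mean, assuming posterior weight samples are already available (e.g., from an MCMC sampler) and that the latent true input is drawn around the observed one with Gaussian noise; `predict_fn` and `weight_samples` are assumptions, not the authors' code.

```python
import numpy as np

def predictive_mean(x_star_obs, weight_samples, predict_fn, sigma_x, n_noise=200, seed=0):
    """Monte Carlo estimate of E[y* | observed noisy input, D].

    weight_samples : iterable of weight sets drawn from P(w | D), e.g. by MCMC (assumed available)
    predict_fn     : function (x, w) -> network prediction at input x under weights w
    sigma_x        : per-dimension standard deviation of the input noise e_x
    """
    rng = np.random.default_rng(seed)
    preds = []
    for w in weight_samples:
        # approximate the integral over the latent true input by sampling e_x ~ N(0, sigma_x^2)
        noise = rng.normal(0.0, sigma_x, size=(n_noise, np.size(x_star_obs)))
        preds.extend(predict_fn(x_star_obs + e, w) for e in noise)
    return float(np.mean(preds))
```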
Mitigation Potential Evaluation Using BPNN and BNN
The models in this article were realized in MATLAB using in-house code. Before developing neural networks to assess the impact of the input factors on ship fuel consumption, the performance of the BPNN and BNN had to be compared to obtain a better model architecture for the evaluation of mitigation measures. Generally, the models have some parameters that need to be set first. The model parameters were trained on the training set, which consisted of the data from 2017. The models were further validated using the validation set, which consisted of the remaining data from 2018, and the validated models were used for prediction. The parameters were set as follows. The activation function and loss function were the tansig and the MSE, respectively, as described in Section 3.2.1. The weights and biases were optimized by the conjugate gradient method. The study in [38] indicated that initializing the weights and biases to small values can help neural networks learn nonlinear relationships, so they were randomly initialized from a Gaussian distribution with mean 0 and variance 0.1. The maximum number of iterations for the network was set to 1000, and the goal error value was set to 0.01. The BPNN and BNN models were then developed with an input layer, a hidden layer, and an output layer. The input layer had seven neurons corresponding to the seven input factors. The output layer had one neuron, which represented the output of interest. The number of neurons for the hidden layer had to be determined; according to Equation (3), hidden layers with 1 to 12 neurons were evaluated. Based on Equation (5), the input factors and the output in the data sets were assumed to follow normal distributions, with the mean and variance estimated from the observed data. To improve the computing performance of the models, the input parameters were scaled in MATLAB by the mean and standard deviation method, and the corresponding output values were then unscaled.
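A sketch of the structure search described above: candidate hidden-layer sizes from 1 to 12 are compared by validation MSE, with the validation data scaled by the training statistics. The helper `train_and_validate` (wrapping a training loop such as the one sketched earlier) is a hypothetical placeholder, not a routine from the authors' code.

```python
import numpy as np

def standardize(X, mean=None, std=None):
    # mean-and-standard-deviation scaling of the input factors
    mean = X.mean(axis=0) if mean is None else mean
    std = X.std(axis=0) if std is None else std
    return (X - mean) / std, mean, std

def select_hidden_size(X_train, y_train, X_val, y_val, train_and_validate, max_hidden=12):
    """Pick the hidden-layer size with the smallest validation MSE.

    train_and_validate(n_h, X_train, y_train, X_val, y_val) -> validation MSE
    is a hypothetical helper, e.g. wrapping the gradient-based training loop.
    """
    Xt, mu, sd = standardize(X_train)
    Xv, _, _ = standardize(X_val, mu, sd)    # scale validation data with training statistics
    results = {n_h: train_and_validate(n_h, Xt, y_train, Xv, y_val)
               for n_h in range(1, max_hidden + 1)}
    best = min(results, key=results.get)     # e.g. 3 hidden neurons in the case study (7-3-1)
    return best, results
```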
The performance of the BPNN and BNN with different numbers of neurons could be compared by MSE values to find the best model structure. Also, the performance of the BPNN and BNN with the best structure was further compared with the GP model, which made it easy to account for various uncertainties [16]. The GP model proposed by [39] was applied here. The 95% confidence interval of prediction for three models was also computed, and the probability that the observed fuel consumption was within the 95% confidence interval of predicted fuel consumption could be obtained. Finally, an optimal model accounting for the uncertainties was developed to evaluate the impact of selected factors on fuel consumption. It predicted the energy savings and emission reductions of different measures to estimate the mitigation potential of each measure.
Cost-Effectiveness Evaluation
The marginal cost-effectiveness (MCE) criterion proposed in [40] was applied to analyze the cost-effectiveness of the mitigation measures. Although the mitigation measures were considered to save energy and reduce emissions, it was necessary to evaluate the costs of the corresponding scenarios to ensure that the mitigation measures can achieve positive benefits. The core idea of the MCE is to compare the increased implementation costs and emission reductions of different measures in order to rank the mitigation measures. The implementation costs were considered to include the ship's investment costs, the operational costs, the opportunity costs, and the fuel consumption costs. An in-house MATLAB code was used to carry out the cost-effectiveness evaluation; the computational cost and design cost of the code itself were ignored.
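A minimal sketch of the MCE comparison as it is described here and in the results: measures are ordered by emission reduction, and each measure's MCE relative to the next one is the ratio of the difference in implementation cost to the difference in emission reduction, compared against a carbon-price threshold. Field names, the ordering rule, and the threshold logic are illustrative assumptions rather than the exact procedure in [40].

```python
def rank_by_mce(measures, carbon_price=57.0):
    """measures: list of dicts with 'name', 'cost' (US$/yr, can be negative) and
    'reduction' (metric tons CO2/yr). Returns measures in order of decreasing
    emission reduction together with the MCE relative to the next measure."""
    ordered = sorted(measures, key=lambda m: m["reduction"], reverse=True)
    ranked = []
    for current, nxt in zip(ordered, ordered[1:]):
        # additional cost per additional metric ton of emission reduced
        mce = (current["cost"] - nxt["cost"]) / (current["reduction"] - nxt["reduction"])
        # negative MCE: more reduction *and* lower cost -> priority consideration;
        # a positive MCE is still acceptable while it stays below the carbon price
        ranked.append((current["name"], mce, mce < carbon_price))
    ranked.append((ordered[-1]["name"], None, True))   # last measure has no next measure to compare
    return ranked
```

Used with figures like those in Table 5, this ordering would reproduce the pattern reported below: a small positive MCE for speed reduction relative to weather routing, and negative MCEs for weather routing and draft optimization.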
Results and Discussion
The performance of networks with different numbers of neurons in the hidden layer was first evaluated in MATLAB.
The MSE values at convergence of the BPNN and BNN models with different network structures were obtained, as shown in Tables 1 and 2. The three numbers in the network structure represent the number of neurons from the input layer to the output layer. The results show that for both the BPNN and the BNN, the network structure with three neurons in the hidden layer had the minimum MSE value. The corresponding running information (iterations and gradient) shows that this structure also has good computing performance and does not get trapped in local optima. Therefore, the network with the 7-3-1 structure was used for further analysis.
Given the network with the best structure, the performance of the BPNN, which does not take uncertainty into account, and the BNN, which does, was compared with that of the GP model proposed in [39]. Figures 3-5 show the fit of predicted and observed fuel consumption for the three models, using training data and validation data. It can be seen that the predicted values were close to the observed values in most cases for all three models. The average MSE values over 100 replications for the three models, using both training and validation data, were further computed; the results are given in Table 3. It can be seen that the BNN has the smallest MSE, followed by the GP, while the BPNN has the largest MSE. A two-sample t-test showed no significant difference between the BNN and the GP, while the MSEs of both the BNN and the GP are significantly smaller than that of the BPNN. As a smaller MSE means better prediction performance, the BNN and the GP, which take uncertainty into account, have significantly better prediction performance than the BPNN, which does not. The BNN has the best prediction performance, with the smallest MSE; however, its difference from the GP is not significant, which indicates that the two models have similar prediction performance. The probabilities that the observed fuel consumption was within the 95% confidence interval of the predicted fuel consumption using the BPNN, BNN, and GP were then computed: 0.714, 0.925, and 0.953, respectively. The results indicate that the GP has the largest probability of covering the observed fuel consumption, and the BNN has a larger probability than the BPNN.
This is because the prediction using the GP accounts not only for data uncertainty and parameter uncertainty but also for spatial uncertainty; the prediction variance of the GP is usually larger than that of the BNN. It is expected that the BPNN has the smallest probability of covering the observed fuel consumption, as the uncertainties were not taken into account. The developed BNN was further used to assess the mitigation measures.
The annual energy savings of the four measures were then evaluated using the BNN; the predicted average values are given in Table 4. The corresponding emission reductions were also computed, using an emission factor of 3.114 g CO2 per g of fuel adopted from [8]. The results show that the four selected mitigation measures can effectively save ship fuel consumption. Each measure's mitigation potential was also computed based on the annual total fuel consumption, which was 2845 metric tons (MT) in 2017. As a decisive factor affecting ship energy consumption, speed reduction has the largest abatement potential, at 18.47%. Weather routing is also an essential factor in ship energy consumption, with an abatement potential of 2.42%. The abatement potentials for draft optimization and trim optimization are 1.68% and 1.61%, respectively; therefore, draft optimization and trim optimization also have an individual impact on ship energy consumption. Furthermore, the cost-effectiveness of the four mitigation measures was estimated. The increased implementation costs of the four measures were adopted from [28] and are shown in Table 5; negative costs mean that costs can be reduced by implementing the mitigation measure. The marginal cost-effectiveness values (MCEs) of the different measures were then calculated to rank the mitigation measures, as given in Table 5. The MCEs in Table 5 represent the additional implementation costs per unit of energy saved by a measure, compared with its next measure. Here, the carbon price of US$57/metric ton adopted from [41] was taken as the threshold for the ranking of mitigation measures. It was found that the MCE between speed reduction and weather routing is 12.55, indicating that, compared with weather routing, implementing speed reduction adds US$12.55 of cost per additional metric ton of emission reduction. This additional cost is acceptable, as it is smaller than the given carbon price.
Therefore, speed reduction is still the optimal mitigation measure, although the costs saved for this mitigation measure are the smallest. The MCEs between weather routing and draft optimization, as well as draft optimization and trim optimization, are both negative. That is, compared with draft optimization, weather routing can not only reduce more emissions but can also save more costs.
A similar result holds between draft optimization and trim optimization. No other measures need to be compared further in this paper, so there is no MCE value for trim optimization.
Conclusions
In this paper, the BNN model, a Bayesian approach to neural network modeling, was built to evaluate four mitigation measures from multiple data sources. Unlike the common BPNN model, the BNN considers the input uncertainty, parameter uncertainty, and output uncertainty in the developed model. Its performance in accounting for these uncertainties was compared with that of the developed BPNN and the validated GP model. The results show that, although the probability of the BNN covering the observed values is slightly lower than that of the GP, it is significantly higher than that of the BPNN, and the BNN has the smallest MSE among the three models. This means that the BNN has the best prediction performance. At the same time, the iteration counts of the three models show that the models have good computing performance. Therefore, the developed BNN model, which considers the uncertainty of data and parameters, is valid and useful for evaluating mitigation measures. These findings provide stronger support for previous research on using neural network modeling to evaluate ship energy consumption.
Based on the BNN model, the energy savings and emission reductions of the different measures were computed to estimate their mitigation potential from an energy perspective. From an economic perspective, the increased costs and marginal cost-effectiveness were computed to evaluate each measure's cost-effectiveness. Finally, the ranking of the different measures from the two perspectives was obtained. A chemical tanker was used as an example of a complex energy system to evaluate the mitigation measures. The results show that all four measures are beneficial in terms of implementation costs while reducing energy consumption. A consistent ranking in mitigation potential and cost-effectiveness was obtained, which can further prioritize the different mitigation measures.
Evaluating the proposed mitigation measures from economic and energy perspectives can provide direction for policymakers developing new mitigation measures. It could also help promote the formulation of national policies and international treaties to realize sustainable development. Ultimately, it can effectively help mitigate climate change and environmental pollution caused by ship gas emissions. However, there are still some areas for improvement in further research. For example, the developed model only considers the chemical tanker, and the adaptability of this method to other types of ships needs to be further verified. Additionally, due to the availability of data sources, only four mitigation measures were considered; more relevant mitigation measures and corresponding influencing factors can be collected as model parameters in future research. Future work could also consider using lower-cost simulation databases to reduce evaluation costs.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,522.8 | 2020-12-15T00:00:00.000 | [
"Engineering"
] |
Density Functional Theory Study on Defect Feature of AsGaGaAs in Gallium Arsenide
We investigate the defect features of the AsGaGaAs defect in gallium arsenide clusters in detail by using first-principles calculations based on density functional theory (DFT). Our calculations reveal that the lowest donor level of the AsGaGaAs defect on the gallium arsenide crystal surface is 0.85 eV below the conduction band minimum, while the lowest donor level of the AsGaGaAs defect inside the gallium arsenide bulk is 0.83 eV below the bottom of the conduction band, consistent with the experimental value of the gallium arsenide EL2 defect level (Ec − 0.82 eV). This suggests that the AsGaGaAs defect is one of the possible gallium arsenide EL2 deep-level defects. Moreover, our results indicate that the formation energies of the internal and surface AsGaGaAs defects are around 2.36 eV and 5.54 eV, respectively, implying that formation of the AsGaGaAs defect within the crystal is easier than at the surface. Our results assist in the discussion of the structure of gallium arsenide deep-level defects and their effect on the material.
Introduction
As an excellent semiconductor material, gallium arsenide is widely used in fast photoelectric devices, integrated circuit substrates [1], and so forth. As a compound semiconductor material, the defect problems of undoped semi-insulating GaAs (SI-GaAs) are more complex than those of silicon and germanium; in particular, the unique deep-level defects in SI-GaAs single crystal material, such as EL2 (Ec − 0.82 eV) and EL6 (Ec − 0.38 eV), have an important influence on the photoelectric characteristics and applications of the material [2-7]. By various theoretical and experimental means, many researchers have studied the microstructures of the gallium arsenide EL2 deep levels. For example, Lagowski et al. put forward the isolated AsGa antisite defect structure [8], Wager and van Vechten put forward the VGaAsGaVGa ternary complex defect structure [9], Zou et al. put forward the AsGaVAsVGa ternary complex defect structure [10,11], and Morrow proposed the possible AsGaVGa defect structures [12]. Wosinski et al. pointed out that EL2 is not an isolated defect [13]. The stable and metastable energy levels of EL2 in semi-insulating GaAs were studied by Kabiraj and Ghosh [14]. A ternary complex defect model of the EL2 defect was studied by first principles by Li et al. [7], and EL2 and EL6 defects and their correlations with clusters have been preliminarily discussed by Zhao and Wu [15]. These results have helped promote the understanding of the features and applications of gallium arsenide materials. On the basis of the above preliminary studies of gallium arsenide clusters and defects [16-19], in this paper the AsGaGaAs defect and its features are studied by first principles based on density functional theory (DFT), which gives another possible microstructure of the gallium arsenide EL2 deep-level defect and offers assistance in the discussion of the defect features of gallium arsenide deep levels and the application of the material.
Computational Methods
Our total energy and electronic structure calculations were carried out within the revised Heyd-Scuseria-Ernzerhof (HSE06) range-separated hybrid functional as implemented in the VASP code [20,21]. In the HSE06 approach, the screening parameter ω = 0.2 Å−1 and the Hartree-Fock (HF) mixing parameter α = 25%, which means 25% HF exchange mixed with 75% exchange from the generalized gradient approximation (GGA) of Perdew, Burke, and Ernzerhof (PBE) [22], were chosen to reproduce well the experimental band gap (~1.43 eV) of GaAs. The core-valence interaction was described by the frozen-core projector augmented wave (PAW) method [23,24]. The electronic wave functions were expanded in a plane wave basis with a cut-off of 300 eV. A 3 × 3 × 3 k-point mesh within the Monkhorst-Pack scheme [25] was applied to the Brillouin-zone integrations in the total energy calculations. The internal coordinates in the defective supercells were relaxed to reduce the residual force on each atom to less than 0.02 eV·Å−1. All defect calculations were spin-polarized. In the calculations, the defect structure models under various conditions were first optimized, a static self-consistent calculation was then carried out for the ground-state structure, and finally the corresponding band structure and density of states (DOS) were obtained.
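For reference, the standard HSE06 exchange-correlation energy with the mixing and screening parameters quoted above (this expression is textbook material, not reproduced from the paper) reads

E_{xc}^{\mathrm{HSE06}} = \alpha\, E_x^{\mathrm{HF,SR}}(\omega) + (1-\alpha)\, E_x^{\mathrm{PBE,SR}}(\omega) + E_x^{\mathrm{PBE,LR}}(\omega) + E_c^{\mathrm{PBE}},

with the mixing fraction \alpha = 0.25 and the range-separation (screening) parameter \omega = 0.2 Å^{-1}, i.e., short-range exact exchange is mixed into PBE while long-range exchange and correlation remain at the PBE level.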
Results and Discussions
3.1. The Properties of Perfect GaAs. Figure 1(a) shows the perfect GaAs supercell structure model. Figure 1(b) shows the calculated band structure and total density of states (DOS) of the perfect GaAs supercell structure model.
As can be seen from Figure 1(b), the perfect GaAs has a direct band gap structure, and its band gap is 1.5 eV at the Γ point, which is very close to the experimental value (1.43 eV), implying that the selected parameters are reasonable.
3.2. Surface Doping with the AsGaGaAs Defect. Figure 2(a) shows the supercell structure model of the GaAs (001) surface doped with the AsGaGaAs complex defect. The crystal thickness and the vacuum thickness are 8.24 Å and 20.00 Å, respectively. The lengths of the base vectors A and B are both 15.99 Å, whereas the length of the base vector C is 28.24 Å. Furthermore, the vector angles α, β, and γ are all 90°. Figure 2(b) shows the band structure and the total DOS of the GaAs (001) surface AsGaGaAs complex defect model.
As can be seen from Figure 2(b), the lowest donor defect level of the AsGaGaAs defect on the gallium arsenide crystal surface is 0.85 eV below the bottom of the conduction band, consistent with the experimental value of the gallium arsenide EL2 defect level (Ec − 0.82 eV), which suggests that the AsGaGaAs defect is one of the possible microstructures of the gallium arsenide EL2 deep-level defects. The formation energy of the surface AsGaGaAs defect is 5.54 eV.
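The formation energies quoted here and below are presumably obtained as total-energy differences. For reference (not an equation reproduced from this paper), the standard expression for the formation energy of a neutral defect X is

E_f[X] = E_{\mathrm{tot}}[X] - E_{\mathrm{tot}}[\mathrm{perfect}] - \sum_i n_i \mu_i,

where n_i is the number of atoms of species i added (n_i > 0) or removed (n_i < 0) and \mu_i the corresponding chemical potential. For the AsGaGaAs antisite pair the atom counts are unchanged, so the chemical-potential terms cancel and the formation energy reduces to the total-energy difference between the defective and perfect supercells; charged defects would require an additional term q(E_F + E_{\mathrm{VBM}}) together with finite-size corrections.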
One can note that the existence of the AsGaGaAs defect changes the band structure and the total DOS of GaAs. This leads to the formation of dangling bonds between the neighbors of the defect. As a result, the matching surface states can exchange their positions with holes and electrons of the gallium arsenide material, which directly affects the photoelectric properties of GaAs materials.
3.3. Internal Doping with the AsGaGaAs Defect. The supercell structure model of the internal deep-layer doped AsGaGaAs defect is displayed in Figure 3(a); the distance between the AsGaGaAs defect and the upper interface is 5.65 Å. Figure 3(b) shows the supercell structure model of the internal shallow-layer doped AsGaGaAs defect, for which the distance between the AsGaGaAs defect and the upper interface is 2.83 Å. Figures 4(a) and 4(b) show the corresponding band structure and total DOS, respectively. As can be seen from Figure 4, the band structure and DOS of the internal AsGaGaAs defect are insensitive to its position. The lowest donor defect level is 0.83 eV below the bottom of the conduction band, consistent with the experimental value of the gallium arsenide EL2 defect level (Ec − 0.82 eV). This suggests that the internal AsGaGaAs defect is one of the possible gallium arsenide EL2 deep-level defects. Meanwhile, compared with the band structure of perfect GaAs shown in Figure 1(b), it adds donor and acceptor defect levels, changes the total DOS of the material, and directly affects the photoelectric properties of GaAs materials. The results indicate that the formation energy of the internal AsGaGaAs defect is 2.36 eV, showing a position-independent character. Note that the internal AsGaGaAs defect is more stable than the surface one, suggesting that formation of AsGaGaAs deep-level defects within the crystal is easier than on the surface.
Conclusions
In this paper, we have investigated the AsGaGaAs deep-level defect in gallium arsenide by using first-principles calculations based on hybrid density functional theory. Our results show that the lowest donor defect level on the gallium arsenide surface is 0.85 eV below the bottom of the conduction band, while the lowest donor defect level of the AsGaGaAs defect inside the gallium arsenide crystal is 0.83 eV below the bottom of the conduction band, consistent with the experimental value of the gallium arsenide EL2 defect level (Ec − 0.82 eV). The AsGaGaAs defect is thus one of the possible microstructures of the EL2 deep-level defects in gallium arsenide. We also found that the band structure and density of states of the internal AsGaGaAs defect do not depend on its position, and that the formation energy of the internal AsGaGaAs defect (2.36 eV) is smaller than that of the defect on the surface, suggesting that formation of AsGaGaAs deep-level defects within the crystal is relatively easier than on the surface. The existence of the AsGaGaAs defect adds donor and acceptor defect levels, changes the total DOS of the material, and causes the atoms around the defect to form dangling bonds. Consequently, the resulting surface states can exchange their positions with holes and electrons of the gallium arsenide material, which directly affects the photoelectric properties of GaAs materials.
Figure 1: (a) Supercell structure model of perfect GaAs. (b) Band structure and the total DOS of perfect GaAs. The Fermi energy is set to zero.
Figure 2: (a) Supercell structure model of the surface double antisite AsGaGaAs defect. (b) The band structure and the total DOS of the GaAs (001) surface with the double antisite AsGaGaAs defect.
Figure 3: Supercell structure models of the internal double antisite AsGaGaAs defect. (a) The distance between the AsGaGaAs defect and the upper interface is 5.653 Å. (b) The distance between the AsGaGaAs defect and the upper interface is 2.827 Å.
Figure 4: The band structure and the total DOS of GaAs with the internal double antisite AsGaGaAs defect. (a) The distance between the deep AsGaGaAs defect and the upper boundary is 5.653 Å. (b) The distance between the shallow AsGaGaAs defect and the upper boundary is 2.827 Å.
"Materials Science"
] |
The qEEG Signature of Selective NMDA NR2B Negative Allosteric Modulators; A Potential Translational Biomarker for Drug Development
The antidepressant activity of the N-methyl-D-aspartate (NMDA) receptor channel blocker, ketamine, has led to the investigation of negative allosteric modulators (NAMs) selective for the NR2B receptor subtype. The clinical development of NR2B NAMs would benefit from a translational pharmacodynamic biomarker that demonstrates brain penetration and functional inhibition of NR2B receptors in preclinical species and humans. Quantitative electroencephalography (qEEG) is a translational measure that can be used to demonstrate pharmacodynamic effects across species. NMDA receptor channel blockers, such as ketamine and phencyclidine, increase the EEG gamma power band, which has been used as a pharmacodynamic biomarker in the development of NMDA receptor antagonists. However, detailed qEEG studies with ketamine or NR2B NAMs are lacking in nonhuman primates. The aim of the present study was to determine the effects on the qEEG power spectra of the NR2B NAMs traxoprodil (CP-101,606) and BMT-108908 in nonhuman primates, and to compare them to the NMDA receptor channel blockers, ketamine and lanicemine. Cynomolgus monkeys were surgically implanted with EEG radio-telemetry transmitters, and qEEG was measured after vehicle or drug administration. The relative power for a number of frequency bands was determined. Ketamine and lanicemine increased relative gamma power, whereas the NR2B NAMs traxoprodil and BMT-108908 had no effect. Robust decreases in beta power were elicited by ketamine, traxoprodil and BMT-108908; and these agents also produced decreases in alpha power and increases in delta power at the doses tested. These results suggest that measurement of power spectra in the beta and delta bands may represent a translational pharmacodynamic biomarker to demonstrate functional effects of NR2B NAMs. The results of these studies may help guide the selection of qEEG measures that can be incorporated into early clinical evaluation of NR2B NAMs in healthy humans.
Introduction
Major Depressive Disorder (MDD) is a heterogeneous condition characterized by the core symptoms of pervasive, sustained, low mood and/or loss of interest in the environment accompanied by a constellation of other symptoms involving alterations in sleep, appetite, energy level, psychomotor function and cognition [1]. The monoamine system has been the focus of MDD research for many years and although numerous antidepressant drugs of this class are available, MDD symptoms are poorly treated in most patients. Limitations of monoaminergic antidepressants include a delayed onset of action and a large population who show either modest or no treatment response (30-53%) [2]. Therefore, there is a high demand for novel MDD therapeutics. In recent years, the glutamatergic system, including N-methyl-D-aspartate (NMDA) receptors, has been an area of research interest in the neurobiology of MDD [3,4]. Berman et al. (2000) were the first to publish a small (n = 7) novel, proof-of-concept clinical trial showing the non-selective NMDA receptor antagonist ketamine had rapid anti-depressant effects in patients with treatment-resistant depression (TRD) [5]. Zarate et al. (2006) confirmed these findings by demonstrating the robust, rapid (within 2 hrs) and sustained (more than 1 week) anti-depressant effects of a single dose of ketamine (0.5 mg/kg, i.v. infusion for 40 min) in TRD patients [6].
Several potential issues are associated with non-selective NMDA receptor channel blockers such as ketamine, including psychotomimetic or dissociative effects, cognitive impairment and abuse potential [7,8]. However, it has been suggested that directly targeting the NMDA receptor complex by using low-affinity receptor antagonists or by modulating NMDA receptor subtypes may bring about rapid antidepressant effects with reduced adverse effects [4]. Proof of concept for these approaches has been achieved in the clinic with lanicemine (AZD-6765), a low-trapping NMDA channel blocker, and traxoprodil (CP-101,606), a negative allosteric modulator (NAM) that selectively inhibits the NR2B subtype of NMDA receptors. In preliminary clinical trials, lanicemine showed antidepressant efficacy with a reduced side-effect profile relative to ketamine [9]. However, in a larger trial of antidepressant efficacy, lanicemine did not differ from placebo although the agent was still well tolerated [10]. IV infusion of traxoprodil also exhibited ketamine-like efficacy in TRD patients with the advantage of reduced adverse effects [11]. Sanacora et al. (2013) demonstrated the utility of qEEG as a clinical biomarker to demonstrate pharmacodynamic effects of ketamine and lanicemine [12]. Both ketamine and lanicemine modulate the gamma power band of the qEEG in rats and humans, and the plasma concentrations affecting qEEG measures were similar to those demonstrating antidepressant efficacy. For lanicemine, the qEEG biomarker was useful in confirming the pharmacodynamic effects at NMDA receptors and guiding dose selection. In addition to dose selection, pharmacodynamic biomarkers aid interpretation of negative results. For instance, should a clinical dose have no therapeutic effect and also no effect on the pharmacodynamic biomarker, it is clear that the therapeutic hypothesis has not been tested. On the other hand, if a clinical dose has the anticipated effect on the pharmacodynamic biomarker and still has no therapeutic effect, this suggests that the mechanism was adequately tested but found to be ineffective.
Subanesthetic doses of NMDA receptor channel blockers such as ketamine, phencyclidine or MK-801 produce very similar changes in qEEG measures in rodents and humans, whether in basal power spectra or evoked potential procedures [13,14]. The effects of NMDA antagonists on evoked potentials typically align well across rodents, nonhuman primates and humans, an indication of the cross-species translatability of these measures [15-19]. However, there is a paucity of data describing the effects of NMDA antagonists, and in particular ketamine, on qEEG power spectra in nonhuman primates. Most early evaluations of ketamine used high doses in the anesthetic range [20]. In addition, most early reports did not analyze power spectra across a number of clinically relevant power bands. Similarly, there are few reports of the qEEG effects of NR2B selective NAMs in rodents [13,14], and none to date in nonhuman primates. The primary goal of these studies was to determine if NR2B NAMs produce changes in qEEG spectra that may be useful as pharmacodynamic biomarkers. Therefore, the qEEG effects of two selective NMDA NR2B NAMs, traxoprodil and BMT-108908 (Fig 1), were studied in cynomolgus monkeys. To provide further insight into the qEEG effects of NMDA receptor channel blockers, ketamine and lanicemine were also examined. The results of these studies may help guide the selection of qEEG measures for pharmacodynamic biomarker studies in humans as well as provide insight into the effects of different classes of NMDA receptor antagonists in vivo.
Subjects
Six cynomolgus monkeys (Macaca fascicularis), ~5-7 years of age and weighing 5.0-8.5 kg, served as subjects for the qEEG studies. Each had been used previously in pharmacological studies, although there was a drug-free period of at least one month prior to these studies. Monkeys were typically pair-housed with compatible conspecifics, but separated during the work day (Monday-Friday) and re-paired after afternoon feeding. Monkeys were housed in standard stainless-steel macaque cages, either 83x71x83 cm or 79x79x83 cm (W x L x H), with a pair sharing two adjoining cages. Colony rooms housed 12-16 cynomolgus monkeys, and the monkeys remained in the colony room except for weekly EEG test sessions. Subjects were fed standard monkey chow (Harlan Teklad Global 20% protein Primate Diet 2050). Water was continuously available except during qEEG testing, and fresh fruit was provided twice weekly. Toys and foraging devices were routinely provided and television programs were available in the colony rooms. Subjects were fitted with metal neck collars (Primate Products, Immokalee, FL). Laboratory animal care was in accordance with the Public Health Service Policy on the Humane Care and Use of Laboratory Animals and the Guide for the Care and Use of Laboratory Animals (2011). The protocol was approved by the Animal Care and Use Committee of the Bristol-Myers Squibb Company. Subjects were surgically implanted under isoflurane anesthesia (2-4%) with radio-telemetry transmitters (Konigsberg Instruments, Inc., Monrovia, CA). Vital signs were monitored throughout the surgery (e.g. heart rate, expired CO2 percentage, pulse oxygenation, body temperature, etc.) to ensure appropriate anesthesia and the health of the animal. The transmitters were attached to two pairs of EEG leads placed over the dura: one set with leads over the frontal cortex (with a reference over parietal cortex; roughly F6-P6 in the 10-20 system) and the second set with leads over the auditory cortex (roughly C6-CP6 in the 10-20 system). Buprenorphine (0.01-0.03 mg/kg) was administered IM post surgery and then continued BID for 2-3 days. Animals were observed daily, and analgesia was extended if the animal exhibited signs of discomfort. After full recovery (approximately 3-4 weeks), the implanted radio-telemetry device was tested to ensure a good EEG signal, including auditory evoked potentials to a 1 ms click evidenced over both derivations (data not shown). The data for this study were collected from the F6-P6 derivation for 5 monkeys. Artifact in the F6-P6 derivation of one monkey forced use of the C6-CP6 derivation; the patterns of qEEG changes did not differ between the C6-CP6 monkey and the other monkeys (data not shown).
qEEG recording sessions
Animals were comfortably seated in a primate chair and acclimated to a quiet testing chamber outfitted with an antenna to receive telemeter signals and a camera to monitor the animal during the recording session. qEEG recording sessions included either intramuscular (IM) or intravenous (IV) drug administration. For IM studies, animals were placed in the test chamber, and after a 20 min baseline recording, animals were treated with either vehicle or drug and the qEEG measured for a further 90 min. For IV studies, prior to the start of qEEG baseline recordings, a catheter was placed in the saphenous vein and taped securely. After the baseline recording, the treatment (drug or vehicle) was administered and the catheter flushed with saline. The catheter was removed at the end of the test session. Animals were typically tested once weekly.
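The paper does not state how power spectra were computed from the telemetered EEG. The sketch below shows one common approach (Welch's method) for turning a one-minute epoch into relative band powers; the sampling rate and all band edges other than gamma (30-55 Hz) and beta 1 (13-19 Hz), which appear in the figure captions, are assumptions.

```python
# Hedged sketch of relative band-power computation for one EEG channel.
import numpy as np
from scipy.signal import welch

FS = 500.0  # Hz, assumed sampling rate (not stated in the paper)
BANDS = {   # Hz; gamma and beta 1 edges follow the figure captions, the rest are assumed
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta1": (13, 19), "beta2": (19, 30), "gamma": (30, 55),
}

def relative_band_power(eeg_1min: np.ndarray) -> dict:
    """Relative power (band power / total power) for a one-minute epoch."""
    freqs, psd = welch(eeg_1min, fs=FS, nperseg=int(4 * FS))  # 4 s segments
    in_range = (freqs >= 1) & (freqs <= 55)
    total = np.trapz(psd[in_range], freqs[in_range])
    rel = {}
    for name, (lo, hi) in BANDS.items():
        sel = (freqs >= lo) & (freqs < hi)
        rel[name] = np.trapz(psd[sel], freqs[sel]) / total
    return rel
```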
The relative power for each frequency band (power in a band / overall total power) was determined as the average value from one-min bins across the 90 min session. These data are provided as supplementary information (S1 Data). Area under the curve (AUC) over the entire 90 min period for each power band was calculated from the time course data and analyzed by 1-way RM ANOVA followed by Holm-Sidak post-hoc tests comparing vehicle to each dose of compound (Prism v. 6.0, Graphpad Software Inc., La Jolla CA). Visual inspection of the qEEG relative power time course plots revealed a slower onset of activity for lanicemine relative to the other drugs tested (most prominent in Fig 2). Therefore, a 15 min 'pretreatment time' was imposed on the analysis of lanicemine's effects and AUCs were calculated for 16-90 min, analyzed by RM ANOVA, and used in Cohen's d estimates. Pretreatment times were not necessary for the other compounds administered IM, as effects were more rapid with ketamine and traxoprodil (e.g. Fig 5). Summary statistics for the main effect of each RM ANOVA are presented in Table 1. Significant post-hoc tests are reported in the text. Additionally, the Cohen's d estimate of effect size was calculated from the AUCs using unpaired statistics, so as not to overestimate the effect size, using an online tool from Becker, 2000 (http://www.uccs.edu/~lbecker/). Cohen's d values of >0.2, >0.5 and >0.8 are described using the conventions of small, medium, and large effects, respectively [21].
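As a rough illustration of the summary statistics just described, the sketch below computes per-subject AUCs from the one-minute time courses and an unpaired (pooled-SD) Cohen's d, mirroring the approach of the Becker online tool; array names and shapes are illustrative, not taken from the study data.

```python
# Sketch of the AUC and effect-size summary described above. Assumes `drug` and
# `vehicle` are arrays of shape (n_subjects, n_minutes) holding baseline-
# normalized relative power in one band.
import numpy as np

def auc_per_subject(time_course: np.ndarray, start_min: int = 0) -> np.ndarray:
    """Area under the relative-power curve from start_min to the end of the session."""
    minutes = np.arange(time_course.shape[1])
    keep = minutes >= start_min          # e.g. start_min=15 for lanicemine
    return np.trapz(time_course[:, keep], minutes[keep], axis=1)

def cohens_d_unpaired(drug_auc: np.ndarray, veh_auc: np.ndarray) -> float:
    """Cohen's d from AUCs using unpaired (pooled SD) statistics."""
    n1, n2 = len(drug_auc), len(veh_auc)
    pooled_sd = np.sqrt(((n1 - 1) * drug_auc.std(ddof=1) ** 2 +
                         (n2 - 1) * veh_auc.std(ddof=1) ** 2) / (n1 + n2 - 2))
    return (drug_auc.mean() - veh_auc.mean()) / pooled_sd
```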
Effect of IV ketamine on qEEG
As indicated in Table 1, IV ketamine produced statistically significant main effects of treatment on relative power in the gamma and beta 1 bands. Post hoc tests confirmed a significant increase in gamma power after 0.56 mg/kg (p = 0.0216, Fig 2) with a Cohen's d of 0.94, indicating a large effect (Fig 6). There was also a significant decrease in the beta 1 power band after 0.56 mg/kg (p = 0.0404, Fig 4) with a Cohen's d of 1.00, indicating a large effect (Fig 6). IV ketamine had no significant effects on other power bands. Despite a medium Cohen's d effect size of 0.7, the decreases in relative alpha power after IV ketamine did not reach statistical significance (Holm-Sidak p > 0.05).
Effect of IM ketamine on qEEG
As indicated in Table 1, IM ketamine produced statistically significant main effects of treatment on relative power in the beta 1, alpha and delta bands. Similar to IV ketamine, IM ketamine increased gamma relative power (Fig 2) with Cohen's d effect sizes of ~0.6 (Fig 6); however, these increases in gamma power did not reach significance following RM-ANOVA (p > 0.05). Also similar to IV ketamine, IM ketamine produced significant dose-dependent decreases in beta 1 (Fig 4), with post hoc tests confirming decreases after 1.7 mg/kg and 3 mg/kg (p = 0.0112 and p = 0.0011, respectively; Fig 6). Cohen's d effect sizes for the beta 1 decreases were large, at 1.2 and 1.8 after 1.7 mg/kg and 3 mg/kg, respectively. Significant decreases were also seen in alpha relative power at these doses, with large Cohen's d effect sizes of 1.2 and 1.6 after 1.7 mg/kg and 3 mg/kg (post hoc p = 0.0205 and p = 0.0037, respectively). In contrast, delta power was significantly increased after 1.7 mg/kg and 3 mg/kg IM ketamine, with Cohen's d effect sizes of 1.6 and 1.9 (post hoc p = 0.0303 and p = 0.0214, respectively). Beta 2 and theta bands did not significantly differ from vehicle (p > 0.05).
Effect of IM lanicemine on qEEG
As indicated in Table 1, IM lanicemine produced statistically significant main effects of treatment on relative power in the gamma and beta 1 bands. Lanicemine produced a dose-dependent increase in gamma power, with statistical significance after 5.6 mg/kg IM (p = 0.0186, Fig 2) and a Cohen's d effect size of 1.20. Despite a significant main effect in the beta 1 band and a medium Cohen's d effect size of 0.66, the decrease in beta 1 following 5.6 mg/kg narrowly missed significance in post-hoc tests (Fig 4). Trends toward increases in beta 2 relative power were not significantly different from vehicle (p > 0.05). Lanicemine also had no significant effect on alpha, delta or theta relative power at the doses tested (p > 0.05).

[Figure caption (Fig 3): NR2B-selective NAMs have no effect on gamma (30-55 Hz) qEEG in cynomolgus monkeys. Y-axis is relative power in the gamma (30-55 Hz) frequency band of the EEG power spectrum. X-axis is time after IM or IV administration. Results are the mean ± SEM (N = 5-6). Traxoprodil, 10 mg/kg IM, and BMT-108908, 3 mg/kg IV (closed symbols), had no effect on gamma, p > .05 (Table 1).]
Effect of IM traxoprodil on qEEG
The NR2B NAM traxoprodil had no effect on gamma relative power (p > 0.05, Fig 3). However, statistically significant decreases were observed in beta 1 after 3.0, 5.6 and 10 mg/kg IM (Fig 5), with large Cohen's d effect sizes of 1.42, 1.10 and 1.23, respectively (p = 0.0141 for each; Fig 7). All doses of traxoprodil tested also produced statistically significant decreases in beta 2 relative power, with large Cohen's d effect sizes of 0.91, 0.96 and 1.061 at 3.0, 5.6 and 10 mg/kg, respectively (p = 0.0442 for each; Fig 7). Decreases in alpha power were not statistically significant despite large Cohen's d effect sizes (0.83, 1.24 and 1.26 after 3.0, 5.6 and 10 mg/kg, respectively). Similarly, increases in relative power in the delta band were not statistically significant despite large Cohen's d effect sizes (0.91, 0.98 and 1.152 after 3.0, 5.6 and 10 mg/kg, respectively). No significant effects on power in the theta band were observed.
Effect of IV BMT-108908 on qEEG
The NR2B NAM BMT-108908 had no effect on gamma relative power (p > 0.05, Fig 3). However, significant decreases were seen in beta 1 relative power at all doses tested (p = 0.0041, p = 0.0009 and p = 0.0004 for 0.3, 1.0, and 3.0 mg/kg, respectively; Fig 5). Cohen's d effect sizes were large, at 0.8, 1.2 and 1.6 after 0.3, 1.0 and 3.0 mg/kg, respectively (Fig 7). Similarly, significant decreases were seen in beta 2 relative power at all doses tested (p = 0.0049, p = 0.0049 and p = 0.0013 for 0.3, 1.0, and 3.0 mg/kg, respectively), with Cohen's d effect sizes of 0.77, 0.82 and 1.12, respectively. Significant decreases in alpha relative power were also seen at all doses (p = 0.0467 at 0.3 mg/kg, p = 0.0223 at 1 mg/kg and p = 0.0242 at 3 mg/kg), with Cohen's d effect sizes of 0.89 at 0.3 mg/kg, 1.47 at 1 mg/kg and 1.43 at 3 mg/kg. BMT-108908 had no significant effect on relative theta power.
Discussion
The present studies are the first to demonstrate the effects of NR2B-selective NAMs, as well as the NMDA receptor channel blockers ketamine and lanicemine, on quantitative EEG in nonhuman primates across a full range of clinically relevant frequency bands. In general, the channel blockers ketamine and lanicemine increased gamma power and decreased beta power. In contrast to the NMDA channel blockers, the selective NR2B NAMs traxoprodil and BMT-108908 had no effect on gamma power. However, the NR2B NAMs produced robust decreases in the alpha, beta 1 and beta 2 frequency bands. Both ketamine and the NR2B NAMs increased power in the delta frequencies.
While human data for NR2B NAMs are lacking, the present data demonstrating no effect on gamma power following treatment with NR2B NAMs are consistent with previously published reports in rats [13,14,19]. The Kocsis et al. (2012) study provided evidence for subunit specificity of the cortical gamma activity induced by NMDA receptor blockade, showing that increases in aberrant gamma activity are primarily mediated by antagonism of NMDA receptors containing the NR2A subunit, whereas targeting receptors containing NR2B, NR2C, or NR2D subunits did not produce this response [14]. Similarly, Sivarao et al. (2014) found no change in gamma and decreases in the beta 1 and beta 2 bands after traxoprodil administration in rats [13]. Therefore, the lack of effect of traxoprodil and BMT-108908 on gamma power in the nonhuman primate is consistent with published reports in rats. The robust changes in beta, alpha and delta frequencies following NR2B NAM administration suggest that these frequency bands may be useful as pharmacodynamic biomarkers in future clinical studies.
The primary clinical route of administration for NMDA receptor channel blockers and NR2B NAMs has been intravenous; however, clinical evidence for antidepressant activity after intramuscular injection of ketamine is growing [23]. Therefore, both routes were included in these studies. Additionally, one NR2B NAM was administered via each route. Ketamine is rapidly eliminated from plasma after IV administration [24], and its EEG effects in humans closely follow its pharmacokinetic profile [25]. The rapid rise and fall of gamma power after IV ketamine in the present study is consistent with its profile in humans [25]. However, when analyzed by area under the curve for the measure of effect size, the shorter profile after IV administration likely blunted ketamine's qEEG effects relative to IM administration in the present study. Nonetheless, the overall pattern of effects across power bands is similar between IM and IV ketamine. The direction of power changes was also similar for the two NR2B NAMs whether given IM or IV; therefore, the route of administration does not appear to be a large factor in the effect of channel-blocking NMDA antagonists or NR2B NAMs on qEEG power spectra.
One limitation of the present studies was the relatively small number of subjects. While 5-6 macaques is similar in size to many previously published nonhuman primate EEG studies [15,16,18], relatively small variation between subjects can influence the statistical significance of the analysis. For instance, traxoprodil increased delta power with relatively large effects in all subjects; however, differences in the sensitivity of individual animals to different doses (e.g. largest effect at 3 mg/kg in one animal and at 10 mg/kg in another) introduced sufficient variability for the RM ANOVA to fail to reach significance. However, the level of technical difficulty and the resource-intensiveness of the model did not allow for additional subjects to be included. Another potential concern with the methodology is that the comfortable, dimly-lit environment of the sound-attenuating cubicle could possibly cause the monkeys to sleep. Decreases in higher frequency bands (e.g. gamma or beta) and increases in lower frequency power bands (e.g. delta) may be consistent with sleep induction; however, subjects were continuously monitored via video camera during each session and there were no observations of eye closure or sleep when decreased beta and increased delta power occurred (data not shown). In addition, similar doses of these agents impaired cognition in cynomolgus monkeys without producing signs of motor impairment or sedation during operant tests performed in similar cubicles [26]. Therefore, the qEEG effects of NR2B NAMs were not likely to be due to induction of sleep.

[Figure caption (Fig 4): Effects on beta 1 (13-19 Hz) qEEG in cynomolgus monkeys. Y-axis is relative power in the beta 1 (13-19 Hz) frequency band of the EEG power spectrum. X-axis is time after IM or IV administration. N = 5-6. Error bars are SEM. The beta 1 band after 0.56 mg/kg IV ketamine (closed symbol) differs from vehicle (open symbol) at the p < .05 level (Table 1). Ketamine caused a decrease in beta 1 relative power after 3 mg/kg IM (closed symbol), differing from vehicle (open symbol) at the p < .001 level (Table 1). Lanicemine, 5.6 mg/kg IM (closed symbol), decreased beta 1 relative power but was not significantly different from vehicle (open symbol), p > .05 (Table 1). doi:10.1371/journal.pone.0152729.g004]
A reliable pharmacodynamic biomarker should show a systematic relationship with other pharmacologic effects of the compound. These same four compounds were all recently shown to impair cognition in a delayed match-to-sample (DMS) test in cynomolgus monkeys [26]. Interestingly, all of the doses of compounds that produced significant qEEG effects also produced behavioral effects on the DMS task. While there is not a 1:1 correspondence between qEEG and DMS effects (e.g. lower ketamine doses of 0.56 and 1.0 mg/kg affected DMS but not qEEG), there is good agreement between doses changing qEEG power and doses impairing cognition. Overall, these results suggest that qEEG changes can be a useful measure of pharmacodynamic activity in the brain with either NMDA channel-blocking antagonists or NR2B NAMs.
An important caveat for qEEG biomarkers is the possibility of biphasic qEEG responses. Indeed, doses of ketamine higher than those used here become anesthetic, and in both macaques and humans, power spectra are likely to differ once anesthetic effects are achieved. Accordingly, increases in beta and theta power have been reported after anesthetic doses of ketamine in rhesus monkeys (10-15 mg/kg IM) and humans (approximately 4 mg/kg IV) [20,25]. In the present context of developing novel antidepressants, a biphasic qEEG response is unlikely to be relevant, as antidepressant doses are much lower than anesthetic doses of NMDA channel-blocking antagonists.
In addition to aiding development of clinical biomarkers, the results of the present studies highlight something interesting about how cortical gamma oscillations may relate to the antidepressant effects of NMDA antagonists. Acute ketamine administration in humans induces psychotomimetic and dissociative effects coincident with an increase in gamma oscillations (whether measured by spontaneous qEEG or as gamma oscillations following evoked potentials) [12,27]. It is not clear whether the increase in gamma power is a necessary component of the antidepressant effects produced by NMDA antagonists. However, results of preclinical studies, including this one, question that hypothesis. While NR2B NAMs show little increase in cortical gamma in rodents, they do have robust antidepressant effects across species. This is the first report establishing that, as in rodents, NR2B NAMs do not affect gamma oscillations in primates. As mentioned previously, traxoprodil has shown antidepressant effects; however, EEG was not measured in those studies, and it is unknown how gamma oscillations were affected. In addition, lanicemine, a low-trapping NMDA channel blocker, was shown to produce robust increases in gamma oscillations in rodents, nonhuman primates (current report) and healthy human subjects [12]. However, despite the initial promise, that compound did not show therapeutic efficacy in large multi-center trials. Thus, there may be a double dissociation as far as gamma oscillations and efficacy in MDD are concerned: an increase in acute gamma oscillations does not ensure long-term efficacy against depression (lanicemine), while a lack of gamma oscillations does not preclude NR2B agents from being effective in the clinic against MDD (traxoprodil). The use of qEEG as a pharmacodynamic biomarker for NR2B NAM antidepressants may also provide clinical data that would help clarify the relationship between increases in gamma power and the antidepressant efficacy of NR2B antidepressants. In summary, the qEEG effect profiles of traxoprodil and BMT-108908 in the nonhuman primate are strikingly similar, with comparable effects seen across all power bands. These results suggest that, while elevations in gamma band power may provide evidence for NMDA channel block, changes in other qEEG bands may provide pharmacodynamic biomarkers for NR2B NAMs, specifically the reduction in beta and increase in delta power.

[Figure caption (Fig 6): Non-selective NMDA channel blockers elicit robust changes in qEEG power spectra in cynomolgus monkeys. Y-axis is relative power in designated frequency bands of the qEEG power spectrum. X-axis is Cohen's d calculated from the area under the relative power curve for each band. N = 5-6. Significant changes from qEEG AUC following vehicle are designated by *, **, or ***, indicating significant differences at the p < .05, p < .01, and p < .001 levels, respectively. doi:10.1371/journal.pone.0152729.g006]

[Figure caption (Fig 7): NR2B-selective NAMs elicit robust changes in qEEG power spectra in cynomolgus monkeys. Y-axis is relative power in designated frequency bands of the qEEG power spectrum. X-axis is Cohen's d calculated from the area under the relative power curve for each band. N = 5-6. Significant changes from qEEG AUC following vehicle are designated by *, **, or ***, indicating significant differences at the p < .05, p < .01, and p < .001 levels, respectively.]
The results of this study provide further evidence for the translation of qEEG measures across species and suggest that qEEG may serve as a clinical biomarker of the pharmacodynamic effects of NMDA antagonists and NR2B NAMs in early clinical trials.
Supporting Information S1 Data. Relative power data spreadsheet file includes relative power in each band for each treatment for each subject. Relative power spectra are averages of power across one minute bins over the 90 min session for each individual subject. Data are normalized to that individual subject's pre dose baseline levels for each spectral band. These are the data used to construct figures and analyzed to build the ANOVA table. (XLSX) | 6,147.4 | 2016-04-01T00:00:00.000 | [
"Biology"
] |
Electrochemical Anodization and Characterization of Titanium Oxide Nanotubes for Photo Electrochemical Cells
Due to its multitude of applications, titanium oxide is one of the most sought-after materials. This work demonstrates that TiO2 nanotube arrays can be formed by electrochemical anodization of titanium foil. Ammonium fluoride (NH4F, 0.25 wt%) was added to a solution of 99% ethylene glycol. Anodization was carried out at a constant DC voltage of 12 V for 1 hour, followed by annealing at 480 °C for 1 hour. FE-SEM was used to evaluate the surface morphology of the resulting nanotube arrays. A sharp photoluminescence peak was observed at the wavelength of 405 nm, which corresponds to the band gap energy (3.2 eV) of the anatase TiO2 phase. Humps at 391 and 496 nm are attributed to free excitations, while humps at 412 and 450 nm are attributed to oxygen vacancies developed on the surface of the titania nanotube arrays. FE-SEM results showed uniformly aligned TiO2 nanotube arrays with an inner diameter of 100 nm and a wall thickness of 50 nm.
INTRODUCTION
Today, a wide variety of materials must be employed in order to design and create devices that are acceptable for a number of commercial applications. The increased demand for nanomaterials has led to increased interest in technologies that build high-performance circuits. The geometry, shape, and morphology of nanostructures all play a crucial role in determining their effectiveness [1]. As the understanding of nanoscale materials increases, better integration of theory and models will lead to previously unthought-of advances in predicting and designing material properties at all scales, ranging from atomic-molecular nanostructure to microstructure behavior. Titanium dioxide nanotubes (TNT) have lower production costs, a larger surface-to-volume ratio, and a better combination of attributes than many other nanostructured oxide materials, which has fueled their increase in popularity. A variety of applications have been considered for TNT, including pool boiling [4,[5][6][7][8][9][10], photocatalysis [5][6][7][8][9][10], electrochromic devices [11][12], hydrogen generation [13,14], corrosion resistance [15], solar cells [16][17], sensors [18,19], storage devices [20], and catalyst supports [21]. Compared with other fabrication methods, electrochemical anodization offers the best capabilities, such as the ability to establish a specific thickness and size for the resulting nanotube arrays. It can also provide specializations tailored to specific applications through its uniform, controllable process. The process is cost-effective, and the tubes formed in this manner adhere very strongly to the substrate. Moreover, the anodization variables make it possible to change the film thickness and form simply by increasing or decreasing the value of a variable.
2. EXPERIMENTAL METHOD
Electrochemical anodic oxidation was used to create the TiO2 nanotube array film. Sigma-Aldrich, Bangalore, provided a high-purity titanium (99.8%) plate with a thickness of 0.25 mm, which was washed in DD water and then rinsed with acetone for 10 minutes. TiO2 nanotube arrays were created in a cylindrical electrochemical reactor. The experimental setup includes the titanium anode as the working electrode and platinum (Pt) as the counter electrode (cathode), immersed in an electrolytic solution of 0.25 wt% ammonium fluoride (NH4F) dissolved in 99 percent ethylene glycol. Anodization was carried out at a constant DC voltage of 12 V for 1 hour and the sample was then annealed at 480 °C for 1 hour. The electrolytic solution employed has a pH of 4.3 and a molarity of 0.06 M.
Figure 1. Experimental setup of Electrochemical Anodization
Two opposing processes occur at the oxide/electrolyte interface: the hydrolysis of TiO2 and the dissolution of TiO2. The TiO2 nanostructures formed depend on which of these two processes dominates at the oxide/electrolyte interface [24,25]. With fluorides present in the electrolyte (a mixture of ammonium fluoride and ethylene glycol), these two competing processes result in the creation of vertically aligned titanium oxide nanotubes on the surface of the titanium substrate.
MEASUREMENTS
High-resolution surface images were acquired using field-emission gun scanning electron microscopy (FEG-SEM). The structure and crystalline nature of the titania nanotubes were investigated using an XPERT-PRO X-ray diffractometer with Cu Kα radiation (λ = 1.54060 Å). The band gap energy was determined using photoluminescence (PL) spectroscopy.
FEGSEM ANALYSIS
The two figures displayed in this section provide top views of the TiO2 nanotube array surface structure in FEG-SEM at magnifications of 700 kX and 400 kX, respectively. The self-organized nanoporous TiO2 tubes were found to have a length of 200 nm, a pore width of 100 nm, and a wall thickness of 50 nm. To fabricate and analyze self-aligned TiO2 nanotube arrays, the arrays were generated using potentiostatic anodization of two different electrode topologies. Figure 5 displays the photoluminescence (PL) emission spectra of TiO2 nanotubes arrayed on titanium foil in the wavelength range of 300-600 nm at ambient temperature. A significant PL emission peak is observed at 405 nm, which is associated with the band gap energy (3.1 eV); the bulk anatase TiO2 phase has a band gap energy of 3.2 eV. Small humps at 412 nm and 450 nm may be indicative of oxygen vacancies that developed on the surface of the titania nanotube arrays. Two further humps, at 391 nm and 496 nm, may indicate free excitations.
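For reference, the correspondence between a PL peak wavelength and the photon (band-gap) energy quoted above follows from E = hc/λ; the short snippet below evaluates this generic conversion for the wavelengths mentioned (it is not part of the original analysis).

```python
# Convert PL emission wavelengths to photon energies: E ≈ 1239.84 eV·nm / lambda(nm).
def photon_energy_ev(wavelength_nm: float) -> float:
    return 1239.84 / wavelength_nm

for wl in (391, 405, 412, 450, 496):
    print(f"{wl} nm -> {photon_energy_ev(wl):.2f} eV")
# The 405 nm peak corresponds to ~3.06 eV, close to the 3.1-3.2 eV quoted for anatase TiO2.
```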
4. CONCLUSION
Nanotube arrays have been made by anodic oxidation of titanium and subsequently characterized. The strong peak at about 405 nm in the PL emission spectrum is consistent with the band gap energy of the bulk anatase TiO2 phase. A number of small humps were also observed in the PL spectra, which could be attributed to free excitations or to oxygen vacancies produced on the surface of the titania nanotube arrays. XRD validates the anatase phase of the TiO2 nanotube arrays, and the peaks seen in the XRD pattern agree with JCPDS Card No. 021-1272. FE-SEM (field-emission scanning electron microscopy) has verified the existence of nanotube arrays with an inner diameter of 100 nm and a wall thickness of 50 nm. The anatase TiO2 nanotube arrays produced are well aligned and hence suitable for use in photoelectrochemical cells for hydrogen generation. | 1,597.4 | 2021-11-01T00:00:00.000 | [
"Materials Science"
] |
Implementation of Lapped Biorthogonal Transform for JPEG-XR Image Coding
Advancements in digital image devices have culminated in increase in image size and quality. High quality digital images are required in different fields of life for example in medical, surveillance, commercials, space imaging, mobile phones, play stations and digital cameras. As a result, memory requirement for storing these high quality images has been increased enormously. Moreover, if we want to transmit these images over communication channel, it will require high bandwidth. Thus there is a need to develop techniques that reduces the size of image without significantly compromising the quality of digital image so that it can be stored and transmitted efficiently.
Introduction
Advancements in digital image devices have culminated in increase in image size and quality. High quality digital images are required in different fields of life, for example in medical, surveillance, commercials, space imaging, mobile phones, play stations and digital cameras. As a result, the memory requirement for storing these high quality images has increased enormously. Moreover, if we want to transmit these images over a communication channel, it will require high bandwidth. Thus there is a need to develop techniques that reduce the size of an image without significantly compromising its quality, so that it can be stored and transmitted efficiently.
Compression techniques exploit redundancy in image data to reduce the amount of storage required for an image. Compression performance parameters such as compression ratio, computational complexity, compression/decompression time and quality of the compressed image vary across compression techniques. The most widely used image compression standard is JPEG (ISO/IEC IS 10918-1 | ITU-T T.81) [1]. It supports baseline, hierarchical, progressive and lossless modes and provides high compression at low computational cost. Figure 1 shows the steps in JPEG encoding. It uses the Discrete Cosine Transform (DCT), which is applied on 8x8 image blocks. However, at low bit rates it produces blocking artifacts.
To overcome the limitations of JPEG, a new standard, JPEG2000 (ISO/IEC 15444-1 | ITU-T T.800), was developed [2]. JPEG2000 uses the Discrete Wavelet Transform (DWT) and provides a high compression ratio without compromising image quality, even at low bit rates. It supports lossless, lossy, progressive and region-of-interest encoding. However, these advantages are achieved at the cost of high computational complexity. Therefore there was a need for a compression technique that not only preserves the quality of high resolution images but also keeps the storage and computational cost as low as possible.
Figure 1. JPEG Encoding
A new image compression standard, JPEG eXtended Range (JPEG XR), has been developed which addresses the limitations of currently used image compression standards [3][4]. JPEG XR (ITU-T T.832 | ISO/IEC 29199-2) mainly targets increasing the capabilities of existing coding techniques and provides high performance at low computational cost. The JPEG XR compression stages are almost the same at a higher level as those of existing compression standards, but lower level operations such as the transform, quantization, scanning and entropy coding techniques are different. It supports lossless as well as lossy compression. The JPEG XR compression stages are shown in Figure 2. JPEG XR uses the Lapped Biorthogonal Transform (LBT) to convert image samples into frequency domain coefficients [5][6][7][8]. LBT is an integer transform and is less computationally expensive than the DWT used in JPEG2000. It reduces blocking artifacts at low bit rates compared to JPEG. Thus, due to its lower computational complexity and reduced artifacts, it significantly improves the overall compression performance of JPEG XR. Implementations of LBT can be categorized into software based and hardware based implementations. Software based implementation is generally used for offline processing and designed to run on general purpose processors; its performance is normally lower than that of a hardware based implementation and mostly it is not suitable for real time applications. Hardware based implementation provides superior performance and is mostly suitable for real time embedded applications. In this chapter we will discuss a LabVIEW based software implementation and a MicroBlaze based hardware implementation of LBT. The next section describes the working of the Lapped Biorthogonal Transform.
Lapped Biorthogonal Transform (LBT)
The Lapped Biorthogonal Transform (LBT) is used to convert image samples from the spatial domain to the frequency domain in JPEG XR. Its purpose is the same as that of the discrete cosine transform (DCT) in JPEG. LBT in JPEG XR operates on 4x4 image blocks. LBT is applied on block and macro block boundaries. The input image is divided into tiles prior to applying LBT in JPEG XR. Each tile is further divided into macro blocks as shown in Figure 3. Each macro block is a collection of 16 blocks, while a block is composed of 16 image pixels. The image size should be a multiple of 16; if it is not, the height and/or width of the image is extended to make it a multiple of 16. This can be done by replicating the image sample values at the boundaries (a small sketch of this padding step is given after the list below). The Lapped Biorthogonal Transform consists of two key operations: 1. Overlap Pre Filtering (OPF)
2. Forward Core Transform (FCT)
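As an illustration of the tiling constraint described above (image dimensions padded to a multiple of 16 by replicating boundary samples), a minimal sketch follows; the function name and example sizes are illustrative only and not part of the JPEG XR specification.

```python
# Sketch: pad an image so its height and width are multiples of 16
# (one macro block = 16x16 pixels = 16 blocks of 4x4), replicating edge samples.
import numpy as np

def pad_to_multiple_of_16(img: np.ndarray) -> np.ndarray:
    h, w = img.shape[:2]
    pad_h = (-h) % 16
    pad_w = (-w) % 16
    return np.pad(img, ((0, pad_h), (0, pad_w)), mode="edge")  # replicate boundaries

img = np.random.randint(0, 256, (500, 300))
padded = pad_to_multiple_of_16(img)
print(padded.shape)                                       # (512, 304)
print((padded.shape[0] // 16) * (padded.shape[1] // 16), "macro blocks")
```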
The encoder uses the OPF and FCT operations in the following steps, as shown in Figure 4. OPF is applied on block boundaries; the areas of sizes 4x4, 4x2 and 2x4 between block boundaries are shown in Figure 5.
The various steps performed in LBT are as follows:

1. In stage 1, the overlap pre filter (OPF_4pt) is applied to 2x4 and 4x2 areas between block boundaries. An additional filter (OPF_4x4) is also applied to 4x4 areas between block boundaries.
2. A forward core transform (FCT_4x4) is applied to the 4x4 blocks. This completes stage 1 of LBT.
3. Each 4x4 block now has one DC coefficient. As a macro block contains 16 blocks, there are 16 DC coefficients per macro block. All 16 DC coefficients of a macro block are arranged into a 4x4 DC block.
4. In stage 2, the overlap pre filter (OPF_4pt) is applied to 2x4 and 4x2 areas between DC block boundaries. An additional filter (OPF_4x4) is also applied to 4x4 areas between DC block boundaries.
5. The forward core transform (FCT_4x4) is applied to the 4x4 DC blocks to complete stage 2 of LBT. This results in one DC coefficient, 15 low pass coefficients and 240 high pass coefficients per macro block.

The transform Y of a 2-D input image X is given by Eq. (1) and Eq. (2), where T1 and T2 are the 1-D transform matrices for rows and columns, respectively. The Forward Core Transform is composed of the Hadamard transform, the T_odd rotation transform and the T_oddodd rotation transform. The Hadamard transform is the Kronecker product of two 2-point Hadamard transforms, Kron(Th, Th), where Th is given by Eq. (3). The T_odd rotation transform is the Kronecker product of the 2-point Hadamard transform and the 2-point rotation transform, Kron(Th, Tr), where Tr is given by Eq. (4). The T_oddodd rotation transform is the Kronecker product of two 2-point rotation transforms, Kron(Tr, Tr). Overlap pre filtering is composed of the Hadamard transform Kron(Th, Th), the inverse Hadamard transform, the 2-point scaling transform Ts, the 2-point rotation transform Tr and the T_oddodd transform Kron(Tr, Tr). The inverse Hadamard transform is the Kronecker product of two 2-point inverse Hadamard transforms, Kron(inverse(Th), inverse(Th)).
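To make the row/column and Kronecker-product descriptions above concrete, the sketch below applies a separable 2-D transform to a 4x4 block in both forms and checks that they agree. The 2-point Hadamard matrix used here is the textbook floating-point form, not the normative JPEG XR integer-lifting transform, so this is an illustration of the structure rather than the standard itself.

```python
# Illustrative separable 2-D transform of a 4x4 block (not the JPEG XR integer transform).
import numpy as np

Th = np.array([[1.0,  1.0],
               [1.0, -1.0]]) / np.sqrt(2.0)   # example 2-point Hadamard matrix
T4 = np.kron(Th, Th)                           # 4-point transform, Kron(Th, Th)

X = np.arange(16, dtype=float).reshape(4, 4)   # one 4x4 image block

# Row/column (separable) application: Y = T1 @ X @ T2.T
Y_sep = T4 @ X @ T4.T

# Equivalent Kronecker-product form acting on the row-major flattened block
Y_kron = (np.kron(T4, T4) @ X.reshape(-1)).reshape(4, 4)

assert np.allclose(Y_sep, Y_kron)              # both formulations give the same result
```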
LabVIEW based Implementation of LBT
LabVIEW is an advanced graphical programming environment. It is used by millions of scientists and engineers to develop sophisticated measurement, test, and control systems. It offers integration with thousands of hardware devices. It is normally used to program PXI-based systems for measurement and automation. PXI is a rugged PC-based platform for measurement and automation systems. It is both a high-performance and low-cost deployment platform for applications such as manufacturing test, military and aerospace, machine monitoring, automotive, and industrial test. In LabVIEW, the programming environment is graphical and a program is known as a virtual instrument (VI).
The LabVIEW implementation of LBT consists of 10 sub virtual instruments (sub-VIs). The LBT implementation VI hierarchy is shown in Figure 6. These sub-VIs are the building blocks of LBT. The operations of these sub-VIs follow the JPEG XR standard specifications [3]. OPF 4pt, FCT 4x4 and OPF 4x4 are the main sub-VIs and are used in both stages of LBT. OPF 4pt further uses the FWD Rotate and FWD Scale VIs. Similarly, FCT 4x4 and OPF 4x4 require the T_ODD, 2x2T_h, T_ODD ODD, T2x2h_Enc and FWD_T ODD ODD sub-VIs.
Figure 7 shows the main block diagram of the LBT implementation in LabVIEW that performs the sequence of operations on the input image. After the operation of OPF 4pt, OPF 4x4 is performed on 4x4 areas between block boundaries to complete overlap pre filtering. Figure 10 shows the block diagram of OPF 4x4.
OPF 4x4 operates on 16 image samples. It uses the T2x2_Enc, FWD Rotate, FWD Scale, FWD ODD and 2x2T_h sub-VIs. Here, too, these sub-VIs execute in parallel: four T2x2h_Enc and 2x2T_h sub-VIs execute in parallel, and similarly FWD Rotate, FWD Scale and FWD ODD are executed in parallel. OPF 4x4 starts processing 16 image samples at once and outputs all 16 processed image samples at the same time. Figure 11 shows the block diagram for the processing of OPF 4x4.
For the processing of image samples in the OPF 4x4 operation, the start point of OPF 4x4 and the image dimensions are required along with the input image samples. After the processing of OPF 4x4, FCT 4x4 is performed on each 4x4 image block. Figure 12 shows the block diagram of FCT 4x4. The FCT 4x4 operation requires the 2x2T_h, T_ODD and T_ODDODD sub-VIs. These sub-VIs are also executed in parallel to speed up the operation of FCT 4x4. FCT 4x4 operates on 16 image samples that are processed in parallel. This completes stage 1 of LBT and results in one DC coefficient in each 4x4 block. In stage 2, all operations are performed on these DC coefficients of all blocks. The DC coefficients are treated as image samples and arranged in 4x4 blocks. OPF 4pt is performed in the horizontal and vertical directions on the DC block boundaries over 4x2 and 2x4 areas. OPF 4x4 is also applied on 4x4 areas between DC block boundaries. FCT 4x4 is performed on each 4x4 DC block to complete stage 2 of LBT. At this stage, each macro block contains 1 DC coefficient, 15 low pass coefficients and 240 high pass coefficients.
We tested the LabVIEW implementation on an NI PXIe-8106 embedded controller. It has an Intel 2.16 GHz dual-core processor with 1 GB RAM. It takes 187.36 ms to process a test image of size 512x512. We tested LBT in lossless mode. The functionality of the implementation was tested and verified against the JPEG XR reference software ITU-T T.835 and the standard specifications ITU-T T.832. Memory usage by the top level VI is shown in Table 1.
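The throughput implied by these timing figures is simple arithmetic; the snippet below converts the per-image processing times reported in this chapter (187.36 ms here, and 27.6 ms for the soft-processor design discussed later) into frames per second. The values are taken from the text; the calculation itself is generic.

```python
# Frames-per-second from per-image processing time (times quoted in the chapter).
def frames_per_second(ms_per_image: float) -> float:
    return 1000.0 / ms_per_image

print(f"LabVIEW / PXIe-8106: {frames_per_second(187.36):.1f} fps for a 512x512 image")
print(f"Soft-processor design: {frames_per_second(27.6):.1f} fps")  # ~36 fps, as reported later
```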
Soft processor based hardware design of LBT
To use the Lapped Biorthogonal Transform in a real-time embedded environment, we need a hardware implementation. Application-specific hardware for LBT provides excellent performance, but upgrading such a hardware design is difficult because it requires remodeling the whole design. A pipelined implementation of LBT also provides outstanding performance, but due to the sequential nature of LBT it requires a large amount of memory [10][11][12]. In this section, we describe a soft embedded processor based implementation of LBT. The proposed architecture is shown in Figure 13. The soft embedded processor is implemented on an FPGA, and its main advantage is that we can easily reconfigure or upgrade the design. The processor is connected to a UART and an external memory controller through the processor bus. Instruction and data memories are connected to the soft embedded processor through the instruction and data buses, respectively. The instructions for LBT processing are stored in the instruction memory and executed by the proposed soft embedded processor core. Block RAM (BRAM) of the FPGA is used as data and instruction memory.
For the processing of LBT, the digital image is loaded into DDR SDRAM from an external source, such as an imaging device, through the UART. The image is first divided into fixed-size tiles, i.e. 512x512. Tile data is fetched from DDR SDRAM into the data memory. Each tile is processed independently. The OPF_4pt and OPF_4x4 operations are applied across block boundaries. After that, the FCT_4x4 operation is applied on each block to complete the first stage of LBT. At this stage, each block has one DC coefficient.
For the second stage of LBT, these DC coefficients are treated as single pixels arranged in DC blocks of size 4x4, and the same operations as in stage 1 are performed. After performing OPF_4pt, OPF_4x4 and FCT_4x4, stage 2 of LBT is complete. At this stage, each macro block has 1 DC coefficient, 15 low pass coefficients and 240 high pass coefficients. These coefficients are sent back to DDR SDRAM and new tile data is loaded into the data memory. DDR SDRAM is used only for image storage and can be removed if streaming of image samples from a sensor is available. Only the data and instruction memory is used in the processing of LBT. The flow diagram in Figure 14 summarizes the operations for LBT processing. The proposed design was tested on a Xilinx Virtex-II Pro FPGA and its functionality verified according to the standard specifications ITU-T T.832 and the reference software ITU-T T.835. Processor specifications of the design are listed in Table 4.
Processor Speed: 100 MHz
Processor Bus Speed: 100 MHz
Memory for Instruction and Data: 32 KB
Conclusion
In this chapter we have discussed the implementation of the Lapped Biorthogonal Transform in LabVIEW for the state-of-the-art image compression technique known as JPEG XR (ITU-T T.832 | ISO/IEC 29199-2). Such an implementation can be used in PXI-based high-performance embedded controllers for image processing and compression. It also supports research on, and efficient hardware implementation of, JPEG XR image compression. Moreover, we also proposed an easily programmable, soft processor based design of LBT which requires less memory for processing, making this design suitable for low-cost embedded devices.
Figure 5.
Figure 5. Image partitioning. The 2-D transform is applied to process the two-dimensional input image. A 2-D transform is implemented by performing the 1-D transform on the rows and columns of the 2-D input image. A matrix generated by a Kronecker product can also be used to obtain the 2-D transform. The transform Y of the 2-D input image X is given by Eq. (1) and Eq. (2).
Figure 7.
Figure 7. LBT Block Diagram. In stage 1, image samples are processed by OPF 4pt in the horizontal direction (along the width) of the image. This operation is performed on 2x4 boundary areas in the horizontal direction. Figure 8 shows the block diagram of OPF 4pt.
Figure 9.
Figure 9. Block Diagram for OPF 4pt Processing. The OPF 4pt operation is also performed in the vertical direction (along the height) of the image. For processing in both directions, OPF 4pt requires a 1-D array of input image samples, the starting point for the OPF 4pt operation and the dimensions of the input image.
Figure 14.
Figure 14. Flow Diagram of LBT Processing in the Proposed Design [9].
Figure 15.
Figure 15. Figure (a) shows the original image. Figure (b) shows the decompressed image, which was compressed by the proposed LBT implementation.
Table 1.
Table 1. Memory Usage. Important parameters of the implementation of the top level VI and sub-VIs are shown in Table 2.
Table 3.
Table 3. FPGA Resource Utilization. The test image is loaded into DDR SDRAM through the UART from a computer. The same test image was also processed by the reference software and the results compared. Both processed images were identical, which indicates correct functionality of our design. FPGA resources used in the implementation are shown in Table 3.
Table 4.
Table 4. Processor Resources. The memory required for data and instructions in our design is 262,144 bits. As the input image is divided into fixed-size tiles, i.e. 512x512, the design can process large image sizes; the minimum input image size is 512x512. Due to its low memory requirements, easy upgradability and tile-based image processing, the design is suitable for low-cost portable devices. The test image used is of size 512x512 and in unsigned 16-bit format. The execution time to process the test image is 27.6 ms, corresponding to a compression capability of 36 frames per second. Figure 15 shows the original and decompressed images, where the decompressed image was compressed by the proposed design. The lossless compression mode of JPEG XR was used to test the implementation, so the recovered image is exactly the same as the original image. | 3,495.8 | 2013-01-09T00:00:00.000 | [
"Computer Science"
] |
Decreased serum levels of IL-27 and IL-35 in patients with Graves' disease.
OBJECTIVES
Graves' disease (GD) is an autoimmune disease causing overproduction of thyroid hormone by the thyroid gland. The disease mainly results from the production of antibodies against TSH receptors. Cytokines play an important role in orchestrating the pathophysiology of autoimmune thyroid disease. The regulatory role of IL-12 on TH1 cells has been proven. IL-27 and IL-35, members of the IL-12 cytokine family, are two newly discovered cytokines. IL-35 has been identified as a novel immunosuppressive and anti-inflammatory cytokine, while IL-27 has both inflammatory and anti-inflammatory functions. The objective of the current study was to examine changes in the serum levels of these cytokines in GD patients in comparison to healthy controls.
MATERIALS AND METHODS
In this study, serum levels of IL-27 and IL-35 were determined by ELISA, and anti-TPO and anti-Tg antibodies were measured by RIA, in 40 new cases of Graves' disease. The findings were compared with 40 healthy controls.
RESULTS
The results showed significant differences in the serum levels of IL-27 and IL-35 between patients and controls, with P values of 0.024 and 0.0001, respectively; anti-TPO and anti-Tg levels of the cases were also significantly different from controls (p < 0.001).
CONCLUSION
The reduction in the serum levels of IL-27 and IL-35 in GD patients compared to normal subjects suggests the possible anti-inflammatory role of these cytokines in GD.
INTRODUCTION
Graves' disease (GD) is an autoimmune thyroid disease characterized by thyrotoxicosis, diffuse goiter and the presence of autoantibodies against the thyroid-stimulating hormone (TSH) receptor (1). The three major clinical signs of this disease are hyperthyroidism, ophthalmopathy and localized skin manifestations (2). GD is reported in 0.5% of the world population and its incidence is 5-10 times higher in women than in men (3). Although GD is a multifactorial disease whose etiology has not been fully fathomed, evidence shows that an imbalance between pro- and anti-inflammatory cytokines and the production of aberrant autoantibodies play a pivotal role in the disease pathogenesis (1). Many studies have shown that pro-inflammatory cytokines such as interleukin (IL)-2, IL-8, IL-6, tumor necrosis factor (TNF)-α and IL-17 are increased in the serum of GD patients (4). Evaluation of the involvement of T helper (TH) subsets in GD showed a TH2-biased immunological imbalance and an increase in TH1- and TH17-related cytokines compared to healthy controls (5)(6)(7).
Cytokines play a crucial role in triggering and coordinating inflammatory immune responses. Improper cytokine expression appears to influence the pathogenesis of many human diseases, including thyroid autoimmune diseases. The immune regulation exerted by two cytokines of the IL-12 family, namely IL-27 and IL-35, has been demonstrated in several autoimmune diseases (8). IL-27 is a heterodimeric cytokine comprised of the Epstein-Barr virus (EBV)-induced gene 3 (EBI3) subunit (a protein linked to IL-12 p40) and the p28 subunit (9). Antigen presenting cells, including monocytes, macrophages and dendritic cells, are the most important sources of IL-27 (10), which has both pro-inflammatory and anti-inflammatory activities (11). This cytokine is able to induce the proliferation of naive T cells and promote TH1 immune responses (12). Additionally, IL-27 regulates the production of anti-inflammatory cytokines such as IL-4, IL-10, and transforming growth factor beta (TGF-β) (13). IL-35 is a novel cytokine also belonging to the IL-12 family. It is produced by regulatory T cells and plays an important role in suppressing the immune system (14). Regulation of the cytokine network is a main target of various investigations in autoimmune inflammatory diseases (15,16). The immunoregulatory role of these two cytokines in GD is yet to be elucidated. Therefore, the aim of the present study was to generate new insight into the role of these cytokines in GD development or pathogenesis. For this purpose, the levels of the IL-27 and IL-35 cytokines were compared between GD patients and healthy controls. The possible relationship between these cytokines and the presence of anti-thyroid antibodies, including anti-thyroid peroxidase (anti-TPO) and anti-thyroglobulin (anti-Tg) antibodies, was further assessed in GD patients.
Study population
Forty new cases of GD referred to the Motahari outpatient clinics, Shiraz University of Medical Sciences, participated in this study. Patients were diagnosed by an endocrinologist on the basis of clinical manifestations, biochemical criteria of thyrotoxicosis (TSH < 0.05 mIU/L and increased free T3 and/or free T4 levels) and the presence of TSH receptor antibodies (TRAb). Graves orbitopathy (GO) was assessed in GD patients according to the NO SPECS grading system. Moreover, 40 healthy subjects matched in age and sex with the patient group were included in the study. The controls did not have any history of GD or other autoimmune diseases. Written informed consent was obtained from all the subjects, and the study was reviewed and approved by the Ethics Committee of Fasa University of Medical Sciences.
Sample collection
From all study participants, a 5 ml peripheral blood sample was collected. Following centrifugation at 3000 rpm for 10 min, the sera were separated, aliquoted and kept at -70 ºC until further use.
Measurement of anti-Tg and anti-TPO and TSH receptor antibodies
Serum levels of anti-Tg and anti-TPO antibodies were evaluated by radioimmunoassay (RIA) using an anti-hTg [125I] RIA kit (RK-8CT) and an anti-hTPO [125I] RIA kit (RK-36CT) obtained from the Institute of Isotopes, Hungary. The RIA protocol was carried out according to the manufacturer's instructions. Briefly, samples and calibrators were incubated together with biotin-labelled anti-Tg (or biotin-labelled anti-TPO for the anti-TPO assay) and 125I-Tg (or 125I-TPO) in streptavidin-coated tubes. Following incubation, the content of the tubes was aspirated and the bound activity was measured in a gamma counter. The concentrations of anti-Tg and anti-TPO antibodies were inversely proportional to the radioactivity measured in the test tubes. The concentration was read off the calibration curve generated by plotting the binding values against a series of calibrators containing known amounts of anti-Tg or anti-TPO.
Serum levels of TSH receptor antibody (TRAb) were evaluated by a competitive ELISA technique (Medizym® TRAb clone from Medipan, Dahlewitz/Berlin, Germany) according to the manufacturer's instructions. Briefly, samples, control and calibrators were added to the respective wells and incubated for 120 min at room temperature. After incubation, the wells were aspirated and washed to remove any residual droplets. Next, a monoclonal human antibody-biotin complex (M22) was added, and the incubation and washing steps were repeated. Conjugate solution was then added to the wells. After further incubation and washing, the substrate solution was added to each well. Finally, the optical density was read at 450 nm. The concentrations of TRAb were inversely proportional to the enzymatic activity measured in the wells. For the measurement of IL-27, the capture antibody was coated in each well of a 96-well microplate overnight at room temperature (RT). After washing and blocking, 100 μL serum samples of patients and controls were added for 2 h at RT. Plates were washed again, and 100 μL of the detection antibody and then 100 μL of the working dilution of streptavidin-horseradish peroxidase (HRP) were added for 20 min. The substrate solution (100 μL) was then added for 20 min, and finally the reaction was stopped by adding a stop solution to each well. Regarding IL-35 serum levels, 40 μL samples along with 10 μL anti-IL-35 biotinylated antibody and 50 μL streptavidin-HRP were simultaneously added to a 96-well microplate pre-coated with anti-IL-35 antibody. After 60 min at 37 ºC and subsequent washing, 100 μL of substrate solution was added for 10 min at 37 ºC, and the reaction was then stopped with a stop solution. Ultimately, for both cytokines, the optical density of the samples was read with a microplate reader at 450 nm. The levels of the cytokines were extrapolated from the related standard curves.
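The kit instructions cited above do not specify the curve model, so as an illustration of reading concentrations off a calibration curve for a competitive assay, the sketch below fits a four-parameter logistic (4PL) to hypothetical calibrator data and inverts it for an unknown sample; all numeric values are placeholders, not data from this study.

```python
# Hedged sketch: fit a 4PL calibration curve and interpolate an unknown sample.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """Signal as a function of concentration (decreasing for competitive assays)."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical calibrator concentrations and measured signals (counts or OD)
calib_conc = np.array([0.1, 0.5, 2.0, 10.0, 50.0, 200.0])
calib_sig = np.array([1.95, 1.80, 1.40, 0.85, 0.40, 0.20])

popt, _ = curve_fit(four_pl, calib_conc, calib_sig, p0=[2.0, 1.0, 5.0, 0.1])

def conc_from_signal(signal, a, b, c, d):
    """Invert the 4PL to read a sample concentration off the calibration curve."""
    return c * ((a - d) / (signal - d) - 1.0) ** (1.0 / b)

print(conc_from_signal(1.0, *popt))  # concentration corresponding to a signal of 1.0
```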
Statistical analysis
Statistical analysis and plotting of the graphs were performed using SPSS version 23 (SPSS Inc. Chicago, USA) and GraphPad Prism (GraphPad software Inc. CA) software, respectively. The non-parametric Mann-Whitney U-test was used to compare the serum levels of cytokines between patients and controls.
The correlation between the serum levels of IL-27 and IL-35 and autoantibodies was examined using the Pearson test. P-values less than 0.05 were considered significant.
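A minimal sketch of these tests using SciPy is given below; the arrays are synthetic placeholders loosely based on the reported group means, not the study data.

```python
# Illustrative Mann-Whitney U test (patients vs. controls) and Pearson correlation
# (cytokine vs. antibody levels) with made-up placeholder data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
il27_patients = rng.normal(3536, 1500, 40)   # pg/mL; spread is a guess
il27_controls = rng.normal(6013, 1900, 40)

u_stat, p_mw = stats.mannwhitneyu(il27_patients, il27_controls, alternative="two-sided")

anti_tpo = rng.normal(100, 30, 40)           # placeholder antibody levels
r, p_corr = stats.pearsonr(il27_patients, anti_tpo)

print(f"Mann-Whitney U p = {p_mw:.4g}, Pearson r = {r:.2f} (p = {p_corr:.2g})")
```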
Patient's demographics
In this study, 31 patients (77.5%) were female and 9 (22.5%) were male. The mean age of the patients was 35.9 years (range: 20-60 years). Of the 40 healthy controls, 30 (75%) were female and 10 (25%) were male. The mean age and age range of the controls were 36.5 years and 20-60 years, respectively.
Detection of anti-TPO and anti-Tg antibodies and thyroid hormones
In GD patients, the levels of free T4, free T3, TSH and TRAb were 32.1 ± 9.30 pmol/l, 12.5 ± 6.23 pmol/l, 0.1 ± 0.9 mIU/mL and 5.1 ± 0.2 mIU/mL, respectively. Twenty-two patients (55%) were positive for anti-TPO antibody and 33 (82.5%) were positive for anti-Tg antibody. All normal subjects were negative for these two antibodies. All GD patients were positive for TRAb (Table 1). Higher levels of anti-TPO and anti-Tg antibodies were observed in patients compared to the controls (p < 0.001, Figure 1).
Serum levels of IL-27 and IL-35
Analysis of the IL-27 cytokine level in the studied subjects showed a lower IL-27 serum concentration in the patients compared with the control group (3536 ± 246.3 versus 6013 ± 314.3 pg/mL). As shown in Figure 2A, the difference between these levels was significant (p = 0.024). Similarly, a lower IL-35 level was observed in the patients (3.02 ± 0.25 ng/mL) in comparison to the controls (7.12 ± 0.36 ng/mL), (p < 0.0001) ( Figure 2B). Statistical analysis showed no significant correlation between the IL-27 and IL-35 cytokines neither in patients nor in controls.
Relationship between IL-27, IL-35, anti-thyroid antibodies, TRAb and the grade of patients
The relationship between the levels of these two cytokines and the concentrations of anti-TPO antibody, anti-Tg antibody and TRAb was examined using the Pearson correlation test. In patients, age did not show any correlation with the levels of IL-27, IL-35, TRAb, anti-TPO, or anti-Tg antibodies (Table 2).
There was no significant correlation between any of the measured IL-35, IL-27, TRAb, anti-Tg and anti-TPO factors in the subjects and the disease grading parameter performed by the endocrinologist (Figure 3 and Table 3).
DISCUSSION
GD is a TH2-predominant autoimmune illness defined by the presence of anti-TSH receptor antibody, which stimulates thyroid hormone secretion (17). The appearance of other antibodies against thyroid antigens, such as anti-Tg and anti-TPO antibodies, has been reported in patients with hyperthyroidism (18). As observed in the present study, the serum levels of anti-Tg and anti-TPO antibodies in GD patients were significantly higher than those of control subjects, which is in line with previous reports (19). The generation of these antibodies can be attributed to the increased expression of the specific antigens and loss of tolerance in the organ (20). Cytokines take part in the induction and effector steps of all inflammatory and immune responses, playing a critical role in the progression of autoimmune diseases. Excess, reduced, or improper cytokine responses significantly contribute to the formation of autoimmune inflammation (8). In vitro analysis of intra-thyroidal lymphocytes, including TSH-receptor-specific T cells, has revealed the predominance of the TH2 response in GD. In fact, following the initiation of TH2-related responses, the inflammatory process continues through TH1 cells. In this way, cytokines such as IL-1α, IL-6 and TNF-α are generated by thyroid follicular cells as well as inflammatory cells. Moreover, increased gene expression of IL-1α, IL-13, IL-6, TNF-α and IFN-γ has also been shown in GD (5). On the other hand, IL-1β was able to induce the production of hyaluronan by thyroid epithelial cells and thyroid fibroblasts, a process possibly contributing to the progression of goiter in GD (21). Cytokines of the IL-12 family play important roles in the immune system and inflammation (22). IL-27 and IL-35 are relatively new members of the IL-12 family (23), and in the present study the serum levels of these two cytokines in GD patients were reduced in comparison to normal individuals. IL-27 has been shown to have two distinct, inflammatory and anti-inflammatory, functions (11). To our knowledge, no previous study has measured IL-27 in GD. Regarding other autoimmune diseases, reduced IL-27 has been reported in multiple sclerosis (MS) patients (24). After interferon therapy of these patients, increased serum levels of IL-27 were observed, and researchers predicted that administration of this cytokine would promote recovery from MS (24). Although increased levels of this cytokine were reported in patients with rheumatoid arthritis compared to healthy subjects (25), the role of IL-27 is complex, and in addition to its anti-inflammatory activity, its pro-inflammatory function should be considered (26). Among the reported functions of IL-27, inhibition of Th17 cells and an antagonistic effect on IL-6 activity are noteworthy (27,28). The dual function of IL-27 may be explained by the fact that this cytokine can be released from various cells, such as antigen presenting cells, under various conditions depending on the type of disease, the cytokine network and the dominant cytokine profile. In this regard, IL-27 has been proposed as a promising antiviral and anti-inflammatory agent with an apparently low toxicity risk, as observed in animal and in vitro models (29).
In the present study, a significant reduction was observed in IL-27 in the sera of patients with GD in comparison to the healthy control group (P = 0.024). The mean serum level of IL-27 in the patient group was approximately half that of the control group. This result casts doubt on an inflammatory role for this cytokine in the development of GD, while further supporting its anti-inflammatory properties, as evidenced in previous studies on some autoimmune mouse models such as uveoretinitis, multiple sclerosis and rheumatoid arthritis (30).
IL-35 is secreted by activated antigen-presenting cells, including B cells, monocytes, macrophages and dendritic cells. This cytokine was initially reported to be produced by Treg cells and to play an essential role in the inhibitory function of these cells (31). Recently, the immunomodulatory role of IL-35 has been identified in certain inflammatory conditions (32). The main inhibitory effects of IL-35 are exerted on Th1 and Th17 cells (31). Administration of IL-35 to mice with type 1 diabetes inhibited the disease (33). Moreover, the therapeutic effects of IL-35 on collagen-induced arthritis in the RA model have been documented (34). In the only previous study on GD, the levels of IL-35 and TGF-β were reduced while the serum levels of IL-17A, IL-23 and IL-6 were increased in Chinese GD patients compared with healthy controls (35), which is consistent with the present study. No correlations were observed between IL-35 and autoantibody levels in the patients of the current research. These results may suggest that this inhibitory cytokine could serve as a therapeutic agent in GD. It is likely that the administration of IL-35 would lead to the suppression of the related inflammatory process in GD. This hypothesis should be verified in future studies on mouse models of autoimmune thyroiditis.
CONCLUSION
The reduction in the serum levels of IL-27 and IL-35 in GD patients compared to normal subjects suggests a possible anti-inflammatory role for these cytokines in GD and supports their consideration as therapeutic candidates for the treatment of GD patients in the future.
Disclosure: no potential conflict of interest relevant to this article was reported.
"Medicine",
"Biology"
] |
Optical power scale realization using the predictable quantum efficient detector
We report realization of scales for optical power of lasers and spectral responsivity of laser power detectors based on a predictable quantum efficient detector (PQED) over the spectral range of 400 nm–800 nm. The PQED is characterized and used to measure optical power of a laser that is further used in calibration of the responsivities of a working standard trap detector at four distinct laser lines, with an expanded uncertainty of about 0.05%. We present a comparison of responsivities calibrated against the PQED at Aalto and the cryogenic radiometer at RISE, Sweden. The measurement results support the concept that the PQED can be used as a primary standard of optical power.
Introduction
The predictable quantum efficient detector (PQED) provides traceability of optical power to the SI system of units [1,2]. Such a traceability route is attractive, because the operation of PQEDs is as easy as that of other silicon trap detectors. In most national metrology institutes, the optical power is measured with an absolute cryogenic radiometer (ACR) [3,4,5]. These devices can achieve an uncertainty below 0.01%. However, they are expensive to obtain and maintain as they are operated at cryogenic temperatures. Aalto has taken into use a compact PQED [1] as a primary standard of optical power over the spectral range of 400 nm–800 nm. The PQED, shown in figure 1, consists of high-quality photodiodes with minimal losses of internal charge carriers, arranged in a wedged trap configuration to minimize the effects of reflectance correction [6,7]. PQEDs are compact in size and operate at room temperature. They show excellent repeatability of ~0.0016% [1]. The stability of the PQEDs is also excellent as reported in [8], where no change in responsivity is observed over 8 years within the measurement uncertainty of about 0.01%. In this work, we present an optical power and spectral responsivity scale realization based on a PQED. A silicon trap detector is calibrated with the new scale and compared to a calibration performed at RISE, Sweden. RISE uses an ACR as a primary standard of optical power measurements. The comparison of the two independent calibrations is presented in the sections below.
Measurement Setup
The new scale is based on a PQED and a multi-wavelength laser setup for comparing detectors developed at Aalto [9], used earlier in transmittance measurements of polymer samples [10]. The PQED photodiodes used are based on a p-type silicon wafer and have been described in detail in [1,6,7]. The mechanical structure of the PQED is as described in [1]. Figure 2 presents a simplified drawing of the setup. Various lasers have been installed in the setup. Lasers available include KrAr+, Ar+, HeCd, red and green HeNe, and a couple of diode lasers. The laser beam to be used is selected with a computer-driven mirror on a rail. Unused beams are terminated in beam dumps. The measurement beam is cleaned with a spatial filter based on two off-axis parabolic mirrors (OAP), and a laser power controller (LPC) stabilizes the beam intensity. The PQED and the trap detectors to be calibrated are mounted on a precise XY translation stage, and their photocurrents are recorded with a current-to-voltage converter (CVC) and a digital voltmeter (DVM). A multiplexer (MUX) is used to read various detectors with one set of electronics. PS 90 is the position controller which controls the movement of the filter wheel and the XY translation stage as commanded by the computer. The whole setup is computer controlled. In the PQED, two photodiodes are arranged in a wedged light-trapping configuration. Seven specular reflections take place between the photodiodes before the light is reflected out of the PQED. The structure and calculation of the reflectance of a p-type PQED are discussed in detail in [1,6,7]. In the current setup, the PQED is operated without a Brewster window in an ordinary laboratory room. Instead of a window, dry nitrogen purging is used to avoid dust and moisture contamination of the photodiodes through the open entrance aperture [11]. The dry nitrogen enters the detector at the back, flows through the detector and then leaves via the entrance aperture of the PQED, which is 10 mm in diameter. With the incoming light beam, the optical power P is calculated from the photocurrent I p of the PQED as

P = h c I p / (e λ [1 − ρ(λ)] [1 + g(λ)] [1 − δ(λ)]),   (1)

where λ is the vacuum wavelength of the laser used, ρ(λ) is the reflectance of the PQED, [1 + g(λ)] is the quantum yield in silicon, δ(λ) is the internal quantum deficiency of the photodiodes, estimated to be approximately 0.0008% [6], e is the elementary charge, h is Planck's constant, and c is the speed of light in vacuum. The specular reflectances of the PQED are measured at the respective wavelengths using the method described in [1,6]. In low-uncertainty measurements, the quantum yield may start to contribute at wavelengths below 500 nm [12], although its deviation from 1 was earlier thought to be significant only at wavelengths below 400 nm. The optical power of Eq. (1) is used to calculate the responsivity of trap detectors as R = I DUT / P, where I DUT is the measured photocurrent of the trap detector under test. The PQED is used once a year to calibrate Hamamatsu silicon trap detectors serving as working standards.
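For illustration, Eq. (1) and the resulting trap-detector responsivity can be evaluated with a minimal Python sketch; the photocurrents and the reflectance below are placeholder values chosen only to show the calculation, not measured data.

```python
# Physical constants (CODATA 2018 values)
h = 6.62607015e-34   # Planck constant, J s
c = 299792458.0      # speed of light in vacuum, m/s
e = 1.602176634e-19  # elementary charge, C

def pqed_optical_power(i_p, wavelength, reflectance,
                       quantum_yield=1.0, internal_quantum_deficiency=8e-6):
    """Optical power (W) from the PQED photocurrent i_p (A), following Eq. (1).

    wavelength: vacuum wavelength in m
    reflectance: specular reflectance rho(lambda) of the PQED
    quantum_yield: [1 + g(lambda)], about 1.0001 at 458 nm, 1 elsewhere
    internal_quantum_deficiency: delta(lambda), about 0.0008 % for these photodiodes
    """
    responsivity = (e * wavelength / (h * c)) * (1.0 - reflectance) \
        * quantum_yield * (1.0 - internal_quantum_deficiency)   # A/W
    return i_p / responsivity

def trap_responsivity(i_dut, optical_power):
    """Spectral responsivity (A/W) of the trap detector under test."""
    return i_dut / optical_power

# Placeholder values at the 633 nm HeNe line
P = pqed_optical_power(i_p=250e-6, wavelength=633e-9, reflectance=2e-4)
R_trap = trap_responsivity(i_dut=248e-6, optical_power=P)
print(f"optical power = {P * 1e6:.1f} uW, trap responsivity = {R_trap:.4f} A/W")
```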
Uncertainty Budget
The uncertainty budget of the new optical power and spectral responsivity scale is presented in Table 1. The responsivity of the PQED has a standard uncertainty of 0.011%–0.015%. This uncertainty value consists of the main components due to reflectance (0.001%), non-uniformity (0.008%), repeatability (0.001%), internal quantum efficiency (0.008%) and quantum yield (0.010% at 458 nm, zero at other wavelengths) of the PQED. The quantum yield is [1 + g(λ)] = 1.0001 ± 0.0001 at 458 nm, while g(λ) = 0 is assumed at the other wavelengths [12]. Uncertainty due to repeatability of results has been obtained by calculating the standard deviation of 10 averaged measurements. The error in alignment of detectors was obtained by tilting the detectors by a few degrees and calculating the change in the signal due to a change of 0.5° in the angle. The alignment error also accounts for the repeatability error of the linear translator. The uncertainties in the calibrations of the DVM and the CVC include all uncertainty components from the national standards of electricity to the measuring instruments. The DVM is calibrated by feeding the multimeter with a known current and voltage from a Keithley calibrator. The sensitivity of the CVC is calibrated by measuring the output voltage of the CVC with a calibrated DVM, while supplying a known current to the input with the Keithley calibrator. The trap detector has been scanned at a wavelength of 488 nm with a laser beam of 1.2 mm diameter in order to check the uniformity of the detector. The same spatial uniformity is expected to be valid at all wavelengths. The uncertainty due to the spatial non-uniformity of the trap detector is 0.023%, which is the largest component of the uncertainty budget.
The uncertainty of the optical power scale contains all listed components except the spatial non-uniformity of the trap detector. The expanded uncertainty of optical power is thus 0.024%–0.034%. The uncertainty of the spectral responsivity scale, i.e., of measurements of a trap detector against the PQED, includes all the components. The expanded uncertainty is 0.052%–0.057% over the wavelength range of 458 nm–633 nm, depending on the wavelength.
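As a sketch of how such a budget is combined, the standard-uncertainty components can be summed in quadrature and expanded with a coverage factor of k = 2; the component list below is deliberately incomplete and uses only the values quoted in the text, not the full Table 1.

```python
import math

# Standard-uncertainty components in %, as quoted in the text for 458 nm;
# further components (alignment, DVM and CVC calibrations, ...) from Table 1
# would be appended to this dictionary.
components = {
    "PQED reflectance": 0.001,
    "PQED spatial non-uniformity": 0.008,
    "PQED repeatability": 0.001,
    "internal quantum efficiency": 0.008,
    "quantum yield (458 nm only)": 0.010,
}

def combined_standard_uncertainty(u):
    """Root-sum-of-squares combination of uncorrelated components."""
    return math.sqrt(sum(v ** 2 for v in u.values()))

u_c = combined_standard_uncertainty(components)
U = 2.0 * u_c  # expanded uncertainty, coverage factor k = 2
print(f"u_c = {u_c:.4f} %, U (k=2) = {U:.4f} %")
```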
Comparison Measurement
One silicon trap detector was measured both at RISE, Sweden, and at Aalto using the new spectral responsivity scale. Table 2 shows the responsivities measured at the wavelengths of 458 nm, 515 nm, 543.5 nm, and 633 nm, along with the differences between the two responsivities. The expanded uncertainties presented in Table 2 are quadratic sums of the uncertainties of RISE and Aalto. The results are in agreement within the uncertainties (k =2) of 0.077% to 0.086% at all wavelengths as seen in Table 2. However, the difference between the two responsivities is somewhat higher at 458 nm than at other wavelengths.
Conclusion
Aalto has taken into use a new optical power scale based on a PQED. The PQED is used annually to measure the responsivities of Hamamatsu silicon trap detectors, used as working standards. Comparison with calibrations performed against an ACR at RISE using a silicon trap detector showed an agreement between the two scales within the expanded uncertainties of 0.077% -0.086% for the wavelength range 458 nm -633 nm. The comparison results deviate more at the wavelength of 458 nm than at 514 nm, 543.5 nm, or 633 nm. This may be because of high temporal instability of the Hamamatsu trap detectors at wavelengths below 476 nm [8]. Overall, the results indicate the usability of the PQED as a primary standard of optical power.
"Physics"
] |
Develop a Smart Microclimate Control System for Greenhouses through System Dynamics and Machine Learning Techniques
Agriculture is extremely vulnerable to climate change. Greenhouse farming is recognized as a promising measure against climate change. Nevertheless, greenhouse farming frequently encounters environmental adversity, especially greenhouses built to protect against typhoons. Short-term microclimate prediction is challenging because meteorological variables are strongly interconnected and change rapidly. Therefore, this study proposes a water-centric smart microclimate-control system (SMCS) that fuses system dynamics and machine-learning techniques in consideration of the internal hydro-meteorological process to regulate the greenhouse micro-environment within the canopy for environmental cooling with improved resource-use efficiency. SMCS was assessed by in situ data collected from a tomato greenhouse in Taiwan. The results demonstrate that the proposed SMCS could save 66.8% of water and energy (electricity) used for early spraying during the entire cultivation period compared to the traditional greenhouse-spraying system based mainly on operators’ experiences. The proposed SMCS suggests a practicability niche in machine-learning-enabled greenhouse automation with improved crop productivity and resource-use efficiency. This will increase agricultural resilience to hydro-climate uncertainty and promote resource preservation, which offers a pathway towards carbon-emission mitigation and a sustainable water–energy–food nexus.
Introduction
The Sustainable Development Goals (SDGs) call for imperative action to ensure food security while preserving natural resources and maintaining environmental sustainability, especially in the era of climate change [1]. Significant changes in Earth's climate have fostered more extreme weather events in recent decades and therefore have increasingly impacted global agriculture by deeply implicating the fate of food systems and directly affecting the future of "eating" for humans. For instance, Taiwan suffered from 15 extreme weather events in 2016, including 4 typhoons, 3 torrential rains, 4 severe rains, and 4 cold snaps. The huge agricultural loss caused by these extreme weather events accounted for 10.3% of the total value of agricultural production, resulting in severe fluctuations in food prices and disturbance in social equilibrium. Besides, changes in temperature and precipitation patterns may increase crop failures and production declines [2].
Agricultural systems are vulnerable to changes not only in climate but also in other evolving factors like farming practices and technology. The impacts of climate change on agricultural systems globally have been investigated in recent decades [3][4][5][6]. Greenhouses are an expensive and technological solution for the challenges climate change poses to agriculture. However, they are not a universal tool that will solve all problems since it is infeasible to grow all crops indoors. For specific, high-value crops, this makes sense. Climate-smart agriculture is an integrated approach that seeks to manage landscapes by assessing interlinked food security and climate change to simultaneously improve crop productivity as well as reduce agricultural vulnerability to pests and climate-related risks [7].
Greenhouse cultivation that creates a controllable and stable environment facilitating crop growth and yield could be a climate-smart practice [8][9][10]. Hemming et al. [11] indicated that the opportunities and challenges for the future implementation of sensor systems in greenhouses could be explored by using artificial-intelligence techniques. Greenhouse farming is recognized as a promising measure to cope with climate change because this physical practice can promote crop growth and productivity by adequately controlling a microclimate to increase food security [12][13][14]. Due to the high agricultural loss induced by extreme weather events in 2016, the Council of Agriculture in Taiwan launched a five-year funding program in December of 2016 to encourage greenhouse construction or upgrades (2000 ha expected) for mitigating agricultural losses and maintaining stable food prices in the future. Among limited managerial tools, spraying plays a pivotal role in greenhouse control of environmental cooling, especially for places like Taiwan with hot and humid weather, where environmental adversity can occur in greenhouses. For instance, Bwambale et al. [15] conducted a review of smart irrigation-monitoring and control strategies that aimed to improve water-use efficiency in precision agriculture. Tona et al. [16] conducted a technical-economic analysis on spraying equipment for specialty crops and indicated that the purchase price would make the robotic platform profitable. Spraying systems are evidently one of the key environmental-control strategies for greenhouse cultivation. Nevertheless, most of the previous research related to spraying for environmental cooling focused mainly on cooling effects [17], without considering resource consumption. For resource preservation, it is required to consider the resource-use efficiency of spraying for environmental cooling.
Greenhouse cultivation by nature substantially depends on environmental controls to stabilize crop productivity [18,19]. Accurate prediction or simulation of a greenhouse internal environment is needed to evaluate environmental-control strategies for crop growth [20][21][22][23][24][25]. Besides, short-term microclimate prediction is challenging because meteorological variables are strongly interconnected with values changing rapidly during an event. With the motivation to fill the research gap and support the above-mentioned governmental greenhouse policy to achieve SDGs #2 (Zero Hunger), #12 (Responsible Consumption and Production), and #13 (Climate Action), this study developed a watercentric smart microclimate-control system (SMCS) for greenhouse cultivation in response to climatic variation. The SMCS was designed to automatically activate early spraying for environmental cooling while consuming less water and energy. The SMCS seamlessly integrates a system-dynamics (SD) model coupled with a physically based (i.e., a hydrometeorological process) estimation model, a machine-learning prediction model, and a spray mechanism. A traditional greenhouse-spraying system based on the physically based estimation model and the spray mechanism coupled with operators' experience served as a benchmark for exploring the usefulness and applicability of the proposed SMCS. A tomato greenhouse located in Changhua County of Taiwan formed the case study, where the in situ datasets for use in this study were collected by Internet of Things (IoT) devices. The SMCS is expected to increase greenhouse automation and reinforce the efficiency of resource utilization, which can pave the way to reducing carbon emissions and promoting water-energy-food-nexus synergies in greenhouse farming.
Materials and Methods
This study proposes a water-centric SMCS that fuses system-dynamics and machine-learning techniques to regulate the greenhouse micro-environment within the canopy, with improved resource-use efficiency. The research flow chart is shown in Figure 1. We first collected the historical IoT monitoring data of the investigative greenhouse. Based on the IoT data, the SD model simulated the greenhouse microclimate within the canopy before and after spraying for environmental cooling. The back-propagation neural-network (BPNN) model predicted one-hour-ahead greenhouse internal temperature and relative humidity, where the initial inputs were the IoT data. Based on the prediction results, a spray mechanism was designed to determine the necessity of early spraying for environmental cooling. Consequently, the impacts of spraying on the internal environment and resource consumption were investigated. This study further compared the spray effects between the SMCS and the traditional greenhouse-spraying system (a benchmark), with the main focus on the resource consumption of spraying for environmental cooling. In the end, the potential of the SMCS for agricultural-loss mitigation from the perspective of water-energy-food-nexus synergies was discussed. It was noted that both traditional and machine-learning-based systems were constructed based on the IoT data collected from the same trial from 20 May to 20 July 2019.
Study Area and Materials
In this study, a total of 1488 hourly meteorological datasets related to tomato cultivation were collected from 20 May to 20 July 2019 by IoT devices installed inside and outside a privately owned greenhouse located in Changhua County of Taiwan (Figure 2). The IoT devices (Figure 2) installed in the greenhouse were developed by the Taiwan Agricultural Research Institute. The size of the greenhouse is about 52 m × 30 m × 6 m (length × width × height), indicating that the land area of the greenhouse is about 1560 m². Monitoring items consisted of internal/external temperature, internal/external relative humidity, external insolation, wind speed, and wind direction (Table 1). It is noted that this study adopted IoT datasets for model-construction and evaluation purposes only.
System Dynamics (SD) for Simulating Greenhouse Environment
SD is a set of process-oriented research methods specializing in the causal-feedback relationship among many variables and high-order non-linear systems [26][27][28]. It also specializes in explaining the results of system behavior through structural reasons behind the behavior [29]. SD has been widely used for simulating the non-linear behaviors in complex systems over time in various fields, including greenhouse management, forecasting and experimentation [30][31][32], rooftop farming [33], and the water-food-energy nexus [34,35].
This study explored the causal loops of SD for greenhouse cultivation by considering the spray effect (Figure 3a). It is noted that the SMCS was constructed to reduce internal temperature and increase internal relative humidity by raising the partial pressure of water vapor to achieve the effect of cooling and humidification. A physically based model was constructed based on the SD model to estimate the greenhouse internal temperature and relative humidity before and after spraying. The framework of the SD model coupled with the physically based estimation model is shown in Figure 3. Referring to Lee et al. [17], greenhouse internal relative humidity and temperature were considered to be a function of the conservation of mass and the conservation of energy, which consisted of two parts. Part 1 estimated the internal relative humidity by calculating the variation in moisture in the air. Part 2 estimated the internal temperature by calculating enthalpy and heat conduction. The formulation of greenhouse internal relative humidity and temperature is briefly introduced below.
Formulation of Greenhouse Internal Relative Humidity
The physically based estimation model of internal relative humidity was constructed by the equations of the conservation of mass and the conservation of energy (Equation (1)).
where dH/dt is the indoor absolute humidity change rate in a time period (kg/m³·h), β i,t is the spray efficiency (%), Water i,t denotes the amount of spray (kg), Vent i,t denotes the indoor ventilation (kg/h), and H i,t (H o,t ) denote the internal (external) absolute humidity (kg/m³) at t. V GH denotes the total capacity of the greenhouse (m³), and D air denotes the air density (1.2 kg/m³).
where RH i,t (RH o,t ) denotes the indoor (external) relative humidity (%) at t, esi i,t (esi o,t ) denotes the indoor (external) saturated vapor pressure (kPa) at t, and P atm denotes the atmospheric pressure (101 kPa).
where C i,t is the ventilation utilization factor at t, A GH is the ventilation area of the greenhouse (m²), and WS t denotes the wind speed (m/h) at t.
where H i,t+1 and H i,t denote the indoor absolute humidity at t + 1 and t (kg/m³), respectively.
where ei i,t+1 denotes the indoor partial pressure of water vapor (kPa) at t + 1. Consequently, the internal relative humidity (RH i,t+1 ) at t + 1 could be calculated by Equation (10).
Formulation of Greenhouse Internal Temperature
The internal temperature was also constructed by the equations of the conservation of mass and the conservation of energy (Equation (11)).
where dh/dt denotes the indoor change rate of enthalpy in a time period (kJ/kg·h); h i,t and h o,t denote the indoor and external enthalpies (kJ/kg) in the air at t, respectively; Vent i,t denotes the ventilation rate (m³/h) at t; V GH denotes the total capacity of the greenhouse (m³); D air denotes the air density (1.2 kg/m³); K in denotes the indoor coating material's heat-convection parameter in the air (6.4 W/m²·°C); A w denotes the area of the coating material (m²); T s,t , T i,t , and T f,t denote the indoor temperature (°C) of the coating material, the indoor temperature (°C), and the indoor ground temperature (°C) at t, respectively; A f denotes the total ground area of the greenhouse (m²); and K f denotes the indoor ground-to-air heat-convection parameter (4.65 W/m²·°C).
where H i,t denotes the indoor absolute humidity (kg/m³) at t.
where T o,t denotes the external temperature (°C) at t, and H o,t denotes the external absolute humidity (kg/m³) at t.
where a is the solar-absorption rate on the surface of the material (0.65%), Rn o,t denotes the external solar radiation (W/m²) at t, and K out denotes the thermal conductivity on the surface of the material (6.3 W/m²·°C).
where ref denotes the ground reflectivity (0.2), par o,t denotes the external insolation at t (W/m²), and Rn lon denotes the atmospheric long-wave radiation (343 W/m²).
where B is the Stefan–Boltzmann constant (5.67 × 10⁻⁸ W m⁻² K⁻⁴). Because this study considered spray to be a means of humidification and cooling, it was necessary to calculate the internal heat removed by spraying, as shown in Equation (17) (refer to [36]).
where Q t denotes the heat removed by spraying (kJ/h), β i,t denotes the indoor spray efficiency (%) at t, Water i,t denotes the indoor spray amount (kg/h) at t, and H fg denotes the latent heat of water evaporation (2256.6 kJ/kg). In Equation (18), dT/dt denotes the indoor temperature change rate in a time period (°C/h), and C p denotes the specific heat of the air (1.0052 kJ/kg·°C). Consequently, the internal temperature at t + 1 could be obtained from Equation (19).
where T i,t+1 and T i,t denote the indoor temperature (°C) at t + 1 and t, respectively. Details of the formulation of greenhouse relative humidity (Part 1) and internal temperature (Part 2) can be found in the Supplementary Material.
The BPNN is one of the most widely used ANNs. This study utilized the BPNN to predict one-hour-ahead greenhouse internal temperature (T i (t + 1)) and relative humidity (RH i (t + 1)) based on current information on six meteorological factors, including external temperature (T o ), external relative humidity (RH o ), external insolation (par o ) and wind speed (WS), internal temperature (T i ), and internal relative humidity (RH i ) (Figure 3b). The construction of the BPNN prediction model was based on a total of 1488 hourly IoT data, where 64, 16, and 20% of the data were shuffled and randomly allocated into training, validation, and testing stages, respectively. The architecture of the BPNN model constructed in this study is illustrated in Figure 3b. The parameter setting of the BPNN model is shown in Table 2, where the number of neurons in the hidden layer and the batch size were determined to be 20 and 64, respectively, through trial-and-error processes. The relevant trial-and-error results are presented in Tables 3 and 4. Figure 4 presents the spray-simulation flow chart of the SMCS. According to the one-hour-ahead predictions (t + 1) of greenhouse internal temperature and relative humidity obtained from the BPNN model, a spray mechanism with spraying criteria was designed to determine the time to spray, which is introduced as follows. According to Xue et al. [58], in greenhouse cultivation the net photosynthetic rate and cumulative photosynthesis of tomato leaves could be significantly improved when the internal relative humidity reached 70%. Liou et al. [59] indicated that the formation of lycopene in tomatoes would be reduced if the greenhouse internal temperature exceeded 28 °C. Therefore, this study activated sprayers for environmental cooling under two conditions: when the internal relative humidity fell below 70%, and when the internal relative humidity was below 90% and the internal temperature exceeded 28 °C.
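A minimal sketch of such a BPNN, using the 64/16/20% data split, 20 hidden neurons and batch size of 64 reported above, is given below; the random arrays merely stand in for the 1488 hourly IoT records, and the activation function and optimizer are assumptions rather than settings taken from Table 2.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from tensorflow import keras

# Inputs at time t: [T_o, RH_o, par_o, WS, T_i, RH_i]; targets at t + 1: [T_i, RH_i].
# Random placeholders stand in for the 1488 hourly IoT records.
rng = np.random.default_rng(0)
X = rng.random((1488, 6))
y = rng.random((1488, 2))

# 64 / 16 / 20 % split into training, validation and testing sets
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.20, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.20, random_state=1)

scaler = MinMaxScaler().fit(X_train)

model = keras.Sequential([
    keras.layers.Input(shape=(6,)),
    keras.layers.Dense(20, activation="sigmoid"),  # 20 hidden neurons (Table 2)
    keras.layers.Dense(2),                         # T_i(t+1) and RH_i(t+1)
])
model.compile(optimizer="adam", loss="mse")
model.fit(scaler.transform(X_train), y_train,
          validation_data=(scaler.transform(X_val), y_val),
          epochs=200, batch_size=64, verbose=0)

y_pred = model.predict(scaler.transform(X_test), verbose=0)
```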
Construction of the Spray Mechanism
To avoid resource over-consumption, sprayers would not be activated if the internal relative humidity exceeded 90% or the internal temperature fell below 25 °C. Besides, the switching on/off of the sprayers would be carried out based on the predicted values of internal relative humidity and temperature. Therefore, the spray mechanism would activate sprayers for environmental cooling subject to two criteria: (1) the one-hour-ahead prediction of internal relative humidity would be less than 70% and the one-hour-ahead prediction of internal temperature would be higher than 25 °C, and (2) the one-hour-ahead prediction of internal relative humidity would be less than 90% and the one-hour-ahead prediction of internal temperature would be higher than 28 °C. Spraying would terminate either when the internal temperature and relative humidity met the environmental suitability for tomato growth or when the total amount of spray exceeded the maximal spray volume within one hour (i.e., 1.35 kg).
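The two activation criteria, the suppression conditions and the hourly spray budget described above can be condensed into the following sketch; `update_after_spray` is a hypothetical placeholder for the physically based estimation model, and the assumption that the three sprayers fire together is ours, not the authors'.

```python
MAX_SPRAY_PER_HOUR = 1.35   # kg, total for three sprayers
SPRAY_INCREMENT = 0.001     # kg per spray event per sprayer

def spray_needed(t_pred, rh_pred):
    """Activation criteria for early spraying (t_pred in deg C, rh_pred in %)."""
    if rh_pred >= 90.0 or t_pred <= 25.0:      # avoid resource over-consumption
        return False
    if rh_pred < 70.0 and t_pred > 25.0:       # criterion (1)
        return True
    if rh_pred < 90.0 and t_pred > 28.0:       # criterion (2)
        return True
    return False

def simulate_spraying(t_pred, rh_pred, update_after_spray):
    """Spray incrementally until the microclimate is suitable or the hourly
    budget is exhausted; update_after_spray(t, rh, amount) stands in for the
    physically based re-estimation after each spray event."""
    total = 0.0
    while spray_needed(t_pred, rh_pred) and total < MAX_SPRAY_PER_HOUR:
        amount = 3 * SPRAY_INCREMENT            # three sprayers firing together
        t_pred, rh_pred = update_after_spray(t_pred, rh_pred, amount)
        total += amount
    return t_pred, rh_pred, total
```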
In the case of no spraying being required for environmental cooling, the one-hourahead predictions of internal temperature and relative humidity obtained from the BPNN model would be fed back to the system and serve as the initial input values of the BPNN model at the next time-step (the orange dotted line in Figure 4). If either above-mentioned activation criterion for spraying was met, a spray of 0.001 kg would be carried out, leading to a re-calculation of the internal temperature and relative humidity after spraying by using the physically based estimation model. The spraying process would repeat until reaching the stop criteria. It is noted that a sprayer would not be activated if the required amount of spray was less than its minimal spray volume (=the minimal duration of spray × the rate of spray). When spraying terminates, the final one-hour-ahead estimates (t + 1) of internal temperature and relative humidity obtained from the physically based model would be fed back to the system and serve as the initial input values of the BPNN model at the next time-step (the orange dotted line in Figure 4). For the greenhouse investigated and the sprayer selected for use in this study, it would require three sprayers to cover the entire greenhouse farm (1560 m 2 ). The weight of spray each time would be 0.001 kg per sprayer, and the total weight of spray per hour would be 1.35 kg for three sprayers. Therefore, the control loop would be evaluated at a rate of 8 s.
Evaluation of Model Performances
To explore the spray effect of the SMCS on greenhouse farming, the above-mentioned spraying process for environmental cooling was implemented on all 1488 IoT data collected in this study. For comparison purposes, a traditional greenhouse-spraying system was established by integrating the physically based estimation model with the spray mechanism only, whereas the physically based model was responsible for estimating one-hour-ahead greenhouse internal temperature and relative humidity before and after spraying.
This study used the root-mean-square error (RMSE) and the coefficient of determination (R 2 ) as the statistical indicators to evaluate model performance. Their mathematical formulas refer to Equations (20) and (21).
Root-Mean-Square Error:

RMSE = [ (1/N) Σ (y i − o i )² ]^(1/2)   (20)

Coefficient of Determination:

R² = [ Σ (y i − ȳ)(o i − ō) ]² / [ Σ (y i − ȳ)² · Σ (o i − ō)² ]   (21)

where N is the total number of data, y i is the output value of the model, o i is the observation value, and ȳ and ō are the averages of the output values and the observation values, respectively. According to the definitions of the two indicators, a model is considered to perform well if it produces a higher R² value and a lower RMSE value than the comparative model(s).
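For reference, the two indicators as written in Equations (20) and (21) correspond to the following straightforward NumPy implementation:

```python
import numpy as np

def rmse(y, o):
    """Root-mean-square error between model outputs y and observations o, Eq. (20)."""
    y, o = np.asarray(y, dtype=float), np.asarray(o, dtype=float)
    return np.sqrt(np.mean((y - o) ** 2))

def r_squared(y, o):
    """Coefficient of determination as the squared Pearson correlation, Eq. (21)."""
    y, o = np.asarray(y, dtype=float), np.asarray(o, dtype=float)
    num = np.sum((y - y.mean()) * (o - o.mean())) ** 2
    den = np.sum((y - y.mean()) ** 2) * np.sum((o - o.mean()) ** 2)
    return num / den
```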
Results
This study developed a water-centric SMCS dedicated to greenhouse farming and the spray effect on greenhouse microclimate for environmental cooling with the relevant resource consumption being investigated. The operation of the SMCS was composed of four main phases: to simulate greenhouse environmental dynamics in consideration of the spray effect (by the SD model), to predict one-hour-ahead internal temperature and relative humidity (by the BPNN model), to determine the necessity of spraying for environmental cooling (by the spray mechanism), and to estimate the required amount of spray to manage a microclimate suitable for tomato growth in the coming hour (by the physically based model). The SMCS was applied to the 1488 in situ data collected from a greenhouse on 20 May 2019 and 20 July 2019. The modeling results are presented and discussed as follows. Table 5 shows the performance of the physically based estimation model and the BPNN prediction model with respect to greenhouse internal temperature and relative humidity based on test datasets. For the physically based estimation model, the R 2 and RMSE values of the internal temperature were 0.80 and 1.89 • C, respectively, whereas those of the internal relative humidity were 0.79 and 8.17%, respectively. The results demonstrate the accuracy and reliability of the physically based model. As for the BPNN prediction model, its R 2 and RMSE values of the internal temperature were 0.83 and 1.37 • C, respectively, whereas those of the internal relative humidity were 0.88 and 3.9%, respectively. The results also demonstrate the accuracy and reliability of the BPNN model. It appears that the BPNN model is superior to the physically based model in terms of higher R 2 and lower RMSE values. Figures 5 and 6 show the errors and error distributions of internal-temperature and relative-humidity estimates obtained from the physically based model and the BPNN model, respectively. In both error plots, positive values indicate overestimation whereas negative values indicate underestimation. Regarding the physically based estimation model, it can be seen in Figure 5a that the errors of the internal temperature mostly fell within 1 and 2 • C (overestimated), with an overestimation occurrence frequency (1098 times) much higher than the underestimation one (387 times). According to Figure 5b, the errors in the internal relative humidity were mostly concentrated within −3% and −6% (underestimated), with an underestimation occurrence frequency (787 times) higher than the overestimation one (699 times).
Comparison of Model Accuracy and Reliability between the Physically Based and ANN Models
Regarding the BPNN prediction model, the results of Figure 6a indicate that the errors of the internal temperature mainly fell within −1 and 0 • C (under prediction), where underprediction (1176 times) occurred more frequently than overprediction (302 times). According to Figure 6b, the errors in the internal relative humidity were mainly concentrated within −3% and 0%, where underprediction (959 times) also occurred more frequently than overprediction (517 times). It also appears that the BPNN model performed better than the physically based model in terms of smaller error ranges and error distributions centering at zero.
Furthermore, the results shown in Table 5 and Figures 5 and 6 are quite consistent, which shows that the overall performance of the BPNN model was slightly better than that of the physically based model. This recommended the incorporation of the BPNN model into the SMCS to predict one-hour-ahead internal temperature and relative humidity in this study. Tables 6 and 7 show the results of internal temperature and relative humidity before and after spraying by the traditional spraying system and the proposed SMCS, respectively. Table 6. Results of greenhouse environmental control on internal temperature and relative humidity before and after spraying for environmental cooling by the traditional spraying system (20 May 2019-20 July 2019).
The results of Table 6 indicate that the average and standard deviation of the internal temperature after spraying decreased by 2.6 and 2.3 °C, respectively. For the internal relative humidity after spraying, the average value increased from 72% to 86%, whereas the standard deviation dropped from 16% to 7%.
The results of Table 7 show that both the average and standard deviation of the internal temperature after spraying decreased by 1.4 °C. For the internal relative humidity after spraying, the average value increased from 74% to 89%, whereas the standard deviation dropped from 12% to 4%. These results demonstrate that the SMCS could more effectively reduce the internal temperature while increasing the internal relative humidity after spraying than the traditional one, which supports the practicability of the proposed SMCS on greenhouse farms.
Comparison of Resource Consumption between Traditional and Smart Microclimate-Control Systems
Concerning spray-related resources utilization for greenhouse environmental control over the entire investigative period, water consumption could be obtained directly from summing up the amount of spray at each time-step while power consumption would be converted from horsepower and the total operating hours of the sprayers. For spraysimulation purposes, this study adopted the "FH-09 power spray motor" sprayer launched by the Fog Century Environmental Protection and Energy Saving Enterprise Co. Ltd., located in Taichung City, Taiwan. The main specifications of the sprayer are a horsepower of 1.125 kW, a water absorption of 0.15 kg/h, and an applicable area of about 400 to 600 m 2 . Considering the greenhouse investigated in this study occupies an area of 1560 m 2 , it would require three sprayers to cover the entire greenhouse farm. Table 8 compares the traditional and the proposed control systems regarding the resource consumption of spraying for environmental cooling. It is noted that the numbers of the on/off switches of the sprayers associated with the two comparative systems differed slightly (736 times for the traditional system vs. 726 times for the smart system). Therefore, the difference of the two systems in power consumption enabling the switching on/off of the sprayers could be ignored. Under this assumption, the traditional system consumed about 129,478 kg of water and 90 kWh of electric power for greenhouse environmental control during the entire tomato-cultivation period. In contrast, the SMCS only consumed about 42,962 kg of water and 29.8 kWh of electric power. The results demonstrate that the SMCS consumed far fewer resources for spraying than the traditional system, with water-and power-saving rates reaching 66.8%. It was further noticed that early spraying for environmental cooling suggested by the SMCS allowed the wind to blow away excess internal water vapor one hour ahead, leading to a decrease in the internal relative humidity. Spray efficiency is known to be inversely proportional to the internal relative humidity. Therefore, the amount of spray could be reduced due to early spraying.
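The water- and power-saving rates quoted in Table 8 follow directly from the consumption figures given above, as the short check below shows (small differences arise only from rounding):

```python
water_traditional, water_smcs = 129478.0, 42962.0   # kg of spray water
power_traditional, power_smcs = 90.0, 29.8          # kWh of electric power

water_saving = 1.0 - water_smcs / water_traditional   # ~0.668
power_saving = 1.0 - power_smcs / power_traditional   # ~0.669
print(f"water saving {water_saving:.1%}, power saving {power_saving:.1%}")
```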
Evaluation of Hazard Mitigation by the SMCS
This study further evaluated the potential contribution of the proposed SMCS to the governmental greenhouse policy launched in 2016 regarding the construction of 2000 ha of reinforced greenhouses within five years. Taking the agricultural loss in 2020 released by the Council of Agriculture in Taiwan as an example, under the scenario that all 2000 ha of greenhouses could be equipped with the SMCS, the agricultural loss caused by extreme weather events would be significantly reduced by 22% (=2000 (greenhouse area in ha)/9097 (total damaged area in ha)) on average. Besides, resource saving in water and energy would achieve 1,109,918 tons (=((86,516 kg/1560 m 2 ) × 10,000) × 2000 ha/1000) and 771,795 kWh (=((60.2 kWh/1560 m 2 ) × 10,000) × 2000 ha), respectively (Table 8). This suggests the smart greenhouse microclimate-control practice bears high potential for tackling climate change and can significantly promote the nexus synergies among water, energy, and food, especially when encountering extreme weather events.
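The scaling behind the 22% loss-reduction estimate and the nationwide resource savings can be reproduced from the figures quoted in the text; the per-greenhouse savings are taken from Table 8 and simply scaled to the 2000 ha policy target.

```python
damaged_area_2020 = 9097.0     # ha, total damaged agricultural area in 2020
target_area = 2000.0           # ha, governmental greenhouse policy target

loss_reduction = target_area / damaged_area_2020               # ~0.22

water_saved_per_greenhouse = 129478.0 - 42962.0                # 86,516 kg per 1560 m2
energy_saved_per_greenhouse = 90.0 - 29.8                      # 60.2 kWh per 1560 m2

water_saving_tons = water_saved_per_greenhouse / 1560.0 * 10000.0 * target_area / 1000.0
energy_saving_kwh = energy_saved_per_greenhouse / 1560.0 * 10000.0 * target_area

print(f"loss reduction ~{loss_reduction:.0%}, "
      f"water ~{water_saving_tons:,.0f} t/yr, energy ~{energy_saving_kwh:,.0f} kWh/yr")
```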
Contributions of the SMCS
The proposed SMCS makes two main contributions. Firstly, for maintaining an environment suitable for crop growth, the traditional greenhouse-spraying system requires monitoring sensors like IoT devices to detect the internal temperature or relative humidity for switching sprayers on/off. Nevertheless, this may impose the risk of an unsuitable environment on greenhouse farming between two time-steps. For example, the operational time interval was one hour in this study. Even if the greenhouse environment complies with the suitability conditions of crop growth at the current minute, it may violate the suitability conditions in the next minute. In contrast, the SMCS can predict the greenhouse microclimate for the next hour well, thereby spraying in advance to prevent an unsuitable environment for crop growth. Besides, the SMCS avoids using IoT sensors because the extra hardware and maintenance costs of the monitoring devices also place a heavy burden on greenhouse owners. Secondly, the SMCS consumes fewer resources of water and energy (electricity) when spraying for environmental cooling than the traditional method, indicating that the SMCS can mitigate greenhouse-gas emissions. Low resource consumption also represents cost-effectiveness and relatively high profits, leading to more commercial value that can be achieved by the SMCS.
The greenhouse-management practice developed can be applied to crops and areas of interest with adequate modification of the environmental suitability for crop growth. Similar methodology for developing the SMCS can also be applied to different greenhouse types. Future research can consider incorporating crop evapotranspiration, soil-moisture content, nutrients, and fertilization into the SMCS to increase the prediction accuracy of the greenhouse environment and promote crop productivity and quality. Ventilation is also a major factor in the control of greenhouse temperature. In future research, ventilation will be considered by incorporating the greenhouse-control factors (e.g., skylight, roller shades on each wall, and inner shade net) into the proposed water-centric smart microclimate-control system (SMCS) to increase its operational efficiency and effectiveness.
Conclusions
This study proposed a water-centric smart microclimate-control system (SMCS) for greenhouse farming, with a mission to manage the microclimate through efficient spraying for environmental cooling. The SMCS can maintain stable crop productivity when extreme weather events occur. The SMCS can determine the necessity of spraying for environmental cooling according to the predictions of greenhouse internal temperature and relative humidity. The results demonstrate that the SMCS could achieve the same environmental-control effect as the traditional one while consuming far fewer resources for spraying, which makes greenhouse farming move towards carbon-emission mitigation and sustainable management of the water-energy-food nexus. There are four main findings drawn from this study, shown below.
Firstly, the cost of sensor installation is a major concern for farmers in Taiwan, especially concerning device investment and maintenance issues. The BPNN model could (Figure 1) predict greenhouse microclimate based on external climate conditions with less water and energy. After the BPNN model is constructed, this science-based management practice requires no in situ monitoring sensors, which favorably lessens greenhouse owners' investment in environmental control and makes a positive contribution to the overall cost-benefit ratio of greenhouse farming. The physically based model engaging the internal hydro-meteorological process could produce satisfactory accuracy and reliability in estimating greenhouse microclimate, despite it performing slightly worse than the BPNN prediction model.
Secondly, the SMCS could predict the greenhouse internal environment well one hour ahead and spray in advance when needed for environmental cooling, which prevents crops from being exposed to an unsuitable cultivation environment.
Thirdly, the SMCS could achieve savings as high as 66.8% of water and energy compared to the traditional method. Therefore, the SMCS gains more commercial value than the traditional method because low resource consumption means low production cost and relatively high profits.
Fourthly, the reduction in agricultural loss caused by extreme weather events in 2020 would reach 22% if the SMCS could be implemented in 2000 ha of greenhouses (the goal of the governmental greenhouse policy launched in 2016 in Taiwan). This would lead to effective resource saving in water and energy of 1,109,918 tons and 771,795 kWh per year, respectively. This greenhouse-control strategy significantly contributes to environmental sustainability and greenhouse-gas-emission mitigation.
This study suggests a practicability niche in machine-learning-enabled greenhouse automation with improved crop productivity and resource-use efficiency. The proposed SMCS substantially moves greenhouse farming towards the SDGs in the perspectives of food security, natural-resource preservation, and environmental sustainability.
"Environmental Science",
"Engineering",
"Agricultural and Food Sciences",
"Computer Science"
] |
Optimal proteome allocation strategies for phototrophic growth in a light-limited chemostat
Background Cyanobacteria and other phototrophic microorganisms allow the light-driven assimilation of atmospheric CO2 to be coupled directly to the synthesis of carbon-based products, and are therefore attractive platforms for microbial cell factories. While most current engineering efforts are performed using small-scale laboratory cultivation, the economic viability of phototrophic cultivation also crucially depends on photobioreactor design and culture parameters, such as the maximal areal and volumetric productivities. Based on recent insights into the cyanobacterial cell physiology and the resulting computational models of cyanobacterial growth, the aim of this study is to investigate the limits of cyanobacterial productivity in continuous culture with light as the limiting nutrient. Results We integrate a coarse-grained model of cyanobacterial growth into a light-limited chemostat and its heterogeneous light gradient induced by self-shading of cells. We show that phototrophic growth in the light-limited chemostat can be described using the concept of an average light intensity. Different from previous models based on phenomenological growth equations, our model provides a mechanistic link between intracellular protein allocation, population growth and the resulting reactor productivity. Our computational framework thereby provides a novel approach to investigate and predict the maximal productivity of phototrophic cultivation, and identifies optimal proteome allocation strategies for developing maximally productive strains. Conclusions Our results have implications for efficient phototrophic cultivation and the design of maximally productive phototrophic cell factories. The model predicts that the use of dense cultures in well-mixed photobioreactors with short light-paths acts as an effective light dilution mechanism and alleviates the detrimental effects of photoinhibition even under very high light intensities. We recover the well-known trade-offs between a reduced light-harvesting apparatus and increased population density. Our results are discussed in the context of recent experimental efforts to increase the yield of phototrophic cultivation.
Background
Phototrophic microorganisms such as microalgae and cyanobacteria hold significant potential for the production of industrially or medically relevant compounds, such as pigments, organic acids, or alcohols [17,56], as well as secondary metabolites used for pharmaceutical purposes [26,36]. The interest in cyanobacteria as platforms for microbial cell factories originates from their capability for carbon-neutral production, easy accessibility for genetic manipulation, and their relatively fast growth rates compared to land plants. A major challenge of cultivating phototrophic microorganisms on a commercial scale, however, is still the low biomass density, and hence low volumetric productivity, compared to other biotechnologically relevant microorganisms [27,29,51,52].
Previous research has established the critical role of the photobioreactor design for improving the overall performance with a focus on parameters such as mixing rates, gas exchanges, temperature, pH, as well as light paths [22,40,41]. In particular, there has been significant progress to model phototrophic culture systems making use of sophisticated computational methods to describe reactor geometry, light transfer, and fluid dynamics [1,9,16,38]. Concomitantly, there have been significant efforts to obtain a better quantitative understanding of the photosynthetic productivity of cyanobacterial growth in photobioreactors [8,43].
However, despite this progress, there remains a need for an improved computational framework to better understand the physiological acclimation of cyanobacteria in a heterogeneous light environment typically encountered in dense cultures. In this respect, we can build upon an established theory of the light-limited chemostat, originally developed by Huisman et al. [23] and later refined by other authors [18,30,31]. These previous analyses, however, were almost all based on phenomenological growth models, such as the Monod or Haldane-type equation, and only few works, such as the computational analysis of He et al. [20], explicitly integrate intra-and extracellular information to achieve a better understanding of bioreactor productivities.
The purpose of this work is therefore to integrate a recent coarse-grained model of cyanobacterial growth into a model of population dynamics within a light-limited chemostat. The coarse-grained computational model was previously parametrized using an in-depth quantitative analysis of cyanobacterial growth in an optically thin turbidostat [54], and describes the relationship between intracellular protein allocation and cellular growth. Based on our previous experimental analyses [15,54], our premise is that the model provides a reasonable description of cyanobacterial growth under different light intensities-and therefore represents a suitable starting point to investigate the relationship between the allocation of intracellular proteins, light absorption, self-shading, growth rate, and overall culture productivity. Combining our model of cyanobacterial growth with a model of a light-limited chemostat therefore allows us to computationally investigate and compare different possible proteome allocation strategies, such as maximization of growth rate versus maximization of culture productivity, and provides insights into optimal strain design strategies.
Our results have profound consequences for the design of photobioreactors. The model predicts that high population densities alleviate the detrimental effects of photoinhibition even under very high light intensities. The results therefore strongly support previous works by Richmond [41] and Qiang et al. [40] who showed that a high areal phototrophic productivity can be achieved using reactors maximally exposed to light with a short light-path and turbulent mixing. We further recover the well-known tradeoffs between a reduced light-harvesting apparatus and increased population density, and hence higher volumetric productivity. Our approach provides a general computational framework to integrate and solve models of cyanobacterial proteome allocation in a light-limited chemostat.
The paper is organized as follows: in the first two sections, we briefly introduce a model of the light-limited chemostat. In the subsequent sections, we describe the coarse-grained model of phototrophic growth and its solution using the assumption of parsimonious protein allocation. We then integrate both models and show that, as a nontrivial result, phototrophic growth in a light-limited chemostat can be described using the concept of an average light intensity. We then investigate culture properties, such as light attenuation, population density, and volumetric productivity, as well as the emergent bistability of the culture induced by photoinhibition. In the subsequent section, we consider hypothetical strains whose protein allocation is optimized for maximal culture productivity-and highlight differences to proteome allocation in wild-type cells. Finally, we consider engineering strategies for heterologous production.
A model of the light-limited chemostat
To investigate cellular proteome allocation in dense cultures, we make use of a mathematical description of continuous cultivation in a chemostat, as originally described by Novick and Szilard [37] and, independently, by Monod [35]. The dynamics of the population density ϱ (in units of cells per milliliter) of genetically identical and well mixed cells is described by the differential equation

dϱ/dt = (µ − D) · ϱ,

where µ denotes the specific cellular growth rate and D denotes the dilution rate of the culture medium. Figure 1 illustrates the concept of the chemostat.
Fresh medium and dissolved nutrients are continuously fed into the culture at the same rate as culture medium is removed, resulting in a constant operating volume. All dissolved nutrients are well mixed within the culture medium. The dynamics of the concentration of a nutrient s depends on the inflow and outflow rates, as well as the uptake rate of the microorganisms,

d[s]/dt = V_in,s − D · [s] − (µ / Y_s) · ̺,    (2)

where V_in,s denotes the inflow rate of the nutrient s, with V_in,s = [s_0] · D for a soluble nutrient that is supplied with a concentration [s_0] via the medium. Gaseous nutrients, such as CO_2, are supplied by sparging. The uptake rate of nutrients by the microorganisms is typically assumed to be proportional to the specific growth rate µ, with the yield coefficient Y_s denoting the number of cells obtained per nutrient molecule.
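As a minimal illustration of Eqs. (1) and (2), the following Python sketch integrates the chemostat balances for a single growth-limiting nutrient using a Monod-type rate law; the rate law and all numerical values are placeholders chosen for illustration, not the growth model used in this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (illustrative only)
D, s0, Y = 0.05, 100.0, 1e7        # dilution rate (1/h), feed concentration, cells per nutrient unit
mu_max, K_s = 0.1, 5.0             # Monod-type growth parameters (placeholder rate law)

def chemostat(t, y):
    rho, s = y
    mu = mu_max * s / (K_s + s)            # placeholder specific growth rate
    drho = (mu - D) * rho                  # Eq. (1)
    ds = D * s0 - D * s - mu * rho / Y     # Eq. (2), with V_in,s = s0 * D
    return [drho, ds]

sol = solve_ivp(chemostat, (0.0, 500.0), [1e6, s0])
rho_end, s_end = sol.y[:, -1]
print(f"after {sol.t[-1]:.0f} h: rho ~ {rho_end:.3g} cells/mL, s ~ {s_end:.3g}")
```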
Light attenuation in the chemostat
Different to other potentially limiting nutrients, light cannot be homogeneously distributed through the culture by vigorous mixing. Following Huisman et al. [23] and others [4,18,20,30,31], we describe light absorption according to the law of Lambert-Beer, i.e., we assume that light absorption is proportional to the concentration of light-absorbing substances in the medium (including cells) and the local light intensity. The light intensity I(z) at a depth z is then given by

I(z) = I_0 · exp( −(α · ̺ + K_bg) · z ),    (3)

where I_0 denotes the incident light intensity at the surface, α denotes the species-specific light attenuation coefficient per cell, and K_bg denotes the background turbidity of the medium including all other light-absorbing substances [23]. We note that Lambert-Beer's law is an approximation and neglects aspects such as backscattering. In the following, we further assume monochromatic light and consider light as the only limiting nutrient. The latter assumption is motivated by the fact that in biotechnological applications mineral nutrients are typically supplied in sufficient quantities. Suitable strategies to supply CO_2 to dense cultures were recently proposed [3,29].

Fig. 1 A model of the light-limited chemostat. a Schematic representation. The light source irradiates a culture vessel of depth z_m. The culture is aerated with CO_2-enriched air and nutrients are well mixed. Different from other nutrients, the photon flux is inhomogeneous and decays exponentially with depth. b A coarse-grained single-cell model. The model describes carbon assimilation and metabolism, light harvesting, photosynthesis, and protein translation by ribosomes. External inorganic carbon c_i^x is transported into the cell (v_t), assimilated (v_c) into organic carbon precursors c_3 from which amino acids aa are synthesized (v_m). Amino acids serve as precursors for protein synthesis (γ_j). The model consists of seven coarse-grained protein complexes, including ribosomes R, transport proteins E_T, metabolic enzymes E_C, E_M, E_Q, photosynthetic units PSU, and quota proteins P_Q. All catalyzed reactions are fueled by energy units e that are produced by activated photosynthetic units PSU* (v_2). Activation of resting photosynthetic units PSU_0 is facilitated by light. High light intensities cause photodamage (v_i), i.e., the degradation of PSU into its constituent amino acids. The model also incorporates a general protein degradation term (d_p), as well as an energy maintenance reaction (v_me). c The optimized specific growth rate as a function of light intensity, shown together with experimental values for Synechocystis sp. PCC 6803 obtained from quantitative growth experiments in an optically thin turbidostat [15,54]. d Proteome allocation within the system is formulated as an optimization problem (parsimonious allocation) such that the ribosome fractions β_j translating the different proteins give rise to a maximal specific growth rate µ.
To solve the equations for the light-limited chemostat requires knowledge of the specific growth rate µ(I) as a function of the light intensity (and possibly other nutrients). To this end, previous works typically used well-known phenomenological rate equations, such as the Monod equation in Huisman et al. [23], to describe the light-limited growth of phototrophic microorganisms. Following the original analysis of Huisman et al. [23], Gerla et al. [18] and later Martínez et al. [31] provided a detailed analysis based on a Haldane-type equation,

µ(I) = µ_max · I / (k_1 + I + k_2 · I²),    (4)

where k_1 and k_2 denote species-specific parameters and µ_max the maximal growth rate in the absence of photoinhibition (k_2 = 0). Equation (4) can be derived using a simple model of photoinhibition [14,19]. See, for example, Westermark and Steuer [50] for a review.
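As an illustration of Eqs. (3) and (4), the following minimal Python sketch evaluates the Lambert-Beer light profile and a Haldane-type growth curve; all numerical values (incident light, attenuation, k_1, k_2, µ_max) are arbitrary placeholders chosen for demonstration and are not fitted parameters from this study.

```python
import numpy as np

def light_profile(z, I0, alpha, rho, k_bg):
    """Lambert-Beer light intensity at depth z (Eq. 3)."""
    return I0 * np.exp(-(alpha * rho + k_bg) * z)

def growth_haldane(I, mu_max, k1, k2):
    """Haldane-type specific growth rate with photoinhibition (Eq. 4)."""
    return mu_max * I / (k1 + I + k2 * I**2)

if __name__ == "__main__":
    # Placeholder values for illustration only
    I0, alpha, rho, k_bg, zm = 440.0, 1e-10, 1e9, 0.06, 2.4   # light, cm2/cell, cells/mL, 1/cm, cm
    z = np.linspace(0.0, zm, 5)
    I = light_profile(z, I0, alpha, rho, k_bg)
    mu = growth_haldane(I, mu_max=0.1, k1=50.0, k2=1e-3)
    for zi, Ii, mi in zip(z, I, mu):
        print(f"z = {zi:4.1f} cm   I = {Ii:7.1f}   mu = {mi:.4f} 1/h")
```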
A coarse-grained model of phototrophic growth
Our aim is to replace the phenomenological growth equations used in previous works with a mechanistic model of cyanobacterial growth. To this end, we utilize the coarse-grained model of Faizi et al. [15] with minor modifications as described in the "Methods" section. The model describes phototrophic growth of a single cyanobacterial cell in an optically thin culture and was recently subject to an in-depth analysis based on quantitative growth experiments [54]. In contrast to phenomenological growth models, the model of Faizi et al. [15] describes growth in terms of the expression of a (coarse-grained) proteome and accounts for the acclimation of cells to different light intensities. Based on our previous experimental analysis [54], we consider the model to be a reasonable description of cyanobacterial growth, and therefore a suitable starting point to investigate growth in a light-limited chemostat.
The model is conceptually similar to other recent models of cellular resource allocation [5,24,34,49] and describes the uptake and conversion of extracellular nutrients into metabolic precursors (metabolism) as well as the synthesis of proteins from these metabolic precursors (gene expression). The dynamics of all cellular constituents are modelled as ordinary differential equations (ODEs). In brief, the model consists of 13 ODEs that describe the dynamics of 7 intracellular protein complexes and 5 intracellular metabolites: inorganic carbon c i is taken up and assimilated into the metabolite c 3 , which serves as a precursor for the synthesis of amino acids aa and other cellular components c q . Proteins are translated by ribosomes using available amino acids and cellular energy. Energy is provided by a photosynthetic unit PSU that integrates light harvesting and the electron transport chain. The cellular energy unit e combines chemical energy and reductant (ATP and NADPH, respectively). Light absorption induces photodamage that results in a (light-dependent) degradation of PSU back into its constituent amino acids. The model is depicted in Fig. 1 and a detailed description of the model equations is provided in "Methods" section.
Parsimonious resource allocation and growth
Similar to other models of cellular resource allocation, the model does not assume knowledge of regulatory interactions but is formulated as an optimization problem that is solved based on the principle of parsimonious allocation of cellular resources to achieve a maximal growth rate. That is, we maximize the cellular growth rate under (steady-state) balanced growth conditions by varying the fractions β_j of ribosomes that translate specific proteins P_j. The fractions β_j govern the abundance of the respective proteins, and a different allocation of intracellular proteins gives rise to different physiological properties and growth rates under different environmental conditions. Hence, our framework goes beyond phenomenological growth functions and allows us to study the consequences of different proteome allocation strategies, including allocation strategies that are optimized for maximal culture productivity, as well as the trade-offs that arise from a heterologous synthesis and excretion of a metabolic compound of interest.
Assuming steady-state conditions and balanced growth, all intracellular components are subject to the mass balance constraint [11,15],

N · v = µ · x,    (5)

where N denotes the stoichiometric matrix, v the vector of reaction fluxes, and x the vector of intracellular concentrations (including proteins). Equation (5) implies that the product of the stoichiometric matrix N and the vector of reaction fluxes v equals the dilution of intracellular compounds due to growth. With ω denoting the vector of specific weights of each intracellular compound, and the (reasonable) assumption of a constant cell density D_c = ω · x, the specific cellular growth rate is given by

µ = (ω · N · v) / D_c.    (6)

Figure 1c shows the resulting maximal growth rate in dependence of the light intensity I. The growth curve emerges from the coarse-grained model using the assumption of parsimonious protein allocation and is in good agreement with Eq. (4), as well as with experimentally determined growth curves obtained in an optically thin turbidostat. See Faizi et al. [15] and Zavřel et al. [54] for further discussion.
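The relation between Eqs. (5) and (6) can be illustrated with a toy two-metabolite, two-reaction network; the stoichiometric matrix, fluxes, and weights below are invented for illustration and are not the coarse-grained model of Faizi et al. [15].

```python
import numpy as np

# Toy example with invented numbers: two intracellular compounds, two reactions
N = np.array([[1.0, -1.0],    # compound 1: produced by reaction 1, consumed by reaction 2
              [0.0,  1.0]])   # compound 2: produced by reaction 2
v = np.array([2.0, 1.0])      # steady-state reaction fluxes (arbitrary units)
omega = np.array([1.0, 3.0])  # specific weights of the two compounds
D_c = 10.0                    # assumed constant cell density, D_c = omega . x

# Eq. (6): specific growth rate from the weighted net production
mu = omega @ (N @ v) / D_c
print(f"specific growth rate mu = {mu:.3f}")

# Eq. (5): the steady-state concentrations follow from N v = mu * x
x = N @ v / mu
print("steady-state concentrations x =", x)
print("check: omega . x =", omega @ x)   # equals D_c by construction
```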
Phototrophic growth in the light-limited chemostat
To describe phototrophic growth in a light-limited chemostat, we aim to incorporate the coarse-grained growth model into a heterogeneous light environment induced by self-shading of the culture. According to Eq. (3), the local light intensity decreases exponentially as a function of vessel depth z_m and depends on the density of light-absorbing organisms ̺ and their species-specific light attenuation coefficient α. Within our model, the species-specific cellular light attenuation coefficient is given by

α = α_0 + σ · PSU_tot,    (7)

where α_0 denotes a basal light absorption per cell independent of photosynthesis, and the product σ · PSU_tot describes the absorption per cell for photosynthesis, with σ denoting the effective cross section per PSU. The specific light attenuation coefficient α therefore depends on the expression of the protein complex PSU, and hence on the acclimation state of the cell.
We further assume that the culture is rapidly mixed, i.e., we only consider a single cell type and acclimation state within the culture. The concentrations of intracellular compounds do not depend on the (momentary) position of a cell within the chemostat. Individual metabolic reactions, however, in particular reactions that directly depend on light, will proceed with rates that depend on the local light intensity-the overall metabolism is required to be balanced with respect to energy uptake and growth. As noted by Pirt [39], this assumption implies a certain buffering capacity to permit each cell to grow with a constant rate even though it is intermittently exposed to radiation.
To solve the model, we consider the steady-state condition for the chemostat, Eq. (1), and integrate over the vessel depth z_m,

0 = [ (1/z_m) · ∫_0^{z_m} µ(I(z)) dz − D ] · ̺.    (8)

Using the definition of the specific growth rate, Eq. (6), we obtain an expression for the effective growth rate μ in the chemostat,

μ = (ω · N · v_avg) / D_c,    (9)

with

v_avg = (1/z_m) · ∫_0^{z_m} v(I(z)) dz.    (10)

Using a substitution of variables, as suggested by Huisman et al. [23], Eq. (10) can be rewritten as an integral over light intensity and solved analytically for all reaction rates (see "Methods" section). The solution reveals that it is possible to express the effective specific growth rate μ in the light-limited chemostat as a function of an effective average light intensity Î,

Î = (I_0 − I(z_m)) / (ln(I_0) − ln(I(z_m))).    (11)

Since ln(I_0) − ln(I(z_m)) = (α · ̺ + K_bg) · z_m, the average light intensity Î depends on the incident light intensity I_0, the cell-specific attenuation coefficient α, the background turbidity K_bg, the population density ̺ and the vessel depth z_m. As already noted by Huisman et al. [23], the value of Î can also be readily estimated experimentally from measuring the incident and transmitted light intensities, I_0 and I(z_m), respectively.
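A minimal sketch of Eq. (11): the average light intensity computed from incident and transmitted light, with the transmitted intensity obtained from Eq. (3). All numbers are illustrative placeholders.

```python
import numpy as np

def average_light_intensity(I0, I_out):
    """Eq. (11): log-mean of incident and transmitted light intensity."""
    return (I0 - I_out) / (np.log(I0) - np.log(I_out))

# Illustrative placeholder values
I0 = 440.0            # incident light intensity (uE m-2 s-1)
alpha_rho = 0.8       # alpha * rho, light attenuation by cells (1/cm)
k_bg = 0.06           # background turbidity (1/cm)
z_m = 2.4             # vessel depth (cm)

I_out = I0 * np.exp(-(alpha_rho + k_bg) * z_m)   # transmitted light, Eq. (3)
print("transmitted I(z_m) =", round(I_out, 2))
print("average light I_hat =", round(average_light_intensity(I0, I_out), 2))
```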
The solution of Eq. (10) is a nontrivial result and crucially depends on the assumption of rapid mixing and the fact that light absorption and photoinhibition are modelled as first-order reactions. There is indeed significant empirical evidence for the latter assumption [6,44,45], even though it runs contrary to the common belief that photoinhibition does not occur under low light. While previous models also used the concept of an average light intensity as a convenient approximation [7,13], the description emerges here as a consequence of the functional form of the intracellular rate equations.
Growth, population density and bistability
Given the definitions above, a solution of the model of the light-limited chemostat requires solving the steady-state equation 0 = μ(β, x, Î) · ̺ − D · ̺ for the effective growth rate μ as a function of the average light intensity Î. A solution requires knowledge about the cellular proteome allocation, or, as described above, a suitable optimization objective. As our first optimization scenario, we therefore assume that the cyanobacterial cells acclimate to the average light intensity Î only. That is, we assume that the cells have no explicit information about the culture density or other culture parameters, but adjust their intracellular proteome such that it maximizes the effective growth rate μ for the respective average light intensity Î. This allocation strategy is identical to the proteome allocation strategy previously used in Faizi et al. [15], with results that are in excellent agreement with measurements in an optically thin turbidostat [54].
Our premise is therefore that the cyanobacterial wild-type (WT) strain has evolved to allocate its proteome such that the specific growth rate in the respective light environment is maximized. We denote this optimization objective as the WT-strategy. Figure 2 illustrates the solution obtained for the light-limited chemostat under the WT-strategy. All extracellular culture parameters are summarized in Table 1. Figure 2a shows the maximal effective growth rate μ as a function of the effective average light intensity Î. For any value of Î, sub-optimal proteome allocation strategies result in growth rates beneath the curve (indicated by the shaded area in Fig. 2a). The maximal value of the effective average light intensity Î is bound from above by (a function of) the incident light intensity I_0. Figure 2a also indicates that the WT-strategy is (evolutionarily) stable with respect to changes in proteome allocation: any sub-optimal proteome allocation strategy results in a lower growth rate for the respective average light intensity. The respective strain would be outcompeted by a strain that attains a higher specific growth rate at the same average light intensity. Figure 2b shows the effective growth rate μ as a function of the population density. We note that the values are identical to those shown in Fig. 2a; μ is not subject to any explicit optimization with respect to the population density ̺.
In contrast to phenomenological models, the assumption of parsimonious protein allocation implies that cells acclimate to different average light intensities Î. The respective changes in the cellular light attenuation coefficient α are shown in Fig. 2c. Higher average light intensities result in a lower expression of photosynthetic units, and hence in lower values of the cellular light attenuation coefficient α. Figure 2d shows the culture productivity P_E = μ · ̺ as a function of the population density for different incident light intensities I_0. The steady-state productivity is given by the intersection between the curve and the straight line defined by D · ̺. The culture productivity P_E has a maximum for intermediate values of D that depends on the incident light intensity I_0.
The chemostat is in steady state when the effective growth rate μ equals the dilution rate D. For any incident light intensity I_0, we must therefore distinguish between three possible cases (see Fig. 2a, b): (i) For a sufficiently low dilution rate D, there is a single steady state. In this case, the average light intensity Î corresponds to the nutrient concentration in the classical chemostat. Any perturbation towards a higher culture density reduces the average light intensity, resulting in a lower growth rate and hence a decreasing culture density: the steady state is stable. (ii) For a dilution rate D that exceeds the maximal growth rate μ_max, no positive steady state is feasible and the culture is washed out (̺ = 0). (iii) For intermediate values of D, and for sufficiently high I_0, the effects of photoinhibition induce a second potential steady state: the chemostat is bistable. In the second state, however, an increase in the average light intensity results in a decrease of the effective growth rate, resulting in a decrease of the population density, and hence a further increase in the resulting average light intensity: the second steady state is unstable and the culture is washed out (̺ = 0).
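The steady-state condition μ(Î(̺)) = D can be explored numerically. The sketch below uses the Haldane-type growth curve of Eq. (4) as a stand-in for the optimized growth curve and scans the population density for sign changes of μ − D; all parameter values are illustrative placeholders, not fitted values from this study. For the chosen values, two positive steady states coexist with the washout state, illustrating the bistability discussed above.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative placeholder parameters
I0, k_bg, z_m, alpha = 1000.0, 0.06, 2.4, 1e-9     # light, turbidity, depth, attenuation per cell
mu_max, k1, k2 = 0.1, 50.0, 2e-3                   # Haldane-type growth parameters
D = 0.05                                           # dilution rate (1/h)

def I_out(rho):
    return I0 * np.exp(-(alpha * rho + k_bg) * z_m)          # Eq. (3)

def I_avg(rho):
    return (I0 - I_out(rho)) / (np.log(I0) - np.log(I_out(rho)))  # Eq. (11)

def mu(rho):
    I = I_avg(rho)
    return mu_max * I / (k1 + I + k2 * I**2)                  # Eq. (4) as stand-in

# Scan densities for sign changes of mu(rho) - D and refine each root with brentq
grid = np.logspace(5, 11, 400)
f = np.array([mu(r) - D for r in grid])
roots = [brentq(lambda r: mu(r) - D, a, b)
         for a, b, fa, fb in zip(grid[:-1], grid[1:], f[:-1], f[1:]) if fa * fb < 0]
print("steady-state densities (cells/mL):", [f"{r:.3g}" for r in roots])
```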
These results recapitulate the results previously obtained for phenomenological rate equations. In particular, Gerla et al. [18] and others [30,31] provided a detailed theoretical analysis of the light-limited chemostat using Haldane-type models and highlight the consequences of bistability induced by photoinhibition. In the following, we focus on the stable steady state only and make use of the plasticity of our model to investigate different proteome allocation strategies.
Maximizing photosynthetic productivity
For many biotechnological applications the overall culture productivity is a crucial process parameter that determines the economic viability of phototrophic cultivation. We are therefore interested in the maximal volumetric productivity of a light-limited chemostat, as well as the optimal proteome allocation strategy to achieve maximal productivity, and how this strategy differs from proteome allocation in wild-type cells. To this end, we consider a hypothetical strain that is engineered (or selected) to adjust its intracellular protein allocation such that it maximizes the steady-state volumetric biomass productivity of the culture, defined as

P_E = D · ̺.    (12)

We denote this optimization objective as the P_E-strategy. Figure 3a shows the maximal biomass productivity of the hypothetical P_E-strain as a function of the dilution rate D for two different incident light intensities I_0.
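Building on the previous sketch, the following illustrates how the productivity P_E = D · ̺ (Eq. 12) can be scanned over the dilution rate to locate a maximally productive operating point; the growth curve and all parameter values are again placeholders rather than the optimized proteome-allocation model.

```python
import numpy as np
from scipy.optimize import brentq

# Placeholder growth model: Haldane-type rate evaluated at the average light intensity.
# All parameter values are illustrative, not fitted values from this study.
I0, k_bg, z_m, alpha = 1000.0, 0.06, 2.4, 1e-9
mu_max, k1, k2 = 0.1, 50.0, 2e-3

def mu_of_rho(rho):
    ext = (alpha * rho + k_bg) * z_m            # optical depth, ln(I0 / I(z_m))
    I_avg = I0 * (1.0 - np.exp(-ext)) / ext     # Eq. (11) combined with Eq. (3)
    return mu_max * I_avg / (k1 + I_avg + k2 * I_avg**2)

def stable_density(D):
    """Largest root of mu(rho) = D (stable steady state); 0 if the culture washes out."""
    grid = np.logspace(5, 12, 500)
    f = np.array([mu_of_rho(r) - D for r in grid])
    idx = np.where(f[:-1] * f[1:] < 0)[0]
    if len(idx) == 0:
        return 0.0
    i = idx[-1]                                  # largest-density crossing = stable branch
    return brentq(lambda r: mu_of_rho(r) - D, grid[i], grid[i + 1])

dilution_rates = np.linspace(0.005, 0.06, 40)
productivity = [D * stable_density(D) for D in dilution_rates]   # Eq. (12), in cells/(mL h)
best = int(np.argmax(productivity))
print(f"optimal D ~ {dilution_rates[best]:.3f} 1/h, P_E ~ {productivity[best]:.3g} cells/(mL h)")
```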
The curves reflect a trade-off between maximizing the dilution rate and maximizing the culture density ̺. With increasing dilution rates (Fig. 3b) the steady-state population density decreases, resulting in a maximal biomass productivity P_E for intermediate values of D. We note that the dilution rates at which the maximal productivity is attained are well below the maximal growth rate of the strain. For example, for an incident light intensity of I_0 = 440 µE m⁻² s⁻¹, the maximal productivity P_E = 0.16 gDW L⁻¹ h⁻¹ is achieved for a dilution rate D_opt = 0.026 h⁻¹. Figure 4a shows the optimal dilution rate D_opt for different incident light intensities I_0. The dilution rate D_opt for which the maximal productivity is attained initially increases with the incident light intensity and converges to D_opt ≈ 0.027 h⁻¹. Similarly, as shown in Fig. 4b, the photosynthetic efficiency Y_E, defined as the photosynthetic yield in gDW per mol photons [57],

Y_E = (P_E · z_m) / I_0,    (13)

first increases with increasing incident light intensity and then saturates at Y_E ≈ 2.37 gDW mol photons⁻¹, approximately double the measured efficiencies of Touloupakis et al. [43] (see "Methods" section for a discussion of error ranges). These results suggest that the operation of a light-limited chemostat is more efficient at high light intensities and that the efficiency does not decrease at incident light intensities under which the individual cells already exhibit strong photoinhibition.

Fig. 2 Properties of the light-limited chemostat. a The maximal effective growth rate μ as a function of the average light intensity. The shaded area indicates sub-optimal proteome allocation strategies. The average light intensity Î is bound from above by (a function of) the incident light intensity I_0. The limits for three different incident light intensities are shown. A steady state is attained if the effective growth rate μ equals the dilution rate. b The maximal effective growth rate μ as a function of the population density for three different incident light intensities I_0. Higher incident light intensities result in a higher steady-state population density for an identical dilution rate D. For certain dilution rates, bistability emerges. c The cellular light attenuation coefficient α as a function of the average light intensity Î. Parsimonious allocation results in an acclimation of cells to different light intensities. d The culture productivity P_E = μ · ̺ as a function of the population density for three incident light intensities I_0. At steady state, the effective growth rate equals the dilution rate, hence P_E = D · ̺. Higher incident light intensities result in a higher steady-state productivity.
Finally, we consider the impact of the vessel depth z_m on the (maximal) productivity. As expected, and as shown in Fig. 4c, the culture density and hence the volumetric biomass productivity decreases with increasing vessel depth. However, the productivity per surface area (as well as the total biomass within the bioreactor) remains approximately constant for different vessel depths. The small decrease of the per-surface-area productivity is due to the increasing effect of the background absorption [K_bg in Eq. (3)]. Hence, the model predictions agree with previous reports from Qiang et al. [40], Richmond [41], and Cuaresma et al. [10] that cultivation in short light-path bioreactors is advantageous.
Engineering strategies for maximal biomass productivity
Different from phenomenological growth models, the coarse-grained model allows us to investigate the proteome allocation strategies that maximize culture productivity (P_E-strategy), and to compare the respective differences from the allocation strategy that maximizes growth rate (WT-strategy). Figure 5 shows the optimal proteome allocation for both optimization strategies, performed at a dilution rate of D = 0.026 h⁻¹ for an incident light intensity I_0 = 440 µE m⁻² s⁻¹. For simplicity, the figure neglects quota components (55% of cell mass corresponds to non-protein components, represented by the metabolic compound c_q; 50% of protein mass corresponds to quota protein). Cells optimized for culture productivity exhibit a reduced expression of the photosynthetic unit (PSU), and instead accumulate free metabolites. Other proteome components are similar for both optimization strategies. Figure 6 provides a more detailed comparison between both optimization strategies for different dilution rates D. The results are shown as a function of the resulting average light intensity. Figure 6a shows that strains optimized for culture productivity (P_E-strategy) exhibit a slightly lower effective growth rate μ as a function of the effective average light intensity compared to strains optimized for maximal growth (WT-strategy). This difference is due to different proteome allocation. As shown in Fig. 6b, cells optimized for culture productivity also exhibit a lower light attenuation coefficient α. Correspondingly, the culture exhibits a higher population density ̺ at the same effective light intensity (Fig. 6d). The optical depth of both cultures, defined as θ = α · ̺, remains unchanged (Fig. 6e) and the overall light absorption of both cultures is identical.

Fig. 3 Maximizing photosynthetic productivity. a The maximal volumetric biomass productivity P_E as a function of the dilution rate D for two different incident light intensities I_0. b The population density ̺ as a function of D. The maximal productivity P_E represents a trade-off between a high population density and a high dilution rate D, resulting in a maximal value for intermediate dilution rates, well below the maximal growth rate of the strain.

Fig. 4 Photosynthetic efficiency, optimal dilution, and effects of the mixing depths. a The optimal dilution rate D_opt for which a maximal biomass productivity P_E is attained. D_opt increases with increasing incident light intensity I_0 and saturates at a value D_max ≈ 0.027 h⁻¹. b The photosynthetic efficiency as a function of the incident light intensity for a maximally productive chemostat. The photosynthetic efficiency increases with increasing incident light intensity and saturates at Y_E,max ≈ 2.474 gDW mol photons⁻¹. c The consequences of different mixing depths on the maximal biomass productivity P_E. While the volumetric productivity decreases, the surface productivity P_E · z_m remains approximately constant. The slight decrease is due to the increasing effect of the background turbidity.
The difference in proteome allocation as a function of the dilution rate is again shown in Fig. 6c: the lower light absorption coefficient of cells optimized for maximal culture productivity results from the fact that these cells express less PSU. The differences in protein allocation between both strains are restricted to low dilution rates (including the dilution rate at which the maximal productivity is attained). For higher dilution rates, both optimization strategies give rise to an identical proteome allocation. The reason for the observed convergence of allocation strategies is that for higher dilution rates D, the cells have to allocate increasing resources to ribosomal and metabolic proteins to match the growth rate imposed by the dilution rate.
As shown in Fig. 6f, however, the quantitative differences between the volumetric productivities of both optimization strategies as a function of the dilution rate are rather small, despite the significant differences in proteome allocation. We note that the absolute quantitative difference in maximal productivity may also be strain-specific and may depend on the parameterization of the model. Figures 5 and 6 show results for an incident light intensity of I_0 = 440 µE m⁻² s⁻¹; all results remain qualitatively identical for other values of I_0.
Sensitivity analysis
To obtain further insight into the extent to which parameters other than proteome allocation affect the maximal productivity, we performed a sensitivity analysis of the maximal productivity with respect to the model parameters. For details on the estimation see "Methods" section. The results, shown in Fig. 7, indicate that parameters that positively influence the growth rate also improve the overall productivity. In particular, a larger catalytic activity τ (catalytic cycles per second) of the PSU increases productivity. Likewise, an increase of the effective cross section σ per PSU results in an increased productivity, contrary to arguments suggested in the context of antennae truncation. The influence of most other catalytic activities k_cat is modest. From a mechanistic perspective, parameter changes that increase the growth rate at a given average light intensity allow the model to attain the required growth rate (set by the dilution rate D) at a lower average light intensity, resulting in a higher culture density and hence a higher productivity. Vice versa, parameters that are expected to negatively affect growth, such as an increased protein degradation constant d_p, an increased basal maintenance v_me, and an increased photoinhibition constant k_d, as well as an increased basal light attenuation α_0, all reduce the maximal productivity.

Fig. 5 Comparison of the cellular composition for different proteome allocation strategies. a P_E-strategy: the protein allocation optimized for maximal culture productivity at a dilution rate of D = 0.026 h⁻¹ with I_0 = 440 µE m⁻² s⁻¹. b WT-strategy: the protein allocation is optimized to give rise to a maximal effective growth rate for the respective average light intensity at the same dilution rate and incident light intensity. The P_E-strategy results in a significant reduction of light-harvesting protein complexes (PSU). Instead, intracellular metabolites are accumulated. For simplicity, the cellular composition is shown without (protein and metabolic) quota components (quota components correspond to ≈ 77.2% of cellular dry weight).
Engineering strategies for heterologous production
In addition to the production of biomass, cyanobacteria are potential host organisms for the light-driven heterologous synthesis of bioproducts. Metabolic engineering for heterologous production, however, requires optimal expression strategies. To explore the trade-offs between growth and optimal product synthesis, we extend the coarse-grained model with an additional enzyme complex E_X (representing a set of heterologous proteins) that catalyzes the synthesis and excretion of a product of interest. In brief, we introduce a reaction v_x, catalyzed by E_X, that uses the carbon precursor c_3 as substrate and exports a metabolite m_x into the extracellular medium. The compound m_x represents a small molecule of interest, such as lactate [2], ethanol [12], or a volatile product [55], whose heterologous production has been achieved in cyanobacteria. For simplicity, we neglect effects of product inhibition or toxicity [25], but these could be readily incorporated into the definition of v_x if the respective data are available. See "Methods" section for model definitions.

Fig. 6 Comparison of cellular and culture properties between the P_E-strategy and WT-strategy. Proteome allocation was optimized for culture productivity for different dilution rates D and I_0 = 440 µE m⁻² s⁻¹, and compared to the respective values obtained for the WT-strategy. a The effective growth rate as a function of the average light intensity Î. The P_E-strategy results in a reduced effective growth rate at low effective light intensities. b The cellular light attenuation coefficient α. c The fraction of protein PSU (light harvesting and photosynthesis) as a fraction of cell mass. d The resulting population density ̺ as a function of the average light intensity. e The optical depth θ = α · ̺ of both cultures. f The overall volumetric productivity of both optimization strategies as a function of the dilution rate. Despite the significant differences in proteome allocation, the differences in volumetric productivity remain small.

Fig. 7 Sensitivity analysis of the maximal productivity P_E with respect to model parameters. Shown is the relative (logarithmic) sensitivity of P_E with respect to variation in kinetic parameters, using an incident light intensity of I_0 = 440 µE m⁻² s⁻¹. The results for other incident light intensities are qualitatively similar. The dilution rate D was allowed to vary as part of the optimization problem.
We are interested in the (maximal) volumetric productivity P_X = v_x · ̺, defined as the synthesis rate per cell multiplied by the population density. We note that the definition of P_X holds independently of how the product is removed from the medium, i.e., whether the product is removed as part of the output flux (D_x = D) or with a separate rate D_x ≠ D. In either case, the mass-balance equation

d[m_x]/dt = v_x · ̺ − D_x · [m_x]    (14)

holds, and the concentration of m_x adjusts accordingly.
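Assuming the mass balance above, the steady-state product concentration follows as [m_x] = v_x · ̺ / D_x, independent of whether the product leaves with the culture outflow or via a separate removal rate. A minimal sketch with placeholder numbers:

```python
# Minimal sketch of the extracellular product balance (Eq. 14); all numbers are placeholders.
v_x = 2.0e5      # synthesis rate per cell (molecules per cell per hour), hypothetical
rho = 5.0e9      # population density (cells per mL), hypothetical
D_x = 0.05       # product removal rate (1/h), here chosen equal to the dilution rate

P_X = v_x * rho                  # volumetric productivity (molecules per mL per hour)
m_x_steady = P_X / D_x           # steady-state product concentration (molecules per mL)

print(f"P_X = {P_X:.3g} molecules/(mL h)")
print(f"steady-state [m_x] = {m_x_steady:.3g} molecules/mL")
```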
We first consider the trade-off between expression of the heterologous protein E_X, biomass productivity (the 'protein burden'), and P_X, respectively. To this end, we force the heterologous expression of the protein E_X within the WT-strain by introducing a lower bound on its concentration (in molecules per cell) as an additional constraint into the optimization problem, and subsequently use the WT-strategy to maximize the effective growth rate. The optimization establishes a 'best case' scenario for growth under the constraint of heterologous expression. Figure 8a shows the resulting trade-off between biomass productivity and expression for three different dilution rates D. As expected, biomass productivity decreases with increasing heterologous expression and there is a maximal expression after which the biomass productivity ceases: the remaining proteome resources are not sufficient to ensure a specific growth rate that matches the dilution rate D and the culture is washed out. Figure 8b shows the resulting productivity P_X as a function of protein expression. The productivity P_X exhibits a maximum for a specific expression of E_X that depends on the dilution rate D.
Beyond enforced expression, we are interested in the predicted proteome allocation of a hypothetical P_X-strain that maximizes the productivity P_X, and how this predicted proteome differs from the proteome of a WT-strain. To this end, P_X is maximized for different dilution rates D as a function of protein expression, i.e., by varying the fractions β_j of ribosomes that translate a specific protein P_j. The results are summarized in Fig. 9. Figure 9a shows the maximal productivity P_X of the optimized P_X-strain as a function of the dilution rate D. The maximal productivity P_X decreases with increasing dilution rates: for heterologous production, cyanobacteria act as catalysts and the maximally productive state of the culture is attained when growth (almost) ceases (i.e., the reactor has a low dilution rate) and all cellular resources are directed to carbon assimilation and product synthesis. This finding holds independently of the removal rate of the product. In practice, however, product inhibition and possible toxicity of the accumulated dissolved products will either prohibit very low dilution rates or necessitate fast removal of the product. Figure 9b shows the optimal heterologous expression necessary for a maximal productivity P_X. Figure 9c, d compare the cellular composition of cells optimized for maximal growth (WT-strategy) and maximal productivity P_X (P_X-strategy) at the same dilution rate D. In the latter case, protein complexes associated with light harvesting and photosynthesis (PSU) are again reduced, whereas protein complexes associated with carbon uptake and assimilation are increased.
Discussion
In this study, our aim was to provide insights into the limits of phototrophic cultivation of microorganisms in a light-limited chemostat. To this end, we built upon an established theory of growth in a light-limited chemostat, as developed by Huisman et al. [23] and others [18,30,31]. Previous analyses, however, primarily relied on phenomenological growth models, such as the Monod or Haldane-type equation. In contrast, our starting point was a mechanistic model of cyanobacterial growth that connects intracellular resource allocation with physiological properties and growth. The model was previously parameterized using data obtained from an (optically thin) turbidostat culture of the cyanobacterium Synechocystis sp. PCC 6803, and was subject to a detailed analysis with respect to the predicted physiological properties as a function of growth rate and light intensity [54]. Our premise was therefore that the model represents a reasonable description of cyanobacterial growth at different light intensities. Our aim was to extrapolate the results obtained from the coarse-grained single-cell model to dense cultures that give rise to strong light gradients due to self-shading, motivated by the hypothesis that growth in an optically dense culture imposes different trade-offs on resource allocation. Following previous works [23], and the experimental setup used by Zavřel et al. [54], we only considered monochromatic light (the computational approach, however, can be straightforwardly extended to different light spectra). Our first step was to integrate the coarse-grained growth model into a model of the light-limited chemostat. The rate equations as a function of the light gradient were solved analytically, resulting in a description of phototrophic growth that only depends on the average light intensity within the chemostat. Such a description was previously utilized by several authors, for example Du et al. [13] and Clark et al. [7], as a reasonable approximation. Our results, however, provide a more stringent justification for this approximation: it emerges as a direct consequence of the model definitions. A crucial prerequisite for this fact is that photodamage is assumed to happen at all light intensities and that the rate constant of photodamage (the degradation of the PSU protein complex that represents the degradation of the D1 protein) is directly proportional to light intensity. There is indeed significant experimental evidence for this assertion: the finding that the rate constant of photodamage is directly proportional to light intensity has been confirmed several times in various organisms [6,32,44-46]. It has already been highlighted [44,45] that the first-order behavior of photoinhibition is not a trivial result, and runs contrary to the belief that photoinhibitory damage does not occur under low light.

Fig. 9 Maximal productivity of heterologous production. We consider a hypothetical strain whose protein allocation maximizes the productivity P_X for different dilution rates D. Results are shown for an incident light intensity I_0 = 440 µE m⁻² s⁻¹. a The maximal productivity P_X decreases with increasing dilution rate D. b The optimal expression of the heterologous protein as a function of dilution rate. c The cellular composition for the WT-strategy for I_0 = 440 µE m⁻² s⁻¹ and a dilution rate D = 0.03 h⁻¹. d Optimal cellular composition under the P_X-strategy for an identical incident light intensity and dilution rate.
We note that the first-order dependence is an empirical finding that is independent of details of the model implementation.
Within our computational framework, the first-order dependence allows us to solve the model analytically. The solution has strong implications for phototrophic cultivation and the design of photobioreactors. Firstly, these results provide a stringent justification for the photonfluxostat [13] as a suitable tool for quantitative growth experiments. In particular, the average light intensity, as defined in Eq. (11), can be readily estimated experimentally using the incident and transmitted light intensities, I_0 and I(z_m), respectively, and therefore may provide direct feedback to a controller. The definition provided in Eq. (11) also provides a more accurate description than the approximations previously used in phenomenological growth models [7].
Secondly, and more importantly, the dependence on the average light intensity implies that the culture density itself provides an effective mechanism of light dilution. If photodamage is directly proportional to light, then the average rate of photodamage equals the photodamage rate at the average light intensity, with the latter being determined by the culture density. For a rapidly mixed culture, our model therefore predicts maximally efficient growth for high densities at very high light intensities. Within the model, higher light intensity will always result in denser cultures, with no obvious upper bound imposed by the model itself (a fact that is different from the analysis of Martínez et al. [31], where the maximal productivity has an upper bound independent of the light intensity). In practice, however, we expect that at high culture densities the supply of other nutrients, in particular inorganic carbon, becomes limiting, resulting in a de facto upper bound on the feasible cell density that is outside the scope of the current model.
Our model predictions can be compared to growth data reported in the literature. Results obtained from conventional cultivation typically report significantly lower cell densities compared to the values suggested here. See, for example, Straka and Rittmann [42] for typical values for Synechocystis sp. PCC 6803 cultured in conventional BG-11 medium (the comparison with our prediction is shown in Additional file 1: Figure S3). However, several recent works have shown that conventional BG-11 medium is not suitable for high-density cultivation and alternatives are required [3,29,48,52]. Previous works have shown that cultivation of Synechocystis sp. PCC 6803 is feasible at cell densities in excess of 20 gDW L⁻¹ and light intensities in excess of 1000 µE m⁻² s⁻¹ with no apparent detrimental effects due to photoinhibition [3,29]. Similar results were recently reported for other cyanobacterial strains [52]. The predictions of our model in favor of cultivation at very high light intensities in shallow, rapidly mixed cultures are also confirmed by the experiments of Qiang et al. [40] using Spirulina platensis. Therein, a linear relationship was observed between the output rate (in gDW L⁻¹ h⁻¹) and the incident light intensity, up to a photon flux of 2500 µE m⁻² s⁻¹, with areal productivities similar to the values computed here. Taken together, these results strongly support the previous arguments of Richmond [41] for cultivation at high light intensities in shallow, rapidly mixed cultures with short light-paths for maximal phototrophic productivity.
Beyond the argument for dense cultures, the model recapitulates many of the results previously obtained for the light-limited chemostat using phenomenological growth models. In particular, we recover the observed bistability for incident light intensities that give rise to photoinhibition. While we were primarily interested in the steady-state properties, bistability has complex implications for the startup and dynamics of a culture [18,30,31], for example a threshold in the (initial) population density below which the culture will wash out.
In our analysis, we were further interested in the optimal proteome allocation for phototrophic production. As a benchmark for comparison, we assume that wild-type cells adjust their proteome composition such that they achieve the maximal growth rate at the respective average light intensity (WT-strategy). As shown in Fig. 6, the WT-strategy is (evolutionarily) stable with respect to alternative proteome allocation strategies. Our results show that the composition of (hypothetical) strains optimized for maximal biomass productivity differs significantly from the composition of cells using the WT-strategy. Maximally productive strains exhibit a significantly reduced expression of protein complexes associated with light harvesting and photosynthesis (protein complex PSU). This reduction is reminiscent of antennae truncation strategies [33]: the WT-strategy maximizes growth rate at the expense of culture efficiency. If the light absorption per cell is reduced, the population density of the culture increases, and hence the productivity increases. Interestingly, the reduction in light-harvesting proteins does not result in an increase of other protein fractions, but rather in the accumulation of free metabolites (Fig. 5). This unintuitive result arises because the cellular growth rate within a chemostat is determined by the dilution rate. Hence the required (minimal) capacity for metabolic and ribosomal proteins is fixed. To fulfill the density constraints imposed within the model, the cell therefore accumulates free metabolites.
We further observed that, despite the significant differences in cellular composition, the quantitative differences in productivity between cells optimized for biomass productivity and cells using the WT-strategy are rather small. This difference, however, may be strain-dependent, and coarse-grained growth models parameterized for other strains or data sets might exhibit larger differences. The small (and possibly strain-dependent) difference in the overall productivity might also explain the mixed success reported for antenna truncation [28]. As shown in Fig. 7, a simple reduction in the effective cross section per PSU does not result in an enhanced productivity.
Notably, the maximal culture productivity is typically attained at dilution rates well below the maximal growth rate of cells. This finding has implications for the current quest to identify the fastest growing cyanobacterium [47,53], with growth rates typically measured in optically thin cultures under optimal conditions. While the growth rate remains an important parameter, our results show that culture productivity is determined by a combination of factors, including maximal culture density, dilution rate, and incident light intensity. We envision that the computational framework presented here may be further developed into an automated "design-build-test-learn" pipeline for microbial design strategies that allows the expected culture productivity of strains to be extrapolated from a defined set of screening experiments.
Conclusion
The results obtained from our computational model have strong implications for phototrophic cultivation and the design of photobioreactors. The (experimentally well supported) fact that the rate of photodamage is directly dependent on light intensity implies that phototrophic growth in the light-limited chemostat can be efficiently described using the concept of an average light intensity. Furthermore, the first-order dependency of photodamage implies that the culture density itself provides sufficient light dilution, given that the cells are rapidly mixed and other nutrients are available in non-limiting concentrations. We have previously shown that suitable cultivation setups are indeed possible by combining short light-paths (up to 1 cm) with high light intensities (> 1000 µE m⁻² s⁻¹), turbulent mixing and a sufficient supply of inorganic nutrients [3,29]. As already emphasized by Richmond [41], such results rekindle the hope that growing algae and cyanobacteria at ultra-high densities may boost the economic viability of phototrophic cultivation. The early experimental results of Qiang et al. [40] are recovered here and put in the context of a thorough computational framework that builds upon recent insights into the cellular economy of phototrophic growth [54].
The computational framework presented here provides a further step towards guiding phototrophic cultivation and the development of phototrophic cell factories. While our study was limited to steady-state conditions, further work may assess the dynamics of cellular proteome allocation and the resulting population dynamics. Furthermore, in future work, the model may be extended to incorporate further molecular details, in particular with respect to the photosynthetic light reactions, cycling of inorganic carbon, photorespiration, storage metabolism, oxygen accumulation, as well as the potential effects of product toxicity and inhibition. We conjecture that our approach will prove useful in understanding the limitations of phototrophic culture productivity and will allow us to further optimize culture conditions and cellular composition.
A model of phototrophic growth
We use the previously described model of Faizi et al. [15] with minor modifications. The model is implemented as a system of ordinary differential equations (ODEs); all parameters are summarized in Additional file 1: Table S1. The model describes phototrophic growth of a cyanobacterial cell and consists of 7 coarse-grained protein complexes that catalyze cellular reactions, as well as 5 intracellular metabolites. Cellular processes require a cellular energy unit e that combines ATP and NADPH and is produced by the photosynthetic light reactions.
The metabolic reactions and the stoichiometry of the translation reaction are summarized in Table 2. All metabolic reactions v_met are assumed to follow irreversible Michaelis-Menten kinetics,

v_met = k_cat · [E] · [m] / (K_m + [m]) · [e] / (K_e + [e]),

where [E] denotes the concentration of the respective catalyzing protein complex, [m] the concentration of the substrate, and K_m and k_cat the respective kinetic constants. The Michaelis-Menten constant K_e with respect to the energy unit e is assumed to be equal for all reactions.
For each protein complex P_j, the translation rate γ_j is proportional to the maximal catalytic rate of the ribosome γ_max and the fraction β_j · [R] of ribosomes that translate protein P_j (with β_j ≤ 1), and inversely proportional to the length n_j of the protein (in units of amino acids, aa). The parameters β_j determine cellular proteome allocation. The rates of the photosynthesis reactions are parameterized by τ, the catalytic turnover of the PSU, σ, the effective cross section for light absorption, and k_d, a rate constant for photodamage; the activation of resting photosynthetic units (v_1) and photodamage (v_i) are first-order in the light intensity. These equations correspond to a three-state model of photosynthesis [14,50] where degradation and recovery of PSU involve the constituent amino acids. The first-order light dependency of the reactions, in particular for photoinhibition, is strongly supported by data [6,44]. We emphasize that this fact does not imply that the growth curve or the oxygen evolution rate exhibit first-order dependencies; both are outcomes of a trade-off between different constraints and objectives (see Fig. 1c). The overall equation for photosynthesis converts m_hv = 8 photons into 8 energy units e (representing a total of 5 ATP and 2 NADPH, the latter weighted as 1.5 e each) and one O_2. The model consists of 13 ODEs, an objective function (see below), and additional constraints to ensure the synthesis of quota compounds (a quota protein P_Q and the metabolic compound c_q). External parameters are the concentration of extracellular inorganic carbon and the light intensity I_0. The former is assumed to be constant and saturating with respect to the K_t of the transporter reaction.
Solving the light gradient
To obtain an expression for the effective growth rate μ in the light-limited chemostat, we require a solution of Eq. (10) and integrate the rate equations over the mixing depth of the photobioreactor. Following [23], we use the substitution of variables dz = −dI / ((α · ̺ + K_bg) · I), which follows from Eq. (3), to replace the integral over the mixing depth with an integral over the light intensities, and note that ln(I_0) − ln(I(z_m)) = z_m · (α · ̺ + K_bg). We distinguish between light-independent reactions and reactions affected by light. The former remain unchanged, whereas the latter only consist of reactions that exhibit a first-order dependency on the light intensity (v_1 and v_i). Consequently, the solution for each reaction rate replaces the depth-dependent light intensity with the average light intensity Î, defined in Eq. (11), in the chemostat. The derivation is based on the assumption that no further variables depend on the (momentary) position in the chemostat, i.e., the culture is rapidly mixed.
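As a numerical illustration (not part of the original derivation), the following sketch checks that the depth average of a reaction rate that is first-order in light, v(I) = k · I, equals the same rate evaluated at the average light intensity Î of Eq. (11); all parameter values are placeholders.

```python
import numpy as np
from scipy.integrate import quad

# Placeholder parameters
I0, alpha_rho, k_bg, z_m, k = 440.0, 0.8, 0.06, 2.4, 1e-3

def I(z):
    return I0 * np.exp(-(alpha_rho + k_bg) * z)          # Eq. (3)

# Depth average of a first-order light-dependent rate v(I) = k * I
depth_avg, _ = quad(lambda z: k * I(z), 0.0, z_m)
depth_avg /= z_m

# Same rate evaluated at the average light intensity (Eq. 11)
I_hat = (I0 - I(z_m)) / (np.log(I0) - np.log(I(z_m)))
print(f"depth-averaged rate: {depth_avg:.6f}")
print(f"rate at I_hat:       {k * I_hat:.6f}")           # identical up to integration error
```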
Sensitivity analysis
The sensitivity analysis is performed to quantify the influence of model parameters on culture productivity. The relative sensitivity ǫ_i of the culture productivity P_E with respect to a given model parameter p_i is defined as the relative (logarithmic) change of P_E in response to a relative change of p_i,

ǫ_i = (p_i / P_E) · (∂P_E / ∂p_i),

and is approximated by a small variation (±0.1%) of each parameter p_i. A relative sensitivity of ǫ_i = 1 indicates a linear dependency of the culture productivity on the respective parameter p_i.
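A minimal sketch of this finite-difference approximation, using a generic placeholder function in place of the maximal productivity; in the study itself, each evaluation of P_E corresponds to re-solving the full proteome allocation model for the perturbed parameter set.

```python
def relative_sensitivity(P_E, params, name, rel_step=1e-3):
    """Finite-difference approximation of eps_i = (p_i / P_E) * dP_E/dp_i."""
    base = P_E(params)
    up = dict(params, **{name: params[name] * (1.0 + rel_step)})
    down = dict(params, **{name: params[name] * (1.0 - rel_step)})
    dP = (P_E(up) - P_E(down)) / (2.0 * rel_step * params[name])
    return params[name] / base * dP

# Placeholder productivity function and parameters (for illustration only)
def toy_productivity(p):
    return p["tau"] * p["sigma"] / (1.0 + p["kd"])

params = {"tau": 500.0, "sigma": 15.0, "kd": 0.3}
for name in params:
    print(name, round(relative_sensitivity(toy_productivity, params, name), 3))
```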
Table 2 Summary of metabolic and translation reactions
The protein complex E T imports extracellular inorganic carbon and represents import and carbon concentrating mechanisms. The protein complex E C catalyzes assimilation of inorganic carbon into the carbon precursor c 3 (Calvin-Benson cycle). The protein complex E M catalyzes the synthesis of amino acids (central metabolism), whereas the protein complex E Q catalyzes the synthesis of other metabolic compounds c q . The remaining two protein complexes are the photosynthetic unit (PSU) and a (non-functional) quota protein compound P Q
Protein | Reaction | Stoichiometry | Description
E_C | v_c | – | Carbon assimilation
E_Q | – | – | Synthesis of quota compounds
R | γ_j | n_j · aa + 3 · n_j · e → P_j | Translation by ribosomes
Heterologous expression
The coarse-grained model is modular and allows for the addition of further enzymes of interest. We extend the model to include the synthesis and export of a desired product m_x. For each enzyme complex E_X the following parameters have to be defined: the enzyme length n_x, its turnover rate k_cat^x and other kinetic parameters (here the half-saturation constant K_x with respect to its substrate). In this case, we assume irreversible Michaelis-Menten kinetics, and the new reaction has to be added to the ODEs for the respective substrates (here c_3 and e) and product (here m_x). It is straightforward to also include, for example, a term for product inhibition in the equation, provided the respective parameters are known. The model is augmented by two additional ODEs and a translation rate γ_E_X for the heterologous protein. The protein complex E_X competes with other proteins for ribosomal capacity and is included in the definition of the cell density.
Model parametrization
The model parameters are taken from the previously published models [15,54]; only the turnover rate of the photosynthetic unit τ, its effective absorption cross section σ and the photodamage constant k_d were refitted in this study. We first performed a rough estimate of the photosynthetic turnover rate, τ = 500 s⁻¹. Additional file 1: Figure S1 shows how different turnover rates affect the growth rate for an arbitrarily chosen photodamage constant k_d and absorption cross section σ. Fitting of the remaining parameters k_d and σ was performed as described in Faizi et al. [15] for a predefined set of values k_d = {1.0 · 10⁻⁷, 1.1 · 10⁻⁷, ..., 4.9 · 10⁻⁷, 5.0 · 10⁻⁷} and σ = {5, 10, ..., 25, 30}. The best fit was obtained with k_d = 2.7 · 10⁻⁷ and σ = 15 nm² PSU⁻¹.
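A minimal sketch of such a grid search over k_d and σ, minimizing the squared deviation between modelled and 'measured' growth rates; the growth-rate function and the synthetic data below are placeholders, since the actual fit requires solving the full proteome allocation model for each parameter pair.

```python
import itertools
import numpy as np

def mu_model(I, k_d, sigma):
    """Placeholder growth-rate curve; the actual fit solves the full allocation model."""
    x = sigma * I / 3000.0                                  # light harvesting term
    return 0.1 * x / (1.0 + x + (I * k_d / 1.5e-4) ** 2)    # with photoinhibition

# Synthetic 'measured' growth rates, generated from the placeholder curve plus noise
light = np.array([25.0, 55.0, 110.0, 220.0, 440.0, 880.0, 1100.0])
rng = np.random.default_rng(1)
mu_measured = mu_model(light, 2.7e-7, 15.0) * (1.0 + 0.03 * rng.standard_normal(light.size))

# Grid search over the predefined parameter sets
k_d_grid = np.arange(1.0e-7, 5.05e-7, 0.1e-7)
sigma_grid = np.arange(5.0, 31.0, 5.0)
best = min(itertools.product(k_d_grid, sigma_grid),
           key=lambda p: np.sum((mu_model(light, *p) - mu_measured) ** 2))
print("best fit: k_d = %.2e, sigma = %.0f nm^2 per PSU" % best)
```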
All parameters are listed in Additional file 1: Table S1. Unless otherwise noted, the average length of a protein is 300 amino acids, the average turnover rate is k_cat = 20 s⁻¹ and the average half-saturation constant is set to 10⁴ molecules per cell. In addition, we assume that the number of photons required to activate one PSU is m_hv = 8. With respect to the measurements of Zavřel et al. [54], we set the concentration of quota compounds to 10¹¹ molecules of carbon per cell. The concentration of quota compounds represents the amount of carbon contained in the dry weight per cell excluding proteins.
To parameterize the chemostat model, we determined the light attenuation through the culture vessel filled only with medium. For this purpose, we fitted Eq. (3) to the light profile data in Additional file 1: Figure S2 for a vessel depth of z_m = 2.4 cm without microorganisms (α · ̺ = 0) and obtained a background turbidity of the culture medium of K_bg = 0.06 cm⁻¹. The species-specific basal light attenuation coefficient is set to α_0 = 0.01 µm² per cell, which is approximately one order of magnitude smaller than the variable light attenuation contribution determined by the total amount of photosynthetic units and the absorption cross section under high light conditions.
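A minimal sketch of this background-turbidity fit: with α · ̺ = 0, Eq. (3) reduces to I(z) = I_0 · exp(−K_bg · z), so K_bg follows from a log-linear regression of the measured profile. The depth and intensity values below are placeholders, not the data of Additional file 1: Figure S2.

```python
import numpy as np

# Placeholder light-profile measurements through medium only (depth in cm, intensity in uE m-2 s-1)
z = np.array([0.0, 0.6, 1.2, 1.8, 2.4])
I = np.array([440.0, 424.0, 409.0, 395.0, 381.0])

# With alpha * rho = 0, Eq. (3) gives ln I(z) = ln I_0 - K_bg * z
slope, intercept = np.polyfit(z, np.log(I), 1)
print(f"K_bg ~ {-slope:.3f} 1/cm, I_0 ~ {np.exp(intercept):.1f}")
```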
The units for culture density are gram dry weight per liter (gDW/L). We note that the original measurements and parametrization of Zavřel et al. [54] were in cells per liter, and the conversion into gDW is subject to considerable variance, owing to the experimental difficulties in the accurate estimation of dry weight. Replicate measurements reported in Zavřel et al. [54] vary from 0.53 · 10⁻¹¹ gDW/cell to 1.13 · 10⁻¹¹ gDW/cell. In this work, we assume a conversion factor of 10⁻¹¹ gDW/cell; for visual clarity, error bars are omitted in all plots (but see Additional file 1: Figure S3 for an example of error ranges). We emphasize that our aim is not a precise prediction of (the numerical value of) a specific productivity, but rather to investigate the dependence of the (maximal) productivity on culture parameters; these results are independent of the conversion factor.
Model implementation
The model is implemented as an optimization problem to obtain the optimal proteome allocation for a specific environmental condition, characterized by the incident light intensity I_0, the external inorganic carbon concentration c_i^x, the mixing depth z_m and the dilution rate D. The variable parameters in our optimization problem are the population density ̺ and the ribosomal fractions β_j. For the objective function of our optimization problem, we first assume that the cell optimizes its internal composition such that the growth rate is maximal for the specific external condition (WT-strategy). In addition, we define two further objective functions. The second objective function maximizes the product of the dilution rate and population density (P_E = D · ̺) to determine the optimal proteome allocation that maximizes the volumetric biomass productivity of the culture. The third objective function maximizes the productivity of a desired product m_x (P_X = v_x · ̺).
The optimization problem is implemented with the APMonitor Optimization Suite [21] and solved using the IPOPT (Interior Point Optimizer) method. The model is written in the APMonitor modeling language and provided at https://github.com/marjanfaizi/photoautotrophic-growth (in the folder 'Faizi2019') together with a Python script (entitled optimization.py) to run the simulations.
"Environmental Science",
"Biology",
"Engineering"
] |
Hierarchical Vehicle Scheduling Research on Tide Bicycle-Sharing Traffic of Autonomous Transportation Systems
Introduction
Nowadays, as modern transportation systems (TSs) develop, problems among mobility services (MSs), e.g., congestion, route adjustment, user dispersion, and peak-time conflicts, have become commonplace [1][2][3] in terms of the adjustment of urban planning and the year-on-year increase in car ownership. Since MSs are fundamental in propelling the current intelligent TS (ITS) towards an autonomous TS (ATS), they are also being renovated to assist the public on a daily basis and to explore the advancement of ATS. Some theoretical development has revealed that the creation of shared transport, called bicycle-sharing, has effectively alleviated the "last mile" service (LMS) problem, the most prominent pain point in the current MS [4] in terms of the most immediate interaction with users, and some emerging companies, e.g., Mobike, have also taken this trend to a new level. However, bikes must be parked in GPS-identified areas to address problems such as illegal parking, vandalism, and theft. To avoid continued billing, users rarely consider the capacity of a parking area, resulting in an uneven spatial distribution of bikes: some areas suffer a severe accumulation of bikes, while in others "one bike is hard to find" [5]. Therefore, scientific and reasonable scheduling strategies are required to overcome the imbalance between the supply and demand of bikes and improve resource utilization.
Rebalancing and optimizing the bicycle-sharing distribution constitutes a vehicle routing problem (VRP), and most current research is based on this theory. For instance, Caggiani et al. [6] proposed a decision support system for the reallocation problem by forecasting the spatio-temporal demand for bikes. Similar research can, accordingly, be divided into static and dynamic scheduling to optimize VRP models with different objectives. Specifically, in static scheduling, Kadri et al. [7] and Dell'Amico et al. [8] developed optimization models for user satisfaction and operating cost, respectively. Yan et al. [9] investigated the deterministic and stochastic demand for bicycle-sharing in dynamic scheduling. In general, these methods assume that the overall supply and demand within a scheduling station are in equilibrium, without supporting the mobility of bikes between zones. Moreover, limited open literature has reported that current research focuses too much on mathematical modeling, neglecting the analysis of actual demands. Therefore, these methods encounter three challenges in practice, namely, slow convergence, high time complexity, and problematic application, from the perspectives of fulfilling actual demands in real time and adjusting scheduling as needed.
As current diversified mobility demands tend to be managed and fulfilled by more intelligent and automated systems with less human intervention [10], there is an urgent need to collaborate the corresponding functions to renovate the conventional MS of ITS in the context of ATS [11], i.e., to update the ability to sense user demands and rearrange system supplies [12,13]. Hence, to promote the MS provided by ATS, this paper proposes a hierarchical autonomous vehicle scheduling model based on tide bicycle-sharing traffic, namely, HATB. This model uses GeoHash coding to divide the scheduling into three layers, i.e., top, middle, and bottom, corresponding to the scheduling terminus, area, and point. Based on the genetic algorithm (GA), the model can achieve hierarchical and dynamic scheduling of vehicles and routes to maximize user satisfaction while minimizing operating costs.
Furthermore, in contrast to current studies on scheduling bikes in ITS, the HATB makes three main contributions by improving the convergence speed, time complexity, and application difficulties of actual scheduling. The scheduling results based on actual orders ultimately demonstrate that HATB can provide a rational reference for the LMS in ATS and guide the development of bicycle-sharing regulation and operation.
The overall structure of this paper is divided into five sections. Section 2 introduces related solutions and emerging challenges. The methodology relevant to HATB is described in Section 3. Section 4 elaborates on the scheduling results and the superiority of the model. Finally, Section 5 summarizes this study and sketches future research directions.
1.1. Related Works. In the transport domain, LMS refers to the direct interaction between the end of public transport and users, which often suffers from scattered users, peak-time conflicts, and uneven distribution. As an effective way to cope with LMS problems, bicycle-sharing has become a non-negligible component of urban transport. For example, Cheng et al. [14] have demonstrated that bicycle-sharing increases the proportion of green transport in cities and addresses the low efficiency at the end of the travel chain.
In general, current research on bike-sharing mainly focuses on its development status and travel characteristics, but rarely on its scheduling. Researchers such as Soriguera and Jiménez-Meroño [15], Gimon [16], and Lu et al. [17] concur that even though bicycle-sharing is deployed in considerable quantities, the spatio-temporal differences in user demands, the lack of fixed parking areas, and fewer available bikes may lead to a more significant overall imbalance. Therefore, it is vital to take effective scheduling strategies to rebalance and optimize the distribution of bikes, thereby addressing difficulties in management and operation. In this context, scheduling bicycle-sharing can be regarded as a heuristic-algorithm-based vehicle routing problem (VRP), solved with, e.g., the GA or the ant colony algorithm (ACO) [18][19][20], and can be classified as static or dynamic scheduling according to different strategies and objectives.
Dynamic scheduling mainly focuses on peak time and relies on user demands. For instance, a mathematical model for dynamic scheduling was created by Zhang et al. [21] based on the parking area's actual capacity and users' predicted arrival times. Shui and Szeto [22] partition the peak time to optimize scheduling routes by regarding scheduling in each time interval as static scheduling. Chiariotti et al. [23] propose that bike scheduling can dynamically determine the scheduling time from historical orders. In general, dynamic scheduling lessens the impact of operating costs on operators by pre-scheduling bikes to avoid shortages. However, given the uncertain use of bikes, frequent scheduling with complex constraints is necessary, which may lead to higher operating costs and slower convergence, thus making it challenging to fulfill user demands in real time.
Another, more common scheduling strategy is static scheduling during off-peak time. For example, Lang [24] provides a multiwarehouse model based on the Tabu search algorithm to minimize scheduling distance and improve scheduling efficiency and robustness. Bae and Moon [25] use a dual time window with customer service levels to reduce total transport and labor costs. Since static scheduling only considers the predicted demands of stations, allocating more bikes to stations to guarantee user demands means that the time complexity of the heuristic algorithm grows exponentially. Moreover, to fulfill the actual demands, the bikes allocated by these studies may exceed the station's capacity.
Besides, the above-given studies are mainly applied to typical scenarios, as presented in Figure 1, where a single scheduling station serves one zone and only the routes within the scheduling zone are considered. In actual scheduling, this is often limited by the service range of the station, which requires frequently adjusting the boundaries of the scheduling zone, and has thus led to research on hierarchical scheduling strategies. By defining scheduling priorities based on demand intensity, Sakakibara et al. [26] and Ni et al. [27] highlight the feasibility and reliability of hierarchical scheduling. To illustrate its flexibility, Zhang [28] and Ma et al. [29] set stations with similar demands in the same layer in accordance with the spatio-temporal characteristics of bikes. However, the definition of hierarchies in these methods is too subjective and not clear, making them difficult to implement in practice.
In summary, whilst a considerable body of research has been carried out on the VRP, much less fits the spatio-temporal and cross-regional mobility characteristics of bicycle-sharing. In addition, it seems to be a common problem that existing studies focus more on mathematical modeling but neglect the analysis of actual demands. When scheduling according to the above-given methods, three challenges, namely, problematic application in practice, high time complexity of the models, and slow convergence of the algorithms, are increasingly apparent.
Particularly driven by diverse and emerging technologies and demands, ITS is evolving into ATS, which implies that the MS should be autonomously fulfilled and managed by more intelligent systems with less human intervention [30]. Therefore, there is an urgent need to study and improve monolithic strategies and rationalize the actual demands to achieve hierarchical and autonomous scheduling of bicycle-sharing, thus rationally guiding the provision of the LMS by ATS.
The HATB Methodology
This section uses three subsections to present the framework, hypotheses, and construction of the proposed HATB.
Model Framework.
As existing scheduling methods, in general, require frequently adjusting boundaries and increasing vehicles to alleviate the difference between bikes' supply and demand, which may inevitably increase transport time and costs, the scheduling framework is set up as a hierarchical scheduling structure, i.e., a top-middle-bottom hierarchy of scheduling terminus-area-point.
However, since current hierarchical methods define their hierarchies subjectively, geocoding can ensure objectivity; e.g., what3words [31] uses fixed 3 m × 3 m squares to divide the earth, and pluscode [32] represents each latitude and longitude level by a 2-bit code, whose cell size in levels 1 to 3 jumps from 110 km and 5.5 km to 275 m.
These geocodes, accordingly, have good accuracy but lack flexibility; they may not meet actual scheduling requirements.
Therefore, GeoHash encoding, proposed by Morton [33], can be used to better support effective and efficient scheduling. Its maximum length of 12 characters can represent a geographic location with very high precision. For example, the GeoHash strings WX4ER and WX4G2 represent two regions of Beijing (China), where each string corresponds to a certain rectangular area. Moreover, the order information (Data Sources: https://biendata.xyz/competition/mobike/) on bicycle-sharing, as extracted in Table 1, also indicates the feasibility of dividing the scheduling layers via GeoHash.
The coding definition, as described in Table 1, illustrates that the 7-character string matches the characteristics of actual bike stops, namely, the area size, and the 5-character string suits vehicles dispatching bikes in light of their loading capacity, i.e., 400 bikes. Therefore, the overall framework of the proposed hierarchical scheduling model, called HATB, can be obtained as presented in Figure 2. In general, this framework is characterized by a number of scheduling areas in each of the three layers: bike stops constitute the bottom layer of scheduling, while the top and middle layers likewise have demand and capacity restrictions for bikes, and hence the scheduling within the same layer is regional scheduling aimed at optimization.
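As a concrete illustration of how the layers can be derived from coordinates, the following self-contained sketch implements the standard GeoHash encoding; the sample coordinates are arbitrary, and the 5- and 7-character precisions mirror the middle and bottom layers described above.

```python
# Plain-Python sketch of standard GeoHash encoding (base-32 alphabet, bits
# alternating longitude/latitude). Coordinates below are illustrative.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=7):
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    code, even = [], True        # even bit -> longitude, odd bit -> latitude
    while len(code) < precision:
        idx = 0
        for _ in range(5):       # 5 bits per base-32 character
            rng, val = (lon_rng, lon) if even else (lat_rng, lat)
            mid = (rng[0] + rng[1]) / 2.0
            if val >= mid:
                idx = (idx << 1) | 1
                rng[0] = mid
            else:
                idx <<= 1
                rng[1] = mid
            even = not even
        code.append(BASE32[idx])
    return "".join(code)

# A point in Beijing: a 5-character code gives a vehicle-level area, while
# 7 characters give a finer cell nested inside it (bike-stop level).
print(geohash_encode(39.92, 116.46, precision=5))   # -> 'wx4g4'
print(geohash_encode(39.92, 116.46, precision=7))
```

At precision 5 a GeoHash cell is roughly a few kilometres across, matching a vehicle's service area, while precision 7 narrows it to roughly 150 m, matching a bike stop.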
Model Hypotheses.
Considering the complexity of actual scheduling, the proposed HATB defines the following hypotheses and sets the frequency of scheduling as once in the morning peak and once in the evening peak, respectively.
(1) All scheduling vehicles have the same attributes
(2) In each scheduling route, the vehicle departs from one scheduling terminus (area) and returns to this place after deploying bikes to the corresponding areas (points) it contains
(3) Fuel consumption and vehicle loss should be considered
(4) Each scheduling area can only be served once
(5) The actual orders determine the scheduling demand
(6) All scheduling tasks are required to be completed within the specified scheduling cycle
(7) The scheduling areas and points have sufficient space to accommodate the bikes deployed in or out during a scheduling cycle
2.3. Model Construction. Based on the above-given hypotheses and the actual operations of bike-sharing, considering only the operating costs will gradually lose customers, whereas weighing only user satisfaction runs counter to the essence of business profitability. Hence, combining these two factors, this paper constructs a regional scheduling model for bikes to minimize operating costs (F_1) and maximize user satisfaction (F_2). The parameters z and μ indicate that bike-sharing operators need to adjust the weight coefficients of the operating and penalty costs according to their own emphasis.
F_1: The Objective of Minimizing Operating Costs.
The actual operating costs need to consider both fixed and flexible costs, as summarized in equation (2), which is determined jointly by the value of the scheduling vehicles [34], the unit transport cost (i.e., vehicle loss: 1 CNY/km, fuel consumption: 1 CNY/km, and labor cost: 100 CNY/person), and the scheduling distance. Equation (4) suggests that each area can only be served once; equation (5) shows that the transport distance must not exceed the maximum scheduling distance; equation (6) indicates that the number of bikes loaded by the vehicle must not exceed its maximum capacity, namely, 400; equation (7) defines x_ij^t as a 0-1 variable; and equation (8) states that the number of bikes deployed by the vehicle is a non-negative integer.
F_2: The Objective of Maximizing User Satisfaction.
User satisfaction can be improved by adding time window constraints, as described in equation (9), which means that maximizing user satisfaction can be equivalently transformed into minimizing the penalty cost of scheduling timeouts.
Equation (10) indicates that the vehicle departs from the terminus at time zero; equation (11) gives the calculation of the time at which a vehicle arrives in an area; equation (12) constrains the earliest time at which the vehicle can arrive in area j from area i; and equation (13) represents that the vehicle needs to arrive in an area within the time window.
The meanings of the parameters in the above-given equations are shown in Table 2.
Case Study
The highlights of solving HATB are illustrated in terms of algorithm settings, scheduling results, and model evaluation in this section. The user travel characteristics presented in Table 3 reflect that the data are consistent with the "last mile" definition [16] and hence show reasonable usability given the prominent tidal characteristics. This paper proposes a GA with natural number encoding (NGA), as defined in Algorithm 1, to optimize the scheduling. In general, the first and the last zeros represent the scheduling terminus or area, [1, N] represents the zones that need to be scheduled, and the other zeros separate the routes of different vehicles; e.g., a chromosome might be 0-3-0-1-2-5-7-0-4-8-6-0, namely, three vehicles serving eight zones.
Since crossover and mutated sub-chromosomes may lead to transport overload and overtime, and the time window is more likely to be violated, the penalty factors for the two constraints are set to 10 and 500, respectively.
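The following hypothetical sketch illustrates how such a chromosome can be decoded into vehicle routes and scored with a penalized cost of the kind described above; the demand, travel-time, and deadline values are invented, and only the vehicle capacity (400 bikes) and the penalty factors 10 and 500 are taken from the text.

```python
# Hypothetical NGA decoding and penalized fitness (illustrative values only)
VEHICLE_CAPACITY = 400      # max bikes per scheduling vehicle
OVERLOAD_PENALTY = 10       # penalty factor for exceeding capacity
TIMEOUT_PENALTY = 500       # penalty factor for violating the time window

def decode(chromosome):
    """Split a 0-delimited chromosome into per-vehicle routes of zone ids."""
    routes, current = [], []
    for gene in chromosome[1:-1]:          # drop leading/trailing terminus 0s
        if gene == 0:
            routes.append(current)
            current = []
        else:
            current.append(gene)
    routes.append(current)
    return routes

def penalized_cost(chromosome, demand, travel_time, deadline):
    cost = 0.0
    for route in decode(chromosome):
        load = sum(abs(demand[z]) for z in route)
        elapsed = sum(travel_time[z] for z in route)
        cost += elapsed                                      # base transport cost
        cost += OVERLOAD_PENALTY * max(0, load - VEHICLE_CAPACITY)
        cost += TIMEOUT_PENALTY * max(0, elapsed - deadline)
    return cost

chrom = [0, 3, 0, 1, 2, 5, 7, 0, 4, 8, 6, 0]       # three vehicles, eight zones
demand = {z: 50 * z for z in range(1, 9)}          # invented bike demands
travel = {z: 10 + z for z in range(1, 9)}          # invented travel times (min)
print(decode(chrom))                               # [[3], [1, 2, 5, 7], [4, 8, 6]]
print(penalized_cost(chrom, demand, travel, deadline=60))
```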
Scheduling Results
A total of 220 scheduling areas in the morning peak are used as HATB test cases to obtain the optimal hierarchical scheduling routes, as shown in Figure 3 and Tables 4 and 5.
As described in the previous section, regional scheduling is applied for each layer. Note that each scheduling area has a positive or negative raw demand that reflects the redundancy or scarcity of bikes. Furthermore, scheduling prioritizes self-satisfaction within the route, namely, redundancy supports scarcity, and hence the route demand indicates the self-satisfaction gap for the corresponding regional scheduling.
The experimental results can therefore be summarized as follows: the total scheduling distance of the optimal routes across Beijing is around 719.5 km, and 17 vehicles are required to participate in the scheduling to deploy bikes and satisfy the demands. Moreover, using route one in Table 4 as an example, the scheduling can be summarized as a vehicle departing from the scheduling terminus and returning to it after completing regional scheduling sequentially and autonomously in accordance with the area-point (middle-bottom) hierarchy. In addition, the real-world route information for Table 5 is mapped to Table 6 via GeoHash.
Model Evaluation.
To further verify the reliability of the model, Figure 4 compares the iteration behaviour of HATB with other models proposed in the literature with similar objectives. Specifically, based on the GA, Gao et al. [35] provided a promising perspective on improving operational efficiency by reducing operating costs and service quality during peak times to minimize the total operating cost. The proposed HATB converged at the 64th generation, and the total time cost is 148.9 s, with an average running time of 15 s per generation. Due to the different objectives, only the convergence speed of the above models is compared. It can be seen from Figure 4 that the HATB significantly outperforms the models proposed in the existing literature. Such a result indicates that this model is better suited for practical scheduling applications since it optimizes with higher convergence speed and lower time complexity.
Conclusion and Future Works
Even though emerging and diversified technologies and demands are driving TS to renovate the conventional MS to be self-actuating, current bicycle-sharing scheduling and maintenance rely on manual experience, which lacks scientific guidance and efficiency. Therefore, to achieve sustainable development, operators urgently need to develop a rational scheduling strategy to balance distribution conflicts and meet user demands in time.
Therefore, this paper proposes a hierarchical scheduling model, called HATB, to address the issues left unsolved by current studies in terms of slow convergence, high time complexity, and problematic application, and hence to support the rational and autonomous provision of the LMS. In summary, according to the properties of bicycle-sharing, namely, spatio-temporal characteristics, cross-regional mobility, and actual demands, HATB takes 220 morning-peak areas as test cases to validate its improved validity, feasibility, and efficiency for practical application.
Compared to similar methods, HATB accordingly obtains the following improvements. A hierarchical framework is first designed through GeoHash encoding to handle the cross-regional mobility of bikes and reduce the time complexity of global optimization. Next, a GA for regional scheduling is built by combining the tidal characteristics of bicycle-sharing to minimize operating costs and maximize user satisfaction, which significantly accelerates the algorithm's convergence. Last, the use of actual orders considerably enhances the ability to respond instantly to any regional scheduling demand in practical applications.
This work was carried out as a preliminary step to obtain the present results. However, there are still problems, such as the inability to adapt scheduling throughout 24 hours and the lack of comprehensive constraints. To close this study, one recommendation for further research is to use a form of "GA + Tabu" algorithm to exploit its global search capability and thus improve the big-data processing capability. Another research direction is adding weather and road characteristics to further improve the model's reliability.
Table 1: The order information of the corresponding GeoHash string lengths.
Table 3: Examples of user travel characteristics. The experimental data come from the 2017 Mobike Cup algorithm challenge, which involves 3,214,096 orders and 485,465 bikes (10 May 2017 to 23 May 2017).
Table 5: Examples of hierarchical autonomous scheduling results for route one.
Table 6: The mapped scheduling results, with examples for route one (Table 5).
Angeloudis et al. [36] achieved an increase in user appeal by offering a new method of planning bike routes and distributions. Moreover, Zhao et al. [37] optimized the total scheduling distance to accommodate large-scale scheduling via an ACO. | 4,243.4 | 2023-02-18T00:00:00.000 | [
"Engineering",
"Computer Science",
"Environmental Science"
] |
Machine Learning Techniques Applied to the Study of Drug Transporters
With the advancement of computer technology, machine learning-based artificial intelligence technology has been increasingly integrated and applied in the fields of medicine, biology, and pharmacy, thereby facilitating their development. Transporters have important roles in influencing drug resistance, drug–drug interactions, and tissue-specific drug targeting. The investigation of drug transporter substrates and inhibitors is a crucial aspect of pharmaceutical development. However, long duration and high expenses pose significant challenges in the investigation of drug transporters. In this review, we discuss the present situation and challenges encountered in applying machine learning techniques to investigate drug transporters. The transporters involved include ABC transporters (P-gp, BCRP, MRPs, and BSEP) and SLC transporters (OAT, OATP, OCT, MATE1,2-K, and NET). The aim is to offer a point of reference for and assistance with the progression of drug transporter research, as well as the advancement of more efficient computer technology. Machine learning methods are valuable and attractive for helping with the study of drug transporter substrates and inhibitors, but continuous efforts are still needed to develop more accurate and reliable predictive models and to apply them in the screening process of drug development to improve efficiency and success rates.
Introduction
Drug transporters are a group of transmembrane proteins that are widely distributed throughout the human body. They facilitate the movement of endogenous and exogenous substances into and out of biological membranes, thereby influencing drug absorption, distribution, metabolism, excretion, and other pharmacokinetic processes. The investigation of transporters holds great importance in relation to pharmacokinetics, pharmacodynamics, drug-drug interactions (DDIs), and drug toxicity. Over 400 transporters have been identified in the human genome [1], primarily belonging to two superfamilies: the ATP-binding cassette (ABC) and the solute carrier (SLC) transporters. Over nearly two decades, various in vitro, in situ/ex vivo, and in vivo methods have been developed to study transporter function and drug-transporter interactions for the identification of their substrates or inhibitors. In vitro models comprise membrane-based and cell-based assays, whereas in vivo models encompass transporter gene knockout, natural mutant animal models, and anthropogenic animal models. In situ/ex vivo models pertain to isolated and perfused organs or tissues, such as the liver, kidney, intestine, lung, and brain [2]. Although traditional research methods are constantly updated and improved, their experimental costs and time consumption remain significant obstacles in the research process, which is also a common challenge encountered during drug development. In addition, computational methods such as virtual screening (VS) and molecular docking are also employed in the study of drug transporters.
Drug Transporters and Important Implications
Transporters are a ubiquitous class of proteins that are located on the cell membrane and facilitate transport functions throughout the human body (Figure 1). Until now, it has been generally assumed that multispecific drug transporters are derived from two transporter superfamilies: the ABC superfamily and the SLC superfamily. ABC transporters are a family of efflux transporters that transport drugs and endogenous substances against their concentration gradients using the energy of ATP hydrolysis and are related to drug bioavailability, tumor multidrug resistance, and disease. Among the ABC family members, P-glycoprotein (P-gp), multidrug resistance-associated protein (MRP), and breast cancer resistance protein (BCRP) are considered to be important causes of multidrug resistance (MDR) of tumor cells [8] and therefore are the most studied subtypes related to drug transport. In addition, there is the bile salt export pump (BSEP). Bile salt transporters that do not function properly or are expressed abnormally have been identified as significant factors contributing to various liver diseases, particularly those causing cholestasis [9]. Most SLC transporters are located on the cell membrane and rely on electrochemical and ion concentration gradients to transport substrates, regulate the exchange of soluble molecular substrates between the two sides of the lipid membrane, and maintain the stability of the intracellular environment. Over 400 transporters have been identified to date, displaying a wide range of substrates such as sugars, amino acids, vitamins, nucleotides, metals, inorganic ions, organic anions, oligopeptides, and drugs [10]. The SLC22 transporter family is among the most extensively researched SLC families in terms of drug handling [11], playing a central role in the transport of small molecule endogenous substances, drugs, and endotoxins across tissues and interfacial fluids. The SLC transporters involved in drug transport primarily comprise the organic anion transporting polypeptides (OATPs), organic anion transporters (OATs), organic cation transporters (OCTs), and oligopeptide transporters (PEPTs). Another SLC transporter, the multidrug and toxin extrusion transporter (MATE), is an efflux transporter. Transporters expressed in the intestine, liver, and kidney play a critical role in the drug absorption, distribution, metabolism, and excretion (ADME) process. These transporters play a crucial role in regulating drug concentrations in both blood and tissues. Oral medication is primarily absorbed in the gastrointestinal tract, and its bioavailability is influenced by both uptake and efflux transporters present in this region. PEPT1, a transport protein expressed on the brush border membrane of the intestine, facilitates the absorption and transportation of peptide-like anticancer drugs within the gut. Linking a drug with a dipeptide can improve its bioavailability in the human body [12]. P-gp is the most extensively studied efflux transporter and plays a crucial role in limiting the bioavailability of numerous orally administered drugs [13]. MRP2 and BCRP are also expressed in the intestinal tract, with known substrates including statins, methotrexate, and other compounds. Transporters play a significant role in drug tissue distribution and elimination, ultimately influencing drug selectivity. Within the blood-brain barrier (BBB), various transport proteins, including P-gp, BCRP, and OCTs, play crucial roles in the distribution of neuroactive drugs.
These transport proteins regulate the velocity and direction of drug transportation across the BBB. P-gp and BCRP can collaborate to facilitate the transportation of chemotherapy drugs [14].
Figure 1. Expression of ABC and SLC transporters with major roles in drug efficacy or toxicity in human intestinal epithelia, hepatocytes, kidney proximal tubule epithelia, brain capillary endothelial cells, and choroid plexus epithelial cells.
SLC family members such as OCT1, OAT2, and NTCP are responsible for drug uptake into liver cells [15], whereas transport proteins involved in drug hepatobiliary efflux include P-gp, MRP2, BSEP, and BCRP [16,17]. There are many transporters (OCT, OAT, OATP, PEPTs, etc.) expressed on renal tubular epithelial cells that participate in proximal tubular secretion and reabsorption processes. These transporters play a crucial role in transferring drugs or their metabolites into urine for excretion [18]. In summary, alterations in transporter function can affect the ADME process and consequently drug efficacy, with transporters playing a crucial role in pharmacokinetics.
Transporters also play a crucial role in DDI by modulating the disposition of drugs within the body. DDI occurs when a drug influences the action of another drug by inhibiting or inducing one or more processes. Transporter-mediated DDIs, particularly those involving transporters expressed in the intestine, liver, kidney, and BBB, have garnered significant attention. DDI is likely to occur when the co-administered drug is a substrate, inhibitor, or inducer of the transporter protein. Through machine learning techniques, we can find the substrate or inhibitor of the transporter in a more efficient way, which can help us to better understand drug interactions during transporter studies, which can be important for both drug development and basic medical research.
Machine Learning
Artificial intelligence (AI) is widely utilized, leveraging computational power to emulate human cognitive processes. Machine learning, a pivotal component of artificial intelligence, can be traced back to 1943 [19]. The term refers to the capability of software to accomplish a task by means of learning from data and has been widely employed in various domains, such as data integration and analysis [20,21]. Machine learning possesses the ability to identify complex patterns from vast and complex molecular descriptor datasets, making it particularly suitable for predicting transporter substrates and inhibitors. Depending on the type of data, such as whether sample labels are available, machine learning algorithms can be classified as supervised, semi-supervised, and unsupervised learning [22]. When machine learning is used to predict transporter substrates or inhibitors, it is often done through supervised learning, where models are built using labeled training data. Model building can include options such as decision trees, random forests, neural networks, support vector machines, logistic regression, k-nearest neighbors, and more. Each model has its own characteristics and suitable environments.
Decision Trees and Random Forests
Tree-based algorithms are very popular in machine learning and provide a method of classification and regression using decision trees [23]. Decision tree learning is a supervised learning technique based on the concept of recursive classification. In this method, classification models are represented as tree structures that start at a decision point and use a feature that can split the data. Each split is connected to a new decision point that contains more features to further separate the data. In addition to simple decision trees, there are newer ensemble methods, such as random forest (RF) and gradient boosting trees (e.g., XGBoost). Random forest builds multiple decision trees and combines their prediction results to improve prediction accuracy and prevent overfitting. Each decision tree is created using a random subsample of the features, not every feature [24].
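As an illustration of how a random forest could be applied to a substrate/non-substrate classification task, the sketch below uses scikit-learn on synthetic descriptor data; the feature matrix and labels are randomly generated stand-ins rather than real transporter data.

```python
# Illustrative random forest classifier on synthetic "molecular descriptor" data
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# 500 fake compounds, 20 fake descriptors, binary label (substrate vs. not)
X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

# Each tree sees a bootstrap sample and a random subset of features per split
clf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                             random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("top feature importances:", clf.feature_importances_[:5])
```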
Neural Network
Artificial neural networks (ANNs), deep neural networks (DNNs), and deep learning (DL) are also common algorithms in the field of machine learning. The concept is grounded in the architecture of the human brain and can be effectively applied to both regression and classification problems. An ANN model consists of units that combine multiple inputs and produce a single output; it includes an input layer, an output layer, and hidden layers between them, each consisting of multiple neurons in parallel. The existence of hidden layers makes it possible to distinguish categories in the input signal that are not linearly separable. A nonlinear activation function modifies the signal of the input node and outputs it to the next node, each output node corresponds to a task to be predicted, and finally the complex information is classified [25]. DNNs are artificial neural networks with multiple hidden layers and are considered deep learning algorithms with more complex networks and data volumes, so the problem of overfitting needs to be considered. There are several well-known variants of deep learning, such as convolutional neural networks, recurrent neural networks (RNNs), and so on [26].
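A minimal illustration of such a feed-forward network, using scikit-learn's MLPClassifier on synthetic data, is sketched below; the architecture and data are placeholders chosen only to show the workflow.

```python
# Illustrative feed-forward network (one hidden layer) for binary classification
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One hidden layer of 32 units with a nonlinear (ReLU) activation
net = MLPClassifier(hidden_layer_sizes=(32,), activation="relu",
                    max_iter=1000, random_state=0)
net.fit(X_tr, y_tr)
print("test accuracy:", net.score(X_te, y_te))
```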
Support Vector Machine
The support vector machine (SVM) is a machine learning method that maximizes the margin separating classes by means of support vectors; it is a classical nonlinear classification and regression modeling algorithm. A separating hyperplane is constructed in the feature space, the distance between the separating hyperplane and the nearest data vectors is defined as the margin of the hyperplane, and classification ability is maximized by selecting the hyperplane with the maximum margin. Constructing the optimal hyperplane requires support vectors and some training data [27,28]. Achieving the optimal separation requires the application of kernel functions, which can add additional dimensions so that the data become better separated in the higher-order space; this is also an advantage of SVM.
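The effect of the kernel choice can be illustrated with a small scikit-learn sketch on synthetic, non-linearly separable data; the dataset and parameter values are illustrative only.

```python
# Illustrative comparison of a linear and an RBF-kernel SVM on non-linear data
from sklearn.datasets import make_circles
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Two concentric rings: not separable by a straight line in the original space
X, y = make_circles(n_samples=400, noise=0.1, factor=0.4, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0)
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel:6s} kernel: mean CV accuracy = {score:.2f}")
```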
Naïve Bayes
Naïve Bayes (NB) uses Bayes' theorem to classify data under the assumption that each feature of a sample is uncorrelated (strongly independent) with the other features. Compared with other machine learning algorithms, the Bayesian algorithm is faster and simpler, since it only needs to consider each predictor variable in each class separately [29]; its accuracy is relatively low, so it performs better on less complex data.
k-Nearest Neighbor Algorithm
The k-nearest neighbor algorithm (k-NN) is a machine learning algorithm mainly used for classification and is widely used due to its simple and easy-to-understand design [30]. It classifies unlabeled data by assigning them to the most similar labeled category. The k nearest training data points (neighbors) are considered, and the final classification is determined according to the majority voting rule [31]. Factors such as the value of k, the distance calculation, and appropriate predictor variable selection can all have a significant impact on model performance [32].
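A short scikit-learn sketch on synthetic data illustrates how the choice of k affects performance; the data are placeholders.

```python
# Illustrative k-NN: the choice of k changes cross-validated accuracy
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
for k in (1, 5, 15):
    knn = KNeighborsClassifier(n_neighbors=k)   # majority vote over k neighbors
    acc = cross_val_score(knn, X, y, cv=5).mean()
    print(f"k={k:2d}: mean CV accuracy = {acc:.2f}")
```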
The general procedure for identifying new substrates/inhibitors of drug transporters through machine learning techniques is outlined as follows: (1) a database of known compounds that are substrates/inhibitors is built as the dataset; (2) the chemical information of each compound is analyzed, extracted, and converted into a form that can be recognized by the algorithm; (3) the constructed dataset is split into a training set and a validation set, where machine learning methods are employed to learn from the training set and develop the model, and the validation set is used to test and refine the newly created model; and (4) unknown compounds are predicted and verified.
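A hedged sketch of this four-step procedure is shown below, assuming RDKit is available for descriptor calculation; the SMILES strings, substrate labels, and model choice are placeholders rather than a validated transporter dataset.

```python
# Sketch of the substrate/inhibitor prediction workflow (placeholder data).
# Steps: (1) labeled compounds, (2) descriptors, (3) train/validate, (4) predict.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles):
    """Convert a SMILES string into a small vector of physicochemical descriptors."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol)]

# (1) toy dataset: SMILES with made-up substrate (1) / non-substrate (0) labels
data = [("CCO", 0), ("c1ccccc1O", 0), ("CC(=O)Oc1ccccc1C(=O)O", 1),
        ("CCN(CC)CC", 0), ("CN1CCC[C@H]1c1cccnc1", 1),
        ("O=C(O)CC(O)(CC(=O)O)C(=O)O", 0)]
X = np.array([featurize(s) for s, _ in data])     # (2) descriptor matrix
y = np.array([label for _, label in data])

# (3) split, train, and validate (a real study would use far more compounds)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.33, random_state=1)
model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
print("validation accuracy:", model.score(X_va, y_va))

# (4) predict an unseen compound (caffeine, label unknown)
print("prediction:", model.predict([featurize("Cn1cnc2c1c(=O)n(C)c(=O)n2C")]))
```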
ABC Transporters
Many human ABC proteins are efflux transporters, including P-gp (ABCB1), MRPs (ABCC), and BCRP (ABCG2), and function as efflux pumps that actively extrude compounds such as drugs from the cell. The classical ABC transporter is structurally composed of four domains: two transmembrane domains (TMDs) and two cytoplasmic nucleotide-binding domains (NBDs) [33]. Transporter proteins associated with MDR belong to the ABC transporter superfamily, which is one of the major barriers to cancer therapy and affects drug accumulation in cancer cells [34]. Among these transporters, P-gp is considered to be the major contributor to cellular multidrug resistance. The tissue distribution and cellular localization of transporters influence drug efficacy and toxicity. Therefore, it is essential to study the efflux transport of drugs and identify the substrates of efflux transporters. Additionally, exploring efflux transporter inhibitors represents a promising research direction for addressing drug resistance. The machine learning methods used by the researchers are listed in Table 1. The ABCB1 transporter, also known as P-gp, belongs to the ABCB subfamily. It was first identified in Chinese hamster ovary cells in 1976 [55]. With the introduction of the concept of the ABC transporter family [56], research on P-gp gradually increased. Two genes encode P-gp: MDR1 and MDR1A/1B. P-gp is mainly distributed in the human small intestine, colon, liver, kidney, brain, and other tissues and organs, as well as in barrier tissues such as the blood-brain barrier, blood-testis barrier, and placental barrier; it is also expressed in the lung, heart, and spleen [57]. P-gp functions as an efflux transporter that moves endogenous substances, exogenous substances (drugs and their metabolites), and toxins out of cells. Therefore, in normal tissues, P-gp-mediated efflux transport helps to reduce toxicity and protect cells, but at the same time it limits the absorption of drugs and reduces bioavailability [58]. P-gp is highly expressed on the membrane of many tumor cells, which is directly related to the multidrug resistance of tumor cells. Not only anticancer drugs but also HIV protease inhibitors and immunosuppressants are substrates of P-gp [59]. Therefore, drugs that inhibit P-gp are anticipated to elevate the intracellular concentration of chemotherapeutic agents and enhance their sensitivity. P-gp is the earliest discovered transporter [60] and has been studied for about 30 years. Therefore, there is a large amount of accumulated data on P-gp, and most early machine learning studies were carried out around P-gp. With the development of computer technology, machine learning prediction models of P-gp have been continually improved.
In 2019, Kadioglu et al. [35] established a prediction platform for P-gp modulators using machine learning methods (including k-NN, neural networks, RF, and SVM). They used defined chemical descriptors to predict whether test compounds can act as substrates or inhibitors of P-gp. It is noteworthy that they also validated the results using molecular docking in terms of binding energy and docking poses. The RF classification algorithm performed better than other algorithms in feature selection. In 2020, Esposito et al. [36] combined machine learning with MD simulations using the MDFP/ML approach, using molecular dynamics fingerprints (MDFPs) as orthogonal descriptors to distinguish and predict substrates and non-substrates of P-gp. The study used four different ML methods, namely, RF, GTB, SVM, and meta-learner. When the model was validated with an external validation set, it was found that only models trained on MDFPs or attribute-based descriptors could be applied to chemical space areas not covered by the training set. Despite P-gp being a well-known entity for over three decades, the lack of improved selective inhibitors targeting this protein can be attributed to its specificity and unknown structural characteristics.
BCRP
BCRP, a member of the G subfamily of the ABC family, was first identified in the multidrug resistant human breast cancer cell line MCF-7/AdrVp [61]. BCRP is widely expressed and distributed in several normal tissues, such as the small intestine, liver, brain endothelium, and placenta [62]. It can confer resistance by pumping chemotherapy drugs out of cells. In the past decade, the research of machine learning in BCRP has developed rapidly.
Hazai et al. [44] developed an SVM prediction model of BCRP substrates based on the known substrates and non-substrates of BCRP in 2013. For model verification, a training/testing set ratio of 0.75/0.25 was chosen, and the overall prediction accuracy for the independent external validation dataset was 40%. Moreover, the prediction accuracy for wild-type BCRP substrates was higher than that for non-substrates, with a rate of 76%. The molecular descriptors used indicated that the 3D structure of the substrate is a possible determinant of the BCRP-substrate interaction. In 2014, Ding et al. [45] developed an accurate, fast, and robust pharmacophore ensemble/support vector machine (PhE/SVM) model to predict the BCRP inhibition of structurally diverse molecules. Due to the confounding nature of BCRP, this method does not produce significant bias when applied to various structurally diverse inhibitors. In 2016, Montanari et al. [41] integrated data using KNIME workflows to build a multi-label classification model of BCRP/P-gp inhibitory activity using a machine learning approach. Key molecular features affecting transporter selectivity were retrieved by comparing various multi-label learning algorithms. The KNIME workflow is an effective solution for merging data from multiple sources and constructing multi-label datasets that are tailored for BCRP and/or P-gp. Using the dataset created through the KNIME workflow, it was possible to distinguish between selective BCRP inhibitors and selective P-gp inhibitors by examining only two features: the count of hydrophobic and aromatic atoms, and the shared characteristics between dual and selective inhibitors. In 2017, Gantner et al. [42] were the first to combine computer predictions of BCRP with experimental validation to develop nonlinear computer models of BCRP substrates. The J48 decision tree induction algorithm, the implementation of the C4.5 decision tree algorithm in WEKA3.651, was used to obtain the corresponding nonlinear classification model, and a genetic algorithm (GA) was used to select the best descriptors. The selected non-substrate compounds were experimentally validated using an everted rat intestinal sac model, which demonstrated the predictive power of the model. The rfSA technique is a feature selection approach that uses both the simulated annealing (SA) algorithm and RF to eliminate redundant and irrelevant features. In 2020, Jiang et al. [43] used XGBoost and DNN methods for the prediction of BCRP inhibitors for the first time and obtained good prediction results. A diverse set of 1098 BCRP inhibitors and 1701 non-inhibitors was compiled as a dataset, and the molecular descriptors linked to BCRP inhibition were explored. It was found that one of the characteristics of BCRP inhibitors was high hydrophobicity and aromaticity. Seven ML methods (DNN, SGB, XGBoost, NB, weighted k-NN, RLR, and SVM) were used to develop the classification model. The Bayesian optimization algorithm was used to optimize the hyperparameters. The results showed that the SVM, XGBoost, and DNN methods were superior to the other methods, and SVM had the best prediction ability. Analysis of the misclassified compounds revealed that most of them had complex structures and may not be accurately characterized by traditional descriptors. In 2021, Ganguly et al.
[40] used a Bayesian machine learning model to predict the metabolites most likely to be BCRP or P-gp substrates in the CSF and plasma of dKO rats, demonstrating that CSF may be better suited for identifying endogenous substrates of BCRP and P-gp.
BCRP has been shown to have a role in the permeability of the blood-brain barrier, contributing to the failure of most CNS-acting drugs in clinical trials [63]. In 2014, Garg et al. [46] developed machine learning models to evaluate the effect of BCRP on the BBB: an artificial neural network model to predict the BBB permeability of molecules, and an SVM model to predict substrates of BCRP. Through molecular docking analysis, 11 molecules were identified as meeting the criteria for BBB penetration. Additionally, these compounds were predicted to be BCRP substrates relevant to BBB permeability.
MRPs
MRPs are active transporters of the ABC family and are widely distributed in the lung, kidney, brain, and other organs. MRPs contain many isoforms, among which MRP1, MRP2, and MRP4 are highly expressed in tumor cells and mediate the efflux of a variety of anti-tumor drugs, leading to the occurrence of multidrug resistance. Therefore, the study of MRPs is highly significant in combating multidrug resistance in tumors. Recently developed machine learning methods for predicting substrates or inhibitors of MRPs have demonstrated remarkable accuracy.
In 2017, Lingineni et al. [48] established an SVM model for MRP1 substrate classification based on previous studies [46]; the accuracy of the best MRP1 substrate model on the training set, test set, and external validation set was 87.39%, 93.54%, and 80%, respectively. The BBB permeability artificial neural network model and molecular docking analysis demonstrated that MRP1 plays an important role in the transport of substances across the BBB.
Kharangarh et al. [47] used k-NN, RF, SVM, and other machine learning methods to train classification models of MRP2 inhibitors and non-inhibitors using compounds from the Metrabase database and obtained different descriptor sets through four methods: variance threshold, SelectKBest, RF, and RFE. The k-NN, RF, and SVM methods were used to train the machine learning models. Five-fold cross-validation and analysis of the relevant parameters showed that the SVM model constructed from the features selected by the RFE method performed well, and the key descriptors for developing MRP2 inhibitor models were obtained by RStudio analysis, which could determine the inhibitory properties of compounds towards the MRP2 protein in the early stages of drug discovery. The prediction of MRP substrates can reduce the failure rate of preclinical drug studies, and the prediction of inhibitors can help the study of MDR; both can be used in the early stages of drug discovery.
BSEP
BSEP is a kind of ABC transporter encoded by the ABCB11 gene. It is located in the duct membrane of hepatocytes and is responsible for transporting bile acids and bile salts from hepatocytes to bile tubules [64]. Inhibition of BSEP can cause the toxic accumulation of bile salts in cells, triggering cholestatic liver injury and ultimately leading to the premature termination of preclinical development and clinical trials of drug candidates. Machine learning prediction of BSEP has also been studied in recent years. In 2021, McLoughlin et al. [49] developed a model for predicting and classifying BSEP inhibitors. They utilized the Automated Data-Driven Modeling Pipeline (AMPL) to train and assess over 15,500 classification and regression models. The optimal combination of model types, dataset segmentation strategies, chemical characterization methods, and model parameters is determined by testing various configurations using the AMPL's hyperparameter search function. The best performing model for this purpose was finally found to be the RF model, which included MOE descriptor features.
SLC Transporters
More than 400 transporters have been identified, and the SLC22 transporter family is one of the best studied SLC families for drug handling, with a central role for small molecule endogenous substances, drugs, and endotoxins that move between tissues and interfacial fluids. The kidney expresses high levels of OCT2 and OAT1, which are crucial for the renal uptake of clinical drugs and endogenous substances. OAT1, OAT3, OCT1, and OCT2 are widely studied drug transporters. OAT2 is mainly expressed in hepatocytes and involved in the transport of small molecule anionic drugs into hepatocytes. OAT1 and OAT3 are mainly expressed in renal cells and regulate the transport of organic anions from the blood into proximal tubular cells. OCT1 is mainly expressed in the hepatic sinusoidal membrane and mediates the transport of drugs and endogenous substances. OCT2 and OCT3 are involved in the renal and biliary excretion of cationic drugs, respectively. In 2016, Ose et al. [39] developed a model for predicting drug transporter substrates based on the SVM method and established a database of seven classes of transporter substrates (OATP1B1/1B3, OAT1/3, OCT1/2, MRP2/3/4, MDR1, BCRP, and MATE1/2-K). Physicochemical parameters were used as the basic descriptors. This model has the potential to accurately predict transporter substrates without the need for in vitro transport assays. In 2017, Shaikh [37] performed multi-transporter modeling and developed substrate prediction models for transporters using quantitative structure-activity relationship (QSAR) and proteochemometric (PCM) methods. After evaluating the established models, the top-performing model was merged with other models to create a heterogeneous integrated model for each transporter. This analysis involved 6 efflux transporters, 7 uptake transporters, and 4575 substrate/non-substrate data points. In 2020, Nigam et al. [50] combined machine learning, chemoinformatics, and multi-specific drug transporter knockout metabolomics to analyze the unique metabolites accumulated in the plasma of OAT1 and OAT3 knockout mice and define the molecular properties of endogenous ligands. Finally, seven key molecular descriptors were obtained. The RF classification model based on the metabolite dataset correctly classified ≥ 75% of the drugs known to interact with OAT1/3. This helps with understanding the physiological role of drug transporters, metabolite-based drug design, and the analysis of drug-metabolite interactions. This chemoinformatics-machine learning approach was subsequently used to analyze OATP- and OAT-transported drugs by Nigam et al. [51]. The results showed that liver OATPs prefer highly hydrophobic, complex, and more ring-like drugs as substrates, whereas kidney OATs prefer more polar drugs. This provides a molecular basis for tissue-specific targeting strategies, drug interactions, and drug delivery to minimize toxicity in liver and kidney diseases. In 2021, Jensen et al. [54] used machine learning methods to predict substrates for OCT1. A database containing more than 1000 substances was screened virtually, and 19 substances were tested in vitro. This study demonstrates that machine learning methods can accurately predict substrates of OCT1, even in the absence of its crystal structure.
The norepinephrine transporter (NET/SLC6A2) is also a member of the SLC family and is better studied than many other SLCs. NET regulates NE-mediated physiological effects by terminating noradrenergic signaling through the uptake of norepinephrine into presynaptic terminals. In 2023, Bongers et al. [65] developed a technique for the identification of NET inhibitors that combines machine learning methods, using RF, GBT, and PLS algorithms to build a model, with virtual screening and experimental validation, and finally identified five novel NET inhibitors. This method incorporates the chemical space of the ligands and utilizes a similarity-based network to select related proteins for the modeling of NETs.
Obtaining data is sometimes limited by privacy issues, and collecting and sorting data also requires a lot of time and effort. After obtaining data, it is also crucial to select chemical descriptors that allow high-quality models to be established for different transporters. To make it possible to build predictive models in a quick and easy way, Smajic et al. [38] worked with Jupyter Notebooks in 2022 to create a framework that can create or retrain ML models in a semi-automatic manner. It allows classification models of six transporters (BCRP, BSEP, OATP1B1, OATP1B3, MRP3, and P-gp) to be generated, and the created models can be updated and shared using Jupyter Notebooks. This is a valuable tool for predicting new data on ABC and SLC transporters. Table 2 shows the sources of the datasets used by the researchers to build their machine learning models.
Conclusions and Future Prospects
Advances in information technology have created new methods for progress in many fields, including pharmaceutical research. Cost and efficiency are also major challenges in the drug discovery process. Drug resistance is one of the main reasons for the failure of chemotherapeutic drugs, so predicting whether a chemotherapeutic drug is a substrate of a transporter is essential; likewise, the alteration of drug efficacy and toxicity caused by DDIs can be anticipated through the identification and prediction of transporter substrates and inhibitors. Nowadays, computerized classification models of various transporter substrates and inhibitors have been established to save experimental resources. They have been instrumental in overcoming drug resistance, discovering DDIs, and achieving drug targeting.
The quality of the data has a significant impact on the performance of the models. In the initial database establishment stage, in addition to obtaining data from public databases, researchers often need to manually collect and collate data from different studies or use unpublished data from their own laboratories, which may affect the reliability of the prediction results. Therefore, the selection of descriptors and the validation of the model are also important steps to ensure the accuracy of the prediction results.
In this paper, we have reviewed the machine learning technique-based approach to study different transporters. The use of a single source of data and construction of models to understand the role of drugs and transporter proteins is not sufficiently accurate. Classification efficiency and higher predictive accuracy of machine learning models depend on comprehensive and reliable data and trade-offs between individual machine learning approaches. In addition, further validation is needed for transporter substrates and inhibitors predicted through machine learning. In general, machine learning provides a highly useful tool for studying transporters, improving research efficiency, and allowing us to focus on compounds with higher potential.
Author Contributions: Conceptualization, K.L., Y.Z. and S.Y.; methodology, X.K. and K.L.; formal analysis, X.K. and X.Z.; writing-original draft preparation, K.L., G.W. and X.T.; writing-review and editing, D.D. and L.L.; supervision, S.Y. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. | 7,247.8 | 2023-08-01T00:00:00.000 | [
"Medicine",
"Computer Science",
"Biology"
] |
Ethical dilemma arises from optimizing interventions for epidemics in heterogeneous populations
Interventions to mitigate the spread of infectious diseases, while succeeding in their goal, have economic and social costs associated with them. These limit the duration and intensity of the interventions. We study a class of interventions which reduce the reproduction number and find the optimal strength of the intervention which minimizes the final epidemic size for an immunity inducing infection. The intervention works by eliminating the overshoot part of an epidemic, and avoids a second wave of infections. We extend the framework by considering a heterogeneous population and find that the optimal intervention can pose an ethical dilemma for decision and policymakers. This ethical dilemma is shown to be analogous to the trolley problem. We apply this optimization strategy to real-world contact data and case fatality rates from three pandemics to underline the importance of this ethical dilemma in real-world scenarios.
Introduction
Infectious disease epidemics have been suppressed and mitigated using a combination of non-pharmaceutical interventions (NPIs), such as lockdowns, social distancing, mask wearing and contact tracing, and pharmaceutical interventions such as immunizing the population using vaccines. In the absence of vaccines, NPIs are the primary option. However, NPIs, and in particular lockdowns, can have significant economic, mental health and social costs associated with them. Instead of protracted or repeated lockdowns (as observed during the COVID-19 pandemic), a one-shot intervention has been suggested as a possible alternative for diseases that induce immunity upon recovery from infection. An intense but short-duration lockdown is imposed near the peak of the epidemic to stop transmission during the overshoot phase of the epidemic and reduce the final size (total number of infections) to the herd immunity threshold of the epidemic (the number immune in the population required to stop the growth in infections) [1]. The overshoot phase is when the number of active infections starts to decline (the effective reproduction number is less than one), but a significant number of new infections are still created. The overshoot is the difference between the final size and the herd immunity threshold. Such interventions therefore reduce the overshoot to zero (see the Glossary in the electronic supplementary material for detailed definitions of technical terms).
In this work, we explore an alternative strategy to achieve the same outcome through a prolonged but weaker intervention instead of a short and intense one. Such an intervention, if implemented early, has the added benefit of reducing and delaying the peak of the epidemic as well, in contrast to the one-shot intervention [1]. As with the one-shot intervention, the rationale of this strategy is to calibrate the intervention in such a manner that the final size of the mitigated epidemic is identical to the herd immunity threshold of the original epidemic. Therefore, when the intervention ends, there is no risk of further introductions developing into future epidemics or a second wave of infections. We show that this strategy is optimal for minimizing the final size in the long term.
In the context of COVID-19 modelling, research on optimal interventions has attempted to include economic costs along with the objective of reducing infections: using detailed agent-based models [2] and fine-tuned intervention strategies [3,4], a balance is sought between socio-economic and health costs to minimize the total cost [5], or the claim that interventions reduce the economic well-being of a society has been challenged [6,7]. Optimal interventions have also been studied as resource allocation problems where a limited stockpile of vaccine is available or a limited 'amount' of social distancing is acceptable and the objective is to find the distribution of the intervention that minimizes the reproduction number or a health-related objective function [8][9][10].
We do not include economic costs in an explicit manner in our model. The amount of reduction in R_0 can be interpreted as the cost: the higher the reduction in R_0, the higher the social and economic cost of the intervention. The calculations involved in finding the optimal strategy mainly rely on knowledge of the basic reproduction number (or the next generation matrix). We show that in populations with transmission heterogeneity, implementing an optimal intervention to minimize the final size could involve a moral/ethical dilemma for decision-makers, which is analogous to the commonly known trolley problem [11,12]. The dilemma arises as a result of transmission heterogeneity in the population. We performed a literature search with relevant keywords and were unable to find any research that examined NPIs with an ethical dilemma (see electronic supplementary material for keywords). A pre-print, Ragonnet et al. [13], had a similar approach in that they optimized synthetic contact matrices from various European countries to minimize deaths or years of life lost by achieving herd immunity for the COVID-19 epidemics. Their model and intervention scenarios are quite complex: a transmission model with six stages of infection, waning immunity and the duration of intervention. They find through numerical methods that increasing transmission in younger age groups is required to minimize the years of life lost for the COVID-19 epidemics. Another article, Babajanyan & Cheong [14], also used this strategy of achieving herd immunity in a model with susceptible → infected → recovered (SIR) disease and resource growth dynamics in the context of COVID-19. That study did not explore any strategies in which transmission is increased. Our work differs from the above-mentioned studies in that we explore, in detail, strategies that increase transmission and discuss the ethical dilemma. The model we use has the minimal complexity required to explore the underlying mechanisms of this ethical dilemma for a wide range of basic reproduction numbers and to show the impact of heterogeneity and population structure on epidemics and interventions.
In the following sections, we explain the modelling framework, present the results of our analysis, and conclude with a discussion of our modelling assumptions and the ethical dilemma that decision-makers could face.
Methods
We use deterministic SIR and SIR-like models to study the optimal intervention. In §§2.1 and 2.2, we explain the models used for a homogeneous population and for a heterogeneous population, respectively, in addition to describing the calculations for finding the optimal intervention. In §2.3, we describe how this optimization strategy is applied to real-world data, and in §2.4 we explain how an optimal intervention can be found if there is a delay in the start of the intervention.
Homogeneous population
We use an SIR model with the variables s, i and r to represent the fractions of individuals in the total population who are susceptible, infected and recovered, respectively [15,16].The population is assumed to be closed (no entry/exit) and it is normalized such that s + i + r = 1.
In this case, the final size of the epidemic is completely determined by the basic reproduction number R_0 and can be obtained using the following equation [15,16]:

ln(s(t_2)/s(t_1)) = −R_0 [r(t_2) − r(t_1)],   (2.1)

where s(t_1) and s(t_2) are the fractions of susceptible and r(t_1) and r(t_2) are the fractions of recovered individuals in the population at the time instants t_1 and t_2. Using the conditions i(t_1) ≈ 0, r(t_1) = 0 and i(t_2) ≈ 0, which describe the population at the start and end of an epidemic, the well-known final size relation can be obtained [15][16][17][18]:

r(∞) = 1 − e^(−R_0 r(∞)).   (2.2)

An intervention that reduces transmission changes the basic reproduction number as R_0 → R_0(1 − c), where 0 ≤ c ≤ 1. In the case of a homogeneous population, herd immunity is achieved when the fraction of susceptible individuals in the population is less than 1/R_0. Therefore, we substitute s(t_2) = 1/R_0 and s(t_1) = 1 and solve for c. We find the optimal reduction in the basic reproduction number to be

c* = 1 − ln(R_0)/(R_0 − 1).   (2.3)

We verify this analytical result in §3.1 by simulating an epidemic where R_0 is changed to R_0(1 − c) in the early stage of the epidemic and the intervention is switched off once the active infections, i(t), decline to a negligible number.
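The calculation above can be illustrated numerically. The sketch below, which is not the authors' code, evaluates the optimal reduction c* from equation (2.3) for an assumed R_0 and checks with a standard SIR simulation that the resulting final size equals the herd immunity threshold 1 − 1/R_0. The parameter values are illustrative.

```python
# Minimal sketch: optimal reduction c* from equation (2.3) plus a numerical
# check with a standard SIR model.
import numpy as np
from scipy.integrate import solve_ivp

R0, gamma = 1.5, 1.0
c_star = 1.0 - np.log(R0) / (R0 - 1.0)          # optimal reduction in R0

def sir(t, y, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

beta_mitigated = R0 * (1.0 - c_star) * gamma
sol = solve_ivp(sir, (0, 2000), [1 - 1e-6, 1e-6, 0.0],
                args=(beta_mitigated, gamma), rtol=1e-8, atol=1e-10)

print(f"c* = {c_star:.3f}")
print(f"final size (simulated)  = {sol.y[2, -1]:.3f}")
print(f"herd immunity threshold = {1 - 1/R0:.3f}")
```

For R_0 = 1.5 both printed values come out near 0.333, the herd immunity threshold.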
Heterogeneous population
In a heterogeneous population, individuals may be further stratified into groups. To represent the fractions of individuals in the total population who belong to a group k, we use the variables s_k, i_k and r_k such that s_k + i_k + r_k = n_k, where n_k is the proportion of the population who belong to group k and Σ_k n_k = 1. Heterogeneity in transmission characteristics can affect the behaviour of epidemics in a significant manner: epidemics in populations with different transmission structures but identical reproduction numbers can have widely different final sizes. An epidemic in a heterogeneous population can be described by the following SIR-like model, assuming an identical duration of infection for all groups and measuring time in units of the average infection duration:

ds_k/dt = −s_k Σ_l B_kl i_l,
di_k/dt = s_k Σ_l B_kl i_l − i_k,
dr_k/dt = i_k.

The term G_kl = n_k B_kl is the expected number of infections that would be caused in a fully susceptible group k by an infected individual in group l. The dominant eigenvalue of G gives the reproduction number of the system [19].
The epidemic sizes for this model are given by the relation [17,18]

r_k(∞) = n_k (1 − e^(−Σ_l B_kl r_l(∞))).   (2.8)

It should be noted that we normalize the final size for the heterogeneous population so that it is the number of infections in a group expressed as a fraction of the total population. The recipe for optimization is similar to the homogeneous case. In the heterogeneous case, we find the level set where the reproduction number is equal to one (analytically in the case of two groups and numerically for more groups), which is the infinite set of values of s_k that achieve the herd immunity threshold. Then, we optimize subject to this constraint to find the values s*_k that minimize the cost function (the final size or a weighted sum of the final sizes of each group). From this, we obtain r*_k = n_k − s*_k. Finding the level set requires finding the proportions of susceptibles of each group, s_k, which ensure that the reproduction number (the dominant eigenvalue of G) is equal to one. In equation (2.8), the final sizes r_k(∞) can be replaced by the optimal final sizes r*_k to find the optimal contact matrix. Comparing the original contact matrix with the optimal one tells us how the contact structure of a population must be changed in order to obtain the optimal outcome.
A crucial point to note here is that, unlike in the homogeneous population, it is possible for certain elements of B and for certain final sizes to increase, r*_k > r_k(∞), in order to minimize the cost function. In other words, the optimal intervention corresponds to an increase in transmission within certain groups or among pairs of groups. In such cases, the change in the reproduction number cannot be a measure of the economic or social cost. Nonetheless, this leads to some interesting results which are presented in the next section.
A weighted cost function which is a weighted sum of the final sizes in each group is useful when we are interested in minimizing a certain outcome of infections rather than the number of infections, for example, deaths or hospitalizations.The optimization problem of finding the state of the population which minimizes a general objective function and fulfils the herd immunity condition can be solved semi-analytically for the case of two groups and is presented in the electronic supplementary material.For more than two groups, we solve the optimization problem numerically.
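As an illustration of the optimization recipe described above, the following sketch solves the constrained problem numerically for a two-group population: it minimizes a weighted final size subject to the herd-immunity constraint that the dominant eigenvalue of the effective next generation matrix equals one. The next generation matrix G, group sizes n and weights w are assumed values for illustration, not data from the paper.

```python
# Hedged sketch of the heterogeneous optimization: minimize a weighted final
# size subject to the herd-immunity constraint (dominant eigenvalue of the
# effective next generation matrix <= 1, which is active at the optimum).
import numpy as np
from scipy.optimize import minimize

n = np.array([0.5, 0.5])                 # group sizes (fractions of population)
G = np.array([[1.8, 0.4],                # next generation matrix (assumed)
              [0.4, 1.2]])
w = np.array([1.0, 1.0])                 # cost-function weights per group

def rho_eff(s):
    """Dominant eigenvalue of the effective NGM when s_k are still susceptible."""
    G_eff = (s / n)[:, None] * G         # row k of G scaled by s_k / n_k
    return np.max(np.abs(np.linalg.eigvals(G_eff)))

cost = lambda s: np.sum(w * (n - s))     # weighted final size, r_k = n_k - s_k
constraints = [{"type": "ineq", "fun": lambda s: 1.0 - rho_eff(s)}]

res = minimize(cost, x0=0.5 * n, bounds=list(zip(np.zeros(2), n)),
               constraints=constraints, method="SLSQP")
s_opt = res.x
print("optimal remaining susceptibles s*_k:", np.round(s_opt, 3))
print("optimal final sizes r*_k = n_k - s*_k:", np.round(n - s_opt, 3))
```

Changing the weights w reweights the cost function in the manner described above, which can remove or reintroduce the dilemma for a given G.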
A schematic diagram of the optimization procedure for both homogeneous and heterogeneous populations is shown in electronic supplementary material, figure S6.
Real-world contact matrix
We used a contact matrix calculated using surveys from a sample population stratified into six age groups in the Netherlands [20]. The contact matrix scaled by a disease-specific parameter gives the next generation matrix. Using the next generation matrix and the age distribution, the optimal intervention for a given cost function can be obtained. We calculated the optimal intervention using this contact matrix for a range of R_0 values and four different cost function weightings (an unbiased cost function and three from observed case fatality rates (CFRs) of the 2009 pandemic in Mexico, the 1918 pandemic in the USA and the COVID-19 pandemic) [21][22][23]. The age groups, their population sizes and CFRs are shown in table 1. Note that the age stratification used in the CFR study for the 1918 pandemic does not match exactly the age groups of the contact matrix, and furthermore the estimates were extracted from figures. For the 2009 pandemic and COVID-19, the CFR was reported with a high age resolution, but the size of the age groups was not immediately available. Due to the lack of data on infection fatality rates, we are using the CFRs as a proxy for the probability that an infected individual dies. Thus the CFR values and the results relying on them are meant for illustration purposes only.
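The scaling step described above can be written compactly: choose the disease-specific factor so that the dominant eigenvalue of the scaled contact matrix equals the chosen R_0. The 2×2 contact matrix below is a placeholder, not the six-group Dutch survey matrix used in the paper.

```python
# Illustrative sketch: scale a survey contact matrix so that its dominant
# eigenvalue equals a target R0; the result is the next generation matrix.
import numpy as np

C = np.array([[12.0, 3.0],     # mean contacts per person (assumed survey values)
              [ 3.0, 8.0]])
R0_target = 1.5

q = R0_target / np.max(np.abs(np.linalg.eigvals(C)))   # disease-specific factor
G = q * C                                               # next generation matrix
print("dominant eigenvalue of G:", np.max(np.abs(np.linalg.eigvals(G))))
```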
The severity of the dilemma in the optimal intervention can be quantified through the number of infections (or deaths) caused due to the intervention per infection (or death) prevented. It can be calculated using

severity of dilemma (for infections) = (sum of all increases in final sizes) / (sum of all decreases in final sizes)   (2.9)

and

severity of dilemma (for deaths) = (sum of all increases in deaths) / (sum of all decreases in deaths).   (2.10)
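A small helper for equations (2.9) and (2.10) might look as follows; the per-group final sizes in the example are made-up numbers used only to show the calculation.

```python
# Minimal sketch of equations (2.9)-(2.10): total increase across groups
# divided by total decrease, for final sizes or deaths.
import numpy as np

def severity(baseline, optimal):
    """Infections (or deaths) caused per infection (or death) prevented."""
    change = optimal - baseline
    increases = change[change > 0].sum()
    decreases = -change[change < 0].sum()
    return increases / decreases

r_no_intervention = np.array([0.30, 0.25, 0.20])   # final sizes, no intervention (assumed)
r_optimal         = np.array([0.35, 0.10, 0.05])   # final sizes, optimal intervention (assumed)
print(f"severity of dilemma = {severity(r_no_intervention, r_optimal):.2f}")
```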
Delayed intervention in a homogeneous population
In the above sections, we have assumed that the basic reproduction number (or the next generation matrix) is a known entity and therefore that an intervention is implemented right at the start of the epidemic. Since calculating the strength of the optimal intervention requires knowledge of the reproduction number, the intervention would in practice have to start after the epidemic has become established and enough observational data have been collected to estimate the reproduction number. While the basic principle still holds, a delay changes the strength of the optimal intervention. To find the optimal strength for a delayed intervention, we use the final size relation with the final state s = 1/R_0, i = 0 and an arbitrary initial state s_L, i_L at the time instant t_L when the intervention begins. We replace the basic reproduction number in equation (2.1) with R_0(1 − c) and solve for

c = 1 − ln(R_0 s_L) / (R_0 (s_L + i_L) − 1).   (2.11)

Using a numerical solution of the SIR equations, s_L and i_L can be found and the above equation gives c. Equation (2.11) reduces to equation (2.3) when s_L = 1 and i_L = 0, and gives c = 1 when s_L = 1/R_0 and i_L > 0. If s_L < 1/R_0, then c > 1, which is biologically meaningless and reflects the fact that the population is already below the herd immunity threshold.
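A numerical sketch of this delayed-intervention calculation is given below: the unmitigated SIR model is integrated up to the delay time t_L, the values s_L and i_L are read off, and equation (2.11) gives the required strength c. All parameter values are illustrative assumptions.

```python
# Hedged sketch of the delayed-intervention calculation of section 2.4.
import numpy as np
from scipy.integrate import solve_ivp

R0, gamma, t_L = 1.5, 1.0, 20.0

def sir(t, y):
    s, i, r = y
    return [-R0 * gamma * s * i, R0 * gamma * s * i - gamma * i, gamma * i]

# run the unmitigated epidemic up to the start of the intervention
sol = solve_ivp(sir, (0.0, t_L), [1 - 1e-6, 1e-6, 0.0], rtol=1e-8, atol=1e-10)
s_L, i_L = sol.y[0, -1], sol.y[1, -1]

# optimal reduction from equation (2.11)
c = 1.0 - np.log(R0 * s_L) / (R0 * (s_L + i_L) - 1.0)
print(f"s_L = {s_L:.4f}, i_L = {i_L:.4f}, optimal c = {c:.3f}")
```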
Model assumptions
The homogeneous model assumes that all individuals in the population are identical and that every individual is equally likely to come in contact with every other individual. To introduce some complexity into this model, we use the heterogeneous model, in which individuals are stratified into homogeneous groups. We are using deterministic differential equation models with continuous variables to simulate the dynamics. This means that the number of active infections can decay exponentially but can never reach zero. Thus, the models used here cannot simulate a scenario in which an intervention eliminates a disease before the herd immunity threshold is reached, as was the case in Australia, New Zealand, Hong Kong, mainland China, Singapore and several other jurisdictions (broadly known as the zero-COVID strategy). Throughout the paper, we use the SIR disease progression. Therefore, our analysis applies to diseases that induce long-term immunity or for which re-infection is not possible.
Homogeneous population
Simulation of the SIR model differential equations confirms our assertion in equation (2.3). As shown in figure 1, a 'weak' intervention reduces the final size but does not reduce the overshoot to zero. A strong intervention, on the other hand, reduces the final size during the intervention, but a resurgence occurs as soon as the intervention ends. The final health outcome under the strong intervention is worse than (or at least comparable to) the weak intervention, while incurring a higher social and economic cost during the intervention. The resurgence occurs because the small number of infections and the sufficient number of susceptibles remaining in the population lead to new infections after the intervention is lifted. An intervention that is strong enough to minimize the final number of infections, while avoiding a resurgence, is one whose final size (during the intervention) matches the herd immunity threshold of the unmitigated epidemic. This is the optimal intervention.
Heterogeneous population
Introducing heterogeneity in the model opens up a space of interventions that is not seen in the homogeneous case. In the homogeneous case, the herd immunity threshold is defined by a single point, but in the case of a structured population, the threshold is given by a collection of points. This can be seen by considering the following: the condition required for reaching herd immunity is that the typical infected individual must not infect more than one individual. In the homogeneous case, one can randomly choose a sufficient number of individuals and immunize them to ensure that the number of infectious contacts is less than one. If the population is structured, the typical infected individual must not infect more than one individual, on average. As long as the average number of infectious contacts is less than one, herd immunity is achieved, irrespective of how the immunization has been distributed among the various groups in the population. Thus there are infinitely many interventions that lead to herd immunity and prevent resurgence. Out of all these possibilities, we define the optimal intervention to be the one that minimizes the final size.
When the population can be described using two subpopulations or groups, the optimal intervention belongs to one of the following types: the first group is fully infected, the second group is fully infected, or neither group is fully infected. This creates the possibility that, under the optimal intervention, the number of infections in one of the groups is larger than what would have occurred in the unmitigated epidemic, depending on the structure of the population. In figure 2, we show an example in which this occurs. This leads to an ethical dilemma wherein a certain group in the population incurs a higher cost (due to an increased number of infections) than would have occurred without the intervention, in order to minimize the cost for the whole population. Thus, the nonlinearity of infectious disease dynamics, combined with population structure, leads to an ethical dilemma for policy/decision-makers which is analogous to the well-known trolley problem [11,12] (figure 2). The trolley problem involves a set-up in which a train is going to hit a group of people who are lying on the tracks. The train cannot be stopped, but a lever can be pulled to switch the train onto a different track on which fewer people are lying. The dilemma posed by this situation is whether it is ethical to save more lives by sacrificing a smaller number of different lives.
Diseases often lead to a worse health outcome (mortality rate, hospitalization rate, chance of leading to chronic conditions, etc.) in certain groups of the population (the elderly age groups, for instance). Instead of minimizing the final size of the epidemic (which is the sum of the final sizes in each group), it may be more prudent to minimize a cost function that is a linear combination of the final sizes in the groups, such that a group with a worse outcome of infection is given a higher weight in the cost function. As changing the cost function would change the optimal solution, the cost function plays a role in determining the ethical dilemma. For the example shown in figure 2, the ethical dilemma is no longer present under the given weighted cost function; the infections are reduced in both groups.
Real-world contact matrix
When the basic reproduction number is close to one, at least one of the age groups is required to endure a higher final size for all the cost functions we used (figure 3; electronic supplementary material, figures S3-S5). The cost functions are weighted using estimates of CFRs of the 2009 influenza pandemic, the 1918 influenza pandemic and the COVID-19 pandemic [21][22][23]. In addition to looking at the final sizes and how they change in the various age groups, we can use the CFRs to estimate the deaths in each of the age groups and how they change under the optimal intervention. For the COVID-19 pandemic, pre-adolescents have the lowest CFR, and it increases for higher age groups (figure 3 and table 1). Figure 3b1,b2,c1,c2 shows that as R_0 is increased, the age groups start to experience an increase in infections (relative to the no-intervention case) in the following order: pre-adolescents, adolescents, children and finally young adults, which is also the order in which the CFR increases. Thus, the CFR may explain the nature of the ethical dilemma. At R_0 ≈ 1.3, the severity of the ethical dilemma is highest, with 0.55 new infections for every infection prevented and 0.0025 new deaths for every death prevented (figure 3d1,d2). The severity for deaths is quite low compared to the other pandemics because of the large disparity in the CFR across age for COVID-19.
For the 2009 flu pandemic (electronic supplementary material, figure S4), pre-adolescents have the lowest CFR, and it increases with age (electronic supplementary material, figure S4 and table 1). Electronic supplementary material, figure S4 (columns B and C) shows that as R_0 is increased, the age groups start to experience an increase in infections (relative to the no-intervention case) in the following order: adolescents, pre-adolescents and finally young adults, which is not in increasing order of CFR. Thus, the CFR does not explain the nature of the ethical dilemma. For infections, the severity of the dilemma is highest at about R_0 = 1.3, where 0.44 new infections are created for every infection prevented. For deaths, the dilemma is most severe at both R_0 = 1.3 and 2.25, where 0.045 new deaths are caused for every death prevented.
For the 1918 flu, the CFR with age is often described as a 'W'-shaped curve (electronic supplementary material, figure S5 and table 1). Electronic supplementary material, figure S5 (columns B and C) shows that as R_0 is increased, the age groups start to experience an increase in infections (relative to the no-intervention case) in the following order: adolescents, pre-adolescents and finally middle adults, which is also the order in which the CFR increases. Thus, the CFR may explain the nature of the ethical dilemma. At the peak of severity (R_0 ≈ 1.3), 0.47 new infections are created for each infection prevented and 0.12 new deaths for every death prevented.
For realistic CFRs, the dilemma in terms of infections is quite severe at its worst, with almost one person getting infected for every two individuals protected from infection. Some features of the ethical dilemma are common to all three pandemics. As R_0 is increased, the severity of the dilemma never quite reaches zero but seems to approach zero in a non-monotonic manner, and pre-adolescents and adolescents always endure an increase in infections. The nature of the ethical dilemma may be explained by the CFRs of the age groups in the 1918 and COVID-19 cases, but not in the case of the 2009 pandemic.
For an unbiased cost function (electronic supplementary material, figure S3), we see very different results. The severity of the dilemma is zero for most of the R_0 range. In the space where the dilemma does occur, only the young adults and adolescents experience an increase in final size. At the most severe ethical dilemma, 0.175 new infections are created for every infection prevented.
Delayed intervention
We calculate the optimal strength of the intervention and simulate the model to confirm the mathematical analysis in §2.4. Using equation (2.11), we observe that the strength of the optimal intervention increases in a super-linear manner with the duration of the delay. The results are presented in figure 4. As the population approaches the herd immunity threshold, the strength of the intervention approaches one, corresponding to the one-shot intervention [1].
Discussion
In this work, we have examined a strategy of optimal intervention which allows the epidemic to cause just enough infections to induce herd immunity, eliminates the overshoot, and prevents future introductions from becoming epidemics. In addition to minimizing the final size, this intervention would also slow down the growth of the epidemic and reduce the peak, which allows time to develop treatments and increase healthcare capacity. For a homogeneous population, the results are straightforward: decrease the transmission by a pre-determined amount so that the final size reaches the herd immunity threshold and no more. A sensitivity analysis of the homogeneous model and intervention strategy was performed in which the optimal strength of intervention and the resulting final size were computed for both the actual value of R_0 and a 'measured' value of R_0 with four different error rates (see electronic supplementary material, figure S2). We find that with both underestimation and overestimation of R_0 the epidemic size is larger, but it is better to overestimate R_0.
In the case of heterogeneous transmission, our results indicate that the optimal strategy may require increasing infection in some of the groups and decreasing it in others, in order to minimize the final size for the whole population. This is analogous to the trolley problem, and it calls for a discussion around the ethics of subjecting certain groups to a higher rate of disease incidence, and the feasibility of this policy. If increasing transmission in certain groups is not viable, either due to operational reasons or ethical considerations, herd immunity can still be achieved (and resurgence prevented) by reducing transmission in all groups. We have also explored the role of the cost function in determining the ethical dilemma by weighting the final sizes of different age groups using CFRs of the 1918, 2009 and COVID-19 pandemics, and shown that the ethical dilemma occurs in all three cases. Our work shows that, even without an explicit consideration of the economic and social costs of an intervention, there are challenging ethical questions to be answered for the first-order problem of minimizing the final size.
The optimal interventions shown for the 1918, 2009 and COVID-19 pandemics are not meant to be policy advice, because the estimates of CFR were approximate and also because influenza and COVID-19 can be described by an SIR-like model only when new variants of the pathogen do not emerge. They are meant to show that the ethical dilemma we have discussed in this paper is not merely a theoretical observation in the parameter space of the mathematical model but a possibility that one should be aware of for future epidemics and pandemics. Ethical dilemmas in public health are well known, and there have been debates on prioritization based on age for vaccination during the COVID-19 pandemic [24]. Prioritizing one age group means that another group receives less protection, but under no circumstances does a discriminatory vaccine distribution policy increase the chances of infection in a group compared to the no-vaccine scenario. Thus the dilemma presented in our paper is fundamentally different from a vaccine allocation dilemma and is equivalent to the trolley problem. We have used mathematical modelling to show that optimal interventions may require a policymaker to contend with a trolley-problem-like situation, where the epidemic under an optimal intervention will infect someone who would not have been affected had there been no intervention. We found two works that use an intervention similar to ours, but they either did not consider increasing transmission as a strategy for optimal outcomes [14] or did not discuss the ethical implications [13]. Therefore, we believe that our paper contributes an important point of discussion with regard to the optimality of NPIs.
In addition to the ethical dilemma shown through our modelling here, interventions that require increasing transmission prompt an ethical discussion in relation to disadvantaged groups. Cultural, economic and social conditions factor into the contact structure of any human population: a high number of contacts due to living in close quarters, a high susceptibility to infection due to pre-existing health conditions or poor access to healthcare facilities, etc. Mathematical models of epidemics can throw light on possible policy choices and may even help us pick the ones that lead to optimal outcomes. But the decisions made by policymakers are intertwined with political will, their popularity and social attitudes. These eventually determine whether a particular intervention is favoured by a decision-making body [25,26]. Disadvantaged groups, across the world, do not exercise sufficient political power to represent their interests in decision-making bodies. In such a case, a decision-making body may find it convenient to subject a disadvantaged group to a higher final size in order to decrease the net final size for the whole population and achieve herd immunity. The intervention strategy presented here always carries such risks with it, and representation of disadvantaged groups thus becomes essential, especially for a policy such as this one.
There are also some practical limitations to the strategy presented here. There would be a natural tendency for individuals to protect themselves from getting infected even if interventions are not in place, so asking individuals to increase their transmission may not be a feasible strategy [27]. The optimal interventions could require a group of individuals to fully isolate themselves from the rest of the population. Such interventions are difficult to implement, as there would always be a small possibility for infections to be introduced into the isolated group [27]. If the transmission in other groups is increased, it would imply a larger chance of introduction into the isolated group.
We have assumed an SIR structure for disease progression in an individual. But, as long as the disease can be reasonably described by a model in which individuals do not become susceptible again after getting infected, we would expect our results to hold. A crucial detail that we have ignored is the stochastic and discrete nature of disease spread, since it can capture the elimination behaviour of outbreaks, i.e. it can incorporate the difference between the existence and absence of infections. The deterministic assumption and the use of continuous variables in our model mean that, after an intervention is over, the small number of infections present in the population will lead to another epidemic if herd immunity has not been achieved (shown in figure 1, strong intervention). This, however, is only one of the possible outcomes. It is possible that the intervention completely eliminates all infections in the population, in which case a new epidemic does not result from any residual infections. However, even in this case, the population remains vulnerable to an epidemic due to the lack of herd immunity. Thus, a new epidemic can occur if new infectious individuals are introduced into the population. Another possibility is that the epidemic becomes established with a delay due to the stochastic dynamics. Factors around contact-tracing and surveillance capacity (to eliminate the disease) and travel restrictions (to prevent introduction of new infections) are important for selecting the optimal policy response, in addition to the results presented here.
Figure 1.
Figure 1. (a) Simulation of four types of intervention for an epidemic with R_0 = 1.5 in a homogeneous population: (i) no intervention - leads to the largest final size; (ii) weak intervention - reduces the final size; (iii) strong intervention - reduces the final size during the intervention, but leads to a resurgence in infections once the intervention is removed; (iv) moderate but optimal intervention - the final size during the intervention is the same as the herd immunity threshold. (b) The global minimum of the final size shows that an optimal intervention strength exists. The resurgence of infections under a strong sub-optimal intervention is subject to certain assumptions which are discussed in the text. (c) The final size without any intervention (equation (2.2)) and the final size with the optimal intervention (the same as the herd immunity threshold for the recovered state) are shown against the basic reproduction number.
Figure 2.
Figure 2. Population structure and nonlinearity of infectious disease dynamics lead to an ethical dilemma. (a) A plot of the infections in the two groups of the population. The corresponding cost function and next generation matrix are shown. The red curve shows the herd immunity threshold, the cross shows the final size without an intervention, and the circle shows the final size if the optimal intervention is used. (b) A comparison of the interventions when the cost functions are different. The first plot shows that the optimal intervention leads to an increase in the number of infections in the first group. It is an example of the ethical dilemma of implementing the optimal intervention, which is explained further in (c). The second plot shows the case when the first group is given twice the weight of the second group in the cost function, which means prevention of infections in the first group takes precedence over the second group. In this case, the intervention reduces the infections in both groups. (c) The ethical dilemma involved in implementing optimal interventions is analogous to the well-known 'trolley problem'. If the decision-maker does not act, the incoming epidemic (represented as a trolley) is going to cause many infections. If the decision-maker implements an optimal intervention (switches the tracks), the number of infections in the total population is minimized, but someone who otherwise would have been safe from infection becomes infected (shown by the increased number of infections of group (II)).
Figure 3.
Figure 3. COVID-19 pandemic: a real-world contact matrix from a sample of the Dutch population is used to determine the effect of the optimal intervention on different age groups for a range of R_0 values. Estimates of case fatality rates for the COVID-19 pandemic in Mexico have been used to weight the cost function (for minimizing total deaths in the population) [23]. Rows (1) and (2) show the plots for infections and deaths, respectively. Column (a) shows the epidemic size and deaths if no intervention was performed. Column (b) shows the relative change in epidemic size and deaths under the optimal intervention. Column (c) shows the magnitude of change in epidemic size and deaths under the optimal intervention. Column (d) shows the severity of the ethical dilemma (see main text for definition) with R_0. The legend for columns (a-c) shows the age groups, and the number in parentheses shows the weight assigned to each in the cost function. These weights are proportional to the case fatality rates.
Figure 4.
Figure 4. (a) The time series of cumulative infections (i + r) for optimally controlled epidemics for a range of delays in the intervention. The vertical dash-dot lines show the times at which the intervention starts, and the corresponding time series is represented by the same colour. (b) The strength of the optimal intervention, c, plotted against the delay in intervention t_L using equation (2.11). The super-linear increase in the strength, c, shows the need for early implementation. Parameters: homogeneous SIR model with basic reproduction number R_0 = 1.5 and γ = 1. The intervention is implemented for a duration of 50 time units.
Table 1.
Table 1. Age groups used in the contact matrix from [20], their names used in this article, the size of each group (as a proportion of the total population), and approximate estimates of case fatality rates (CFRs) obtained from [21][22][23]. | 8,370.6 | 2024-02-01T00:00:00.000 | [
"Economics",
"Medicine",
"Environmental Science"
] |
Phenolic compound derived from microwave-assisted pyrolysis of coconut shell: Isolation and antibacterial activity testing
Indonesia is rich in natural resources, coconut plantations being one of them. The coconut processing industry produces coconut shell (CS) waste. The most effective technique to increase the value of this waste is to convert the CS biomass through a pyrolysis process. This research focuses on intensification of the conversion of CS by Microwave-Assisted Pyrolysis (MAP) to obtain pyroligneous acid (PA). PA contains phenolic compounds which have antibacterial properties, so they can be formulated as an antibacterial agent. The CS used had moisture and ash contents of 8.89% and 0.21%, respectively. PA was produced by MAP of CS at 600 W, at 400°C and 450°C, for 10, 20 and 30 minutes. The PA was extracted using ethyl acetate to obtain the phenolic content. The optimum pyrolysis condition was found to be 400°C for 30 minutes, for which the PA yield was 32.20 g with a total phenolic content (TPC) of 112.13 mg GAE/g. The inhibition zone of the phenolic extract from coconut shell (PECS) against E. coli was 22-25 mm, indicating that PECS can act as an effective antibacterial agent. The production of PECS by MAP and its application as an antibacterial agent has not been reported before, so this work is an important contribution to the intensification of pyrolysis and to the medical field.
Introduction
Indonesia is the second largest coconut-producing country in ASEAN after the Philippines [1]. The total area of Indonesian coconut plantations in 2018 reached 3.547 million hectares, with a total production of 2.866 million tons. Coconut production in Indonesia tended to increase by 1.95% per year over the 1980-2018 period [2]. The high demand for processed coconut exports has resulted in increased waste in the form of coir and coconut shells (CS). Fiber accounts for about 30% of a coconut, while the CS makes up 15%-19% of the total weight [3]. Even though it is organic waste, CS cannot decompose quickly in the environment. Therefore, the amount of this waste continues to increase along with the increase in processed coconut products. One way to convert CS into useful products is thermochemical conversion, namely pyrolysis. Products of the pyrolysis process include charcoal, tar, gas and pyroligneous acid (PA) [4]. PA is the smoke from the pyrolysis process condensed into a liquid product consisting of water, alcohols, organic acids, phenols, aldehydes, ketones, esters, furans, pyran derivatives, hydrocarbons and nitrogen compounds [5]. PA compounds have antibacterial, antioxidant and anti-inflammatory properties, which makes them prospective ingredients for medicinal and pharmaceutical product formulations [6].
Based on its composition, the CS is composed of 3 main components: 26.60% of cellulose, 27.70% of hemicellulose, and 29.40% of lignin [7]. The hemicellulose compound can be degraded at temperatures of 200℃ − 260℃, cellulose at 240℃−350℃, while lignin at 280℃ − 500℃ [8]. High cellulose and hemicellulose content contribute to higher bio-oil yields, while high lignin content increases biochar yield [9]. Although lignin has a high level of resistance to thermal degradation, decomposition of lignin compounds produces charcoal, carboxylic acids, methanol, phenols and other aromatic compounds in liquid products [10]. Therefore, the high lignin content in a material can produce high phenolic compounds.
Pyrolysis can be carried out in two modes: slow pyrolysis and fast pyrolysis. Slow pyrolysis is a slow thermal degradation of the organic components in biomass in the absence of oxygen up to a final temperature of approximately 500°C [8]. Fast pyrolysis is a high-temperature process in which the raw material is heated quickly in the absence of air so as to produce vapor and dark brown liquids. This process is more sophisticated and can be carefully controlled to provide high yields of liquid products [11]. Previous studies on various raw materials include slow pyrolysis of walnut shells to produce activated charcoal [12], production of PA from palm kernel shell [13] and Rosmarinus officinalis leaves [14], production of activated charcoal from rice husk [15], and production of liquid smoke from young coconut, bamboo and durian rind [16]. Those studies used conventional thermal heating with an external heater such as a furnace or heating mantle. This heating system is relatively slow and inefficient. Therefore, intensification of the process was carried out in this work, so that the pyrolysis could proceed more quickly and save energy by using Microwave-Assisted Pyrolysis (MAP). Research using MAP has been carried out on several biomass feedstocks, such as oil palm fiber to produce hydrogen gas [17] and pyrolysis liquid oil [18], palm kernel shell to produce an antibacterial agent [19] and bio-oil [20], characterization of empty fruit bunch for MAP [21], and pineapple waste to produce PA [22]. However, among these previous studies, none has applied MAP to CS with the aim of utilizing PA as an antibacterial agent.
MAP was developed as an alternative heating source for the pyrolysis process. Volumetric heating, better control and uniform heat distribution can be achieved through heating by MAP [23]. MAP can prevent the formation of secondary reactions, thereby increasing product quality. MAP technology has the potential to save energy and reduce process costs. In producing PA, MAP is influenced by several factors, including the type of reactor, particle size, pressure, temperature, power level, residence time, carrier gas flow and the use of activated carbon [24]. Therefore, this research investigated the effect of changes in time and temperature in the MAP process to determine the conditions that produce the most PA.
One of the interesting benefits of PA to be investigated is its antibacterial properties. Therefore, the extracts of phenolic compounds in this study were tested on Escherichia coli bacteria. In previous studies, PA as the result of the CS MAP process had not been tested as an antibacterial agent. Therefore, this study contributes to the effect of MAP operating conditions on PA products and testing phenolic extracts from coconut shell (PECS) from the results of MAP as an antibacterial agent that can be utilized in the medical field. Specifically, the purpose of this study was to determine the effect of time and temperature on PA production in MAP as well as the characteristics of phenolic extracts as an antibacterial agent.
Pre-treatment
CS was cleaned of fiber and dirt, then dried in the sun for 7 days. The dried CS was subsequently reduced in size using a hammer mill. A sieving process was then performed to obtain a uniform particle size of approximately 1-1.18 mm. The size-reduced CS was stored in a place protected from humidity. Based on the raw material tests, the CS used for PA production in this study had a water content of 8.89% and an ash content of 0.21%.
Microwave-assisted Pyrolysis (MAP)
Microwave-assisted pyrolysis (MAP) was carried out following the study by Abas [18], with a modification of the raw material size to 1-1.18 mm. 100 g of material and 50 g of activated carbon were dried at 105°C for 24 hours. Both were then put into a reactor made of quartz glass. A type-R thermocouple was mounted on the reactor and connected to a Picolog data logger system (PicoLog Recorder software, version 5.23.0) to record temperature changes during the reaction. Nitrogen gas was flowed at 2 L/min, supplied from the top of the reactor, for 15 minutes before the pyrolysis process was started. Another modification was a microwave power output of 600 W. The pyrolysis temperature was controlled at 400°C and 450°C, while the cooling water was set at 6°C for the condensation process. The residence time of the MAP process was 10, 20 or 30 minutes. Liquid and gaseous products passed through the condenser column; the condensate was collected in a round-bottom flask (at the bottom of the condenser) as a mixture of PA and bio-oil, while the uncondensed gas was discharged to the environment. The biochar formed in the reactor was the solid product. At the end of the process, the percentages of solid, liquid and gas products were determined from the mass of each product.
Purification using Extraction Process
Pyroligneous acid (PA) was obtained from the liquid product of the pyrolysis process. The liquid product was filtered using Whatman no. 4 filter paper to separate PA from bio-oil and other impurities. Ethyl acetate (EA) solvent was added to the PA following the procedure of Loo et al. [25], with the following modifications: PA was extracted with EA at a ratio of 1:1 in a 100 mL separatory funnel, shaken for 3 minutes at room temperature and left to stand for 30 minutes to form two layers. The upper layer was collected as the organic layer, and the lower (mixture) layer was extracted a second time using fresh EA. The extracted PA was concentrated using a rotary evaporator (Heidolph, Germany) at a pressure of 120 mbar and a temperature of 80°C to 1/3 of the initial volume. The phenolic extract from coconut shell (PECS) was then stored in a desiccator.
Total Phenolic Content (TPC)
Total phenolic content (TPC) was determined using the Folin-Ciocalteu reagent as described by Ma et al. [4]. Gallic acid at concentrations of 10-100 µg/mL was used as the standard reference for the calibration curve. Each 1 mL sample (100 µg/mL) was mixed with 1 mL of 50% (v/v) Folin-Ciocalteu phenol reagent in a test tube. The mixture was shaken for 30 seconds and left at room temperature for 2 minutes. After 2 minutes, 1 mL of sodium carbonate (Na2CO3) solution was added to neutralize the mixture. The mixture was shaken for 10 seconds and allowed to stand in a dark place for 2 hours at room temperature. The tests were carried out in a dark room because the reagent is sensitive to light and to reduce UV interference from redox reactions. The absorbance of the blue mixture was recorded at 765 nm using a UV-Vis spectrophotometer (Shimadzu, Japan). Results were reported as averages expressed as micrograms of gallic acid equivalents (GAE) per milligram of sample (µg GAE/mg). The TPC concentration was calculated as suggested by Abdelhady et al. [26], as follows:

TPC = (C × V) / M,

where C is the concentration of gallic acid established from the calibration curve (mg/mL), V is the volume used during the assay (mL), and M is the mass of the extract used during the assay (g).
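A minimal sketch of this calculation is shown below, assuming a linear gallic acid calibration and illustrative absorbance, volume and mass values; none of the numbers are measurements from this study.

```python
# Hedged sketch of the TPC calculation: fit a gallic acid calibration line,
# convert a sample absorbance to an equivalent concentration C, then apply
# TPC = C*V/M. All numerical values are placeholders.
import numpy as np

# gallic acid standards: concentration (mg/mL) vs absorbance at 765 nm (assumed)
conc = np.array([0.01, 0.02, 0.04, 0.06, 0.08, 0.10])
absorbance = np.array([0.11, 0.21, 0.40, 0.61, 0.79, 1.00])
slope, intercept = np.polyfit(absorbance, conc, 1)   # concentration as a function of absorbance

A_sample = 0.55                    # measured absorbance of the PECS dilution (assumed)
C = slope * A_sample + intercept   # mg GAE/mL
V = 10.0                           # extract volume used in the assay, mL (assumed)
M = 0.05                           # mass of extract assayed, g (assumed)

TPC = C * V / M                    # mg GAE per g of extract
print(f"TPC = {TPC:.1f} mg GAE/g")
```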
Application as Antibacterial Agent
The antibacterial test was conducted by the Central Laboratory of Health and Testing of Medical Apparatus in Central Java. The bacterium used for testing was Escherichia coli, with the disk diffusion method. Antibacterial activity was indicated by the formation of a clear zone around the phenolic compound; the diameter of the clear zone was then measured to determine the antibacterial strength.
MAP Process
The study investigated the relationship between product yields (solid, liquid and gas), with emphasis on the PA (liquid) product, and residence time and temperature. The pyrolysis process was conducted under different operating conditions, and the samples are denoted S1 to S6 as listed in Table 1. The MAP product yields were calculated as weight percent (wt%) relative to the initial 100 g of CS. The effect of temperature was evident: when the heating level increased to 450°C, the liquid yield decreased while the gas yield increased. Operating at higher temperatures causes larger molecules in the liquid fraction to crack into gas. This is consistent with previous research [27]. The effect of residence time was evident when the residence time increased to 30 minutes: the liquid yield increased while the char and gas yields decreased. This is consistent with previous research [13]. Low temperatures and long residence times favour char production, while higher temperatures and short residence times result in high liquid product yields [28].
Total Phenolic Content (TPC)
The TPC was determined from the linear equation of the gallic acid calibration curve (Fig. 2) and expressed in micrograms of gallic acid equivalents per milligram of sample (µg GAE/mg). The absorbance of mixtures containing gallic acid, Folin-Ciocalteu reagent and sodium carbonate increases proportionally with increasing gallic acid concentration [29]. The calculated TPC values for S1 to S6 are shown in Table 2. In this work, the highest TPC of 112.13 mg GAE/g was observed in S3, under MAP operating conditions of 400°C for 30 minutes. This is because phenolic compounds are not decomposed under lower-temperature conditions. Furthermore, some water-soluble phenolic compounds are extracted into the water phase at lower temperatures owing to the low evaporation of water [4]. The phenolic structure can be maintained at lower temperatures and lower vapor pressures in the reactor because some phenols are primary pyrolysis products, produced mainly by cracking of β-O-4 aryl ether bonds in lignin [30]. Previous studies reported that the TPC of PA extract from pineapple biomass was 2.67 ± 0.14 mg GAE [22] and from palm kernel shell was 49.96 mg GAE/g [29]. This shows that the TPC of CS is greater than those of pineapple biomass and palm kernel shell.
Antibacterial Activity
Six samples of phenolic extract from coconut shell (PECS) were tested for their antibacterial activities at the Central Laboratory for Health and Medical Apparatus Testing in Central Java. The results are shown in Fig. 3.
Fig. 3. Antibacterial activity of PECS on inhibition of Escherichia coli
According to Davis and Scout [31], antibacterial activity in the clear zone test is categorized as follows: an inhibition zone diameter of less than 5 mm is weak, 5-10 mm is medium, 10-20 mm is strong, and more than 20 mm is very strong. The test results showed that the antibacterial activity of every coconut shell phenolic extract sample was very strong. The sample with the largest inhibition zone diameter was S3 (25 mm), produced under MAP operating conditions of 400°C for 30 minutes. The inhibition zone diameter can be influenced by the TPC concentration: the higher the concentration, the higher the antibacterial activity [32]. The reduction of important organic ions entering the cell results in inhibited growth and, ultimately, cell death [33]. Damage to the bacterial cell wall decreases the permeability of the cytoplasmic membrane for transferring important ions needed by the bacteria. The high level of phenolic compounds in PECS can denature the proteins contained in bacterial cell walls, so the inhibitory ability of PECS allows it to act as an antibacterial agent.
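The categorization used above can be captured in a small helper function; the thresholds follow the text, while the example diameters are hypothetical.

```python
# Small helper illustrating the inhibition-zone categories cited above [31];
# boundary handling at exactly 5, 10 and 20 mm is an assumption of this sketch.
def inhibition_category(diameter_mm: float) -> str:
    if diameter_mm < 5:
        return "weak"
    if diameter_mm <= 10:
        return "medium"
    if diameter_mm <= 20:
        return "strong"
    return "very strong"

for d in (4, 8, 15, 25):
    print(d, "mm ->", inhibition_category(d))
```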
CONCLUSION
In this study, MAP of CS samples under different operating conditions showed that the highest liquid product yield was obtained at 400°C with a residence time of 30 minutes. Based on the TPC test of the PA extract (PECS), PA produced by MAP under these conditions has the highest concentration, 112.13 mg GAE/g, and shows the largest zone of inhibition of bacterial growth, 25 mm. However, more studies must be carried out prior to any commercialization attempt for this compound.
"Environmental Science",
"Materials Science",
"Chemistry"
] |
Improved spectral optical coherence tomography using optical frequency comb
We identify and analyze factors influencing sensitivity drop-off in Spectral OCT and propose a system employing an Optical Frequency Comb (OFC) to verify this analysis. Spectral Optical Coherence Tomography based on an optical frequency comb is demonstrated. Since the spectrum sampling function is determined by the comb rather than by the detector pixel distribution, this method makes it possible to overcome limitations of high-resolution Fourier-domain OCT techniques. Additionally, the presented technique enables an increased imaging range while preserving high axial resolution. High-resolution cross-sectional images of biological samples obtained with the proposed technique are presented.
OCIS codes: (110.4500) Optical coherence tomography; (110.4155) Multiframe image processing; (120.2230) Fabry-Perot; (170.3010) Image reconstruction techniques
References and Links
1. J. J. Kaluzny, A. Szkulmowska, T. Bajraszewski, M. Szkulmowski, B. J. Kaluzny, I. Gorczynska, P. Targowski, and M. Wojtkowski, "Retinal imaging by spectral optical coherence tomography," Eur. J. Ophthalmol. 17, 238-245 (2007).
2. V. Christopoulos, L. Kagemann, G. Wollstein, H. Ishikawa, M. L. Gabriele, M. Wojtkowski, V. Srinivasan, J. G. Fujimoto, J. S. Duker, D. K. Dhaliwal, and J. S. Schuman, "In vivo corneal high-speed, ultra high-resolution optical coherence tomography," Arch. Ophthalmol. 125, 1027-1035 (2007).
3. U. Schmidt-Erfurth, R. A. Leitgeb, S. Michels, B. Povazay, S. Sacu, B. Hermann, C. Ahlers, H. Sattmann, C. Scholda, A. F. Fercher, and W. Drexler, "Three-dimensional ultrahigh-resolution optical coherence tomography of macular diseases," Invest. Ophthalmol. Vis. Sci. 46, 3393-3402 (2005).
4. V. J. Srinivasan, M. Wojtkowski, A. J. Witkin, J. S. Duker, T. H. Ko, M. Carvalho, J. S. Schuman, A. Kowalczyk, and J. G. Fujimoto, "High-definition and 3-dimensional imaging of macular pathologies with high-speed ultrahigh-resolution optical coherence tomography," Ophthalmology 113, 2054.e1-14 (2006).
5. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, "Optical coherence tomography," Science 254, 1178-1181 (1991).
6. R. Leitgeb, C. K. Hitzenberger, and A. F. Fercher, "Performance of Fourier domain vs. time domain optical coherence tomography," Opt. Express 11, 889-894 (2003).
7. J. F. de Boer, B. Cense, B. H. Park, M. C. Pierce, G. J. Tearney, and B. E. Bouma, "Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography," Opt. Lett. 28, 2067-2069 (2003).
8. A. F. Fercher, C. K. Hitzenberger, G. Kamp, and S. Y. Elzaiat, "Measurement of intraocular distances by backscattering spectral interferometry," Opt. Commun. 117, 43-48 (1995).
9. M. Wojtkowski, R. Leitgeb, A. Kowalczyk, T. Bajraszewski, and A. F. Fercher, "In vivo human retinal imaging by Fourier domain optical coherence tomography," J. Biomed. Opt. 7, 457-463 (2002).
10. S. R. Chinn, E. A. Swanson, and J. G. Fujimoto, "Optical coherence tomography using a frequency-tunable optical source," Opt. Lett. 22, 340-342 (1997).
11. F. Lexer, C. K. Hitzenberger, A. F. Fercher, and M. Kulhavy, "Wavelength-tuning interferometry of intraocular distances," Appl. Opt. 36, 6548-6553 (1997).
12. S. Yun, G. Tearney, J. de Boer, N. Iftimia, and B. Bouma, "High-speed optical frequency-domain imaging," Opt. Express 11, 2953-2963 (2003).
13. T. Endo, Y. Yasuno, S. Makita, M. Itoh, and T. Yatagai, "Profilometry with line-field Fourier-domain interferometry," Opt. Express 13, 695-701 (2005).
14. B. Grajciar, M. Pircher, A. Fercher, and R. Leitgeb, "Parallel Fourier domain optical coherence tomography for in vivo measurement of the human eye," Opt. Express 13, 1131-1137 (2005).
15. Y. Nakamura, S. Makita, M. Yamanari, M. Itoh, T. Yatagai, and Y. Yasuno, "High-speed three-dimensional human retinal imaging by line-field spectral-domain optical coherence tomography," Opt. Express 15, 7103-7116 (2007).
16. M. Wojtkowski, V. J. Srinivasan, T. H. Ko, J. G. Fujimoto, A. Kowalczyk, and J. S. Duker, "Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation," Opt. Express 12, 2404-2422 (2004).
17. M. A. Choma, M. V. Sarunic, C. H. Yang, and J. A. Izatt, "Sensitivity advantage of swept source and Fourier domain optical coherence tomography," Opt. Express 11, 2183-2189 (2003).
18. S. H. Yun, G. J. Tearney, J. F. de Boer, and B. E. Bouma, "Pulsed-source and swept-source spectral-domain optical coherence tomography with reduced motion artifacts," Opt. Express 12, 5614-5624 (2004).
19. R. Huber, D. C. Adler, and J. G. Fujimoto, "Buffered Fourier domain mode locking: Unidirectional swept laser sources for optical coherence tomography imaging at 370,000 lines/s," Opt. Lett. 31, 2975-2977 (2006).
20. B. Cense, N. A. Nassif, T. C. Chen, M. C. Pierce, S.-H. Yun, B. H. Park, B. E. Bouma, G. J. Tearney, and J. F. de Boer, "Ultrahigh-resolution high-speed retinal imaging using spectral-domain optical coherence tomography," Opt. Express 12, 2435-2447 (2004).
21. M. Wojtkowski, T. Bajraszewski, I. Gorczynska, P. Targowski, A. Kowalczyk, W. Wasilewski, and C. Radzewicz, "Ophthalmic imaging by spectral optical coherence tomography," Am. J. Ophthalmol. 138, 412-419 (2004).
22. H. Lim, J. F. De Boer, B. H. Park, E. C. W. Lee, R. Yelin, and S. H. Yun, "Optical frequency domain imaging with a rapidly swept laser in the 815-870 nm range," Opt. Express 14, 5937-5944 (2006).
23. V. J. Srinivasan, R. Huber, I. Gorczynska, J. G. Fujimoto, J. Y. Jiang, P. Reisen, and A. E. Cable, "High-speed, high-resolution optical coherence tomography retinal imaging with a frequency-swept laser at 850 nm," Opt. Lett. 32, 361-363 (2007).
24. D. C. Adler, Y. Chen, R. Huber, J. Schmitt, J. Connolly, and J. G. Fujimoto, "Three-dimensional endomicroscopy using optical coherence tomography," Nat. Photonics 1, 709-716 (2007).
25. P. Targowski, M. Wojtkowski, A. Kowalczyk, T. Bajraszewski, M. Szkulmowski, and I. Gorczynska, "Complex spectral OCT in human eye imaging in vivo," Opt. Commun. 229, 79-84 (2004).
26. M. Wojtkowski, A. Kowalczyk, R. Leitgeb, and A. F. Fercher, "Full range complex spectral optical coherence tomography technique in eye imaging," Opt. Lett. 27, 1415-1417 (2002).
27. A. Bachmann, R. Leitgeb, and T. Lasser, "Heterodyne Fourier domain optical coherence tomography for full range probing with high axial resolution," Opt. Express 14, 1487-1496 (2006).
28. Y. Yasuno, S. Makita, T. Endo, G. Aoki, M. Itoh, and T. Yatagai, "Simultaneous B-M-mode scanning method for real-time full-range Fourier domain optical coherence tomography," Appl. Opt. 45, 1861-1865 (2006).
29. R. K. Wang, "In vivo full range complex Fourier domain optical coherence tomography," Appl. Phys. Lett. 90, 054103 (2007).
30. Z. Wang, Z. Yuan, H. Wang, and Y. Pan, "Increasing the imaging depth of spectral-domain OCT by using interpixel shift technique," Opt. Express 14, 7014-7023 (2006).
31. Y. Yasuno, V. D. Madjarova, S. Makita, M. Akiba, A. Morosawa, C. Chong, T. Sakai, K.-P. Chan, M. Itoh, and T. Yatagai, "Three-dimensional and high-speed swept-source optical coherence tomography for in vivo investigation of human anterior eye segments," Opt. Express 13, 10652-10664 (2005).
32. B. Hyle Park, M. C. Pierce, B. Cense, S.-H. Yun, M. Mujat, G. J. Tearney, B. E. Bouma, and J. F. de Boer, "Real-time fiber-based multi-functional spectral domain optical coherence tomography at 1.3 μm," Opt. Express 13, 3931-3944 (2005).
33. H. Y. Ryu, H. S. Moon, and H. S. Suh, "Optical frequency comb generator based on actively mode-locked fiber ring laser using an acousto-optic modulator with injection-seeding," Opt. Express 15, 11396-11401 (2007).
34. E. Gotzinger, M. Pircher, R. Leitgeb, and C. K. Hitzenberger, "High speed full range complex spectral domain optical coherence tomography," Opt. Express 13, 583-594 (2005).
35. B. Baumann, M. Pircher, E. Gotzinger, and C. K. Hitzenberger, "Full range complex spectral domain optical coherence tomography without additional phase shifters," Opt. Express 15, 13375-13387 (2007).
Introduction
Optical Coherence Tomography (OCT) is a non-contact and non-invasive high-resolution technique for imaging of partially transparent objects. It has found a wide spectrum of applications in biomedical imaging, especially in ophthalmology [1-4]. OCT enables reconstructing information about the depth structure of a sample using interferometry of temporally low-coherent light. There are two variants of OCT techniques depending on the detection system: Time-domain (TdOCT) and Frequency-domain (FdOCT). TdOCT was proposed by Huang et al. in 1991 [5]. FdOCT provides a significant improvement of imaging speed and detection sensitivity as compared to TdOCT [6,7]. FdOCT enables reconstructing the depth-resolved scattering profile at a certain point on the sample from the modulation of the optical spectrum caused by interference of light beams [8] and can be performed in two ways: either the spectrum is measured by a spectrometer (Spectral OCT) [8,9] or in a configuration including a tunable laser and a single dual-balanced photodetector (Swept source OCT) [10-12]. In Spectral OCT (SOCT) a light source with broad spectral bandwidth (~100 nm) is used in combination with a spectrometer and a line or array of photo-sensitive detectors [9,13-15]. SOCT instruments achieve shot-noise-limited detection [6] with a speed of up to 50k A-scans/s and an axial resolution as high as 2 μm in tissue [16]. The second method, Swept Source OCT (SS-OCT), employs a rapidly tunable laser [17,18]. SS-OCT usually operates at speeds comparable to SOCT. However, the recent introduction of Fourier Domain Mode Locking (FDML) enabled a dramatic increase in the imaging speed of SS-OCT, up to 370k A-scans/s [19].
The axial resolution of most SS-OCT systems is on the order of 10 μm in tissue and does not match that of high-resolution SOCT systems. Due to the high imaging speed, FdOCT systems enable the acquisition of three-dimensional image data in vivo, which is especially beneficial for numerous ophthalmic imaging applications [20,21]. Currently, the high axial resolution of 2-3 μm achieved by SOCT systems in the 850 nm range cannot be matched by SS-OCT systems. Lim et al. [22] reported SS-OCT operating at around 840 nm with a speed of up to 43.2k A-scans/s and an axial resolution of 13 μm in air. A different SS-OCT system operating at 850 nm was described by Srinivasan et al. [23]; it operates at 16k A-scans/s and achieves an axial resolution of 7 μm. At 1300 nm center wavelength, high-speed swept-source OCT instruments enable 5-7 μm axial resolution [24].
In spite of the resolution advantage of SOCT instruments, limitations of the imaging range due to the finite resolution of the spectrometer represent a major drawback. In general, the effect of the depth-dependent sensitivity drop together with mirror-conjugate images [6] reduces the total imaging range of both Fourier-domain techniques, but it is more significant in SOCT. Several techniques exist to minimize these shortcomings. These techniques are based on the reconstruction of the complex interferometric signal [25-29], thus eliminating the mirror images caused by the Fourier transformation of real-valued signals and increasing the effective ranging depth by a factor of two. A different method for doubling the imaging range was proposed by Wang et al. [30]. They propose an interpixel shift technique in order to effectively double the number of collected samples. However, the method is based on mechanical movement of the detector, making this approach comparatively slow.
In this contribution we identify and analyze factors influencing the sensitivity drop-off in Spectral OCT and propose a system employing an Optical Frequency Comb (OFC) to verify this analysis. It appears that the OFC effectively reduces the depth-dependent drop of sensitivity and might be considered as a method for improving the performance of SOCT. The optical frequency comb is a spectrum consisting of discrete and equidistantly distributed optical frequency components, created either by optical filtration of spectrally broadband light or generated directly as a laser optical comb. The new method enables a more flexible change of the measurement depth without the need to introduce any changes in the SOCT device. Additionally, in the presented technique the samples of the interference signal extracted by the optical comb spectrum are equidistantly distributed in optical frequency, thus completely avoiding the necessity of wavelength-to-frequency rescaling [9,31,32].
Phenomena deteriorating the depth dependent sensitivity in SOCT
A significant technical weakness of SOCT is the depth-dependent signal drop [9,16,30]. Spectral OCT devices comprise a spectrographic set-up, which enables the spatial separation of light with different k(ζ), where ζ denotes a spatial coordinate corresponding to the direction determined by the distribution of photo-sensitive elements of the detector. In a simplified SOCT experiment with a mirror as an object, the interference signal can be represented by a cosine function of the wave number k multiplied by the doubled optical path difference Δz between the two arms of the Michelson interferometer, I(k) ∝ cos(2kΔz); we assume here equal back-reflected light intensities from the reference and sample arm. In the ideal case of a cosine function that spreads infinitely in k-space, the Fourier transform yields two Dirac deltas δ(z ± Δz) located at Δz and −Δz. In a real experiment, the interference signal is limited by the spectral bandwidth G[k(ζ)] ≡ G(ζ) of the light source (Fig. 1(a)). Thus, in the conjugate space the Dirac deltas are convolved with the coherence function Γ(z), FT{I(ζ)}(z) = DC(z) + Γ(z) ⊗ [δ(z − Δz) + δ(z + Δz)], where DC indicates the low-frequency components of the spectral fringe signal, also called the autocorrelation function [9], and Γ(z) is linked to the spectral density G(ζ) according to the Wiener-Khinchin theorem. Since the spectrum is registered by an array or matrix of photo-sensitive elements, the interference fringes are additionally convolved with the rect function Π_δζ/2(ζ) representing a single photo-sensitive element of the detector, with δζ the width of a single pixel; the Fourier transform of the rect function is a sinc function (Fig. 1(b)). In an SOCT system the width of this sinc function depends on δζ and is related to the decrease of the interference fringe visibility as a function of increasing modulation frequency. The limited resolution of the spectrometer thus causes a suppression of the amplitudes of high-frequency components of the spectral fringe signal.
There are additional significant factors affecting the signal in SOCT devices. Usually the spectrometer used in SOCT comprises a diffraction grating followed by a lens and a CCD or CMOS array. Such a spectrometer yields a spectrum evenly sampled in wavelength λ, not in wave number k = 2π/λ. As the structural information is encoded in the frequencies of k-dependent spectral fringes, two problems arise. Both are related to the variable spectral width (in wave numbers) of an individual pixel and, simultaneously, to the spectral separation between two adjacent pixels. Both effects cause the short-wavelength part of the spectrum to be more sparsely sampled (in k) than the long-wavelength part. This means that high frequencies of the spectral fringes are aliased and irretrievably lost in part of the spectrum, while the rest of the signal can remain within the Nyquist limit. We call this effect partial aliasing. Figure 2 shows a simulated decrease of the signal caused by partial aliasing as a function of normalized optical path difference for different spectral spans. The signal was simulated for each optical path difference, numerically recalculated to k-space and Fourier transformed, and the amplitude of the resulting point-spread function (PSF) was plotted. For wider spectral spans the effect appears at smaller optical path differences. As expected, the amplitude of the PSFs is affected most strongly by partial aliasing for higher frequencies of the spectral fringe signal (larger optical path differences). Additionally, this effect increases with the spectral bandwidth (higher axial resolution of the SOCT system). In our simulation the maximal loss of signal power caused by this effect reaches 5.2 dB at the end of the axial measurement range, regardless of the spectral span.

Another important factor decreasing the SOCT signal is the electronic interpixel crosstalk present in CCD detectors. Due to this effect, the charge from a particular pixel is spread over the neighboring pixels, which causes an additional degradation of the spectrometer resolution. The depth-dependent signal loss function associated with this effect can be found experimentally by illuminating a single pixel of the camera with a focal spot smaller than the dimension of the pixel. The Fourier transform of the CCD detector response (Fig. 3) then provides a function describing the fringe visibility loss. We analyzed the influence of the interpixel crosstalk effect in a high-speed line-scan CCD camera (Atmel Aviiva M4 CL2014, 14×14 μm pixel size) using a beam of a single-mode, monochromatic laser at 830 nm, collimated by a microscope objective (Olympus 20X), expanded in a telescopic system with a magnification of 5× and focused onto a single pixel by a Spindler & Hoyer focusing objective with a focal length of 30 mm. The calculated diameter of the spot at the e^-2 level of the intensity profile is 5.3 μm. The visibility of the registered fringes drops to 0.7 due to the interpixel crosstalk effect, which gives an additional −3.1 dB signal power loss. In a real spectrometer it is very hard to distinguish between the interpixel crosstalk effect and the decrease of the spectral resolution caused by the finite focal spot size. In order to analyze these effects jointly we repeated the above experiment using different focusing lenses while keeping the same entrance beam diameter. A logarithmic plot of the maximal sensitivity drop (corresponding to the Nyquist frequency after Fourier transformation) as a function of focal length is presented in Fig. 4. The black solid line corresponds to the calculated values of the sensitivity drop caused only by the influence of the finite focal spot size for the given CCD pixel size (14 μm). Once the focal spot becomes larger than the pixel width, the signal (fringe visibility) starts to decrease. The experimental data roughly follow the theoretical curve; the deviation from the curve shape is probably due to the imperfect optical system, which does not guarantee an ideal focal spot. However, a constant −4 dB offset between the theoretical curve and the measured points is clearly visible. This offset corresponds to the previously measured value of the interpixel crosstalk. This experiment also shows that the spectral fringe signal is always convolved with the interpixel crosstalk described by the function Crosstalk(ζ) as well as with the focal spot function SpotSize(ζ). For the sake of simplicity we can analyze these two effects jointly, describing them by a single function B(ζ) = Crosstalk(ζ) ⊗ SpotSize(ζ), where ⊗ denotes the convolution operation. All effects, including the rectangular characteristic of a single pixel Π_δζ/2(ζ), partial aliasing A(ζ), and the finite focal spot size and interpixel crosstalk B(ζ), deteriorate the resolution of the spectrometer, and all of them are convolved with the spectral fringe signal I(ζ): I_reg(ζ) = I(ζ) ⊗ Π_δζ/2(ζ) ⊗ B(ζ) ⊗ A(ζ), where I_reg(ζ) is the registered spectral fringe signal. The Fourier transform of the spectral fringes I(ζ) is therefore multiplied by the Fourier transforms of the functions Π_δζ/2(ζ), B(ζ) and A(ζ). Figure 5 summarizes these contributions; the squares show the measured normalized depth-dependent sensitivity drop of the SOCT system.
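The depth-dependent signal loss described above can also be explored numerically. The following Python sketch is an illustration under stated assumptions (source spectrum, pixel count, spectral span and depths are placeholder values, not the authors' parameters): it generates a fringe signal evenly sampled in wavelength, integrates it over finite detector pixels, resamples it to an equidistant k grid and reads off the peak of the axial PSF for increasing optical path differences, reproducing qualitatively the drop shown in Fig. 2.

```python
import numpy as np

def psf_amplitude(delta_z, lam0=840e-9, span=100e-9, n_pix=2048, oversample=16):
    """Peak amplitude of the axial PSF for a mirror at optical path difference delta_z.

    The fringe signal is generated on a fine grid evenly spaced in wavelength,
    averaged within each detector pixel (finite pixel width), linearly resampled
    to an even k grid (wavelength-to-k rescaling) and Fourier transformed.
    """
    lam_fine = np.linspace(lam0 - span / 2, lam0 + span / 2, n_pix * oversample)
    k_fine = 2 * np.pi / lam_fine
    fringes = np.cos(2 * k_fine * delta_z)                 # interference term only
    per_pixel = fringes.reshape(n_pix, oversample).mean(axis=1)   # pixel integration
    lam_pix = lam_fine.reshape(n_pix, oversample).mean(axis=1)
    k_pix = 2 * np.pi / lam_pix                            # decreasing with index
    k_even = np.linspace(k_pix.min(), k_pix.max(), n_pix)
    resampled = np.interp(k_even, k_pix[::-1], per_pixel[::-1])
    return np.abs(np.fft.rfft(resampled))[1:].max()        # skip the DC bin

# relative signal drop versus depth, normalised to the shallowest position
depths = np.linspace(0.05e-3, 1.5e-3, 30)
amps = np.array([psf_amplitude(dz) for dz in depths])
drop_db = 20 * np.log10(amps / amps[0])
```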
Implementation of the optical frequency comb in SOCT device
In order to increase the spectral resolving power of the detection unit in SOCT we propose to use a light source which generates a discrete distribution of optical frequencies (called an optical frequency comb) instead of continuous broadband light. To give a proof of concept of this idea we constructed a passive optical frequency comb generator comprising a broadband light source and a fiber Fabry-Perot (F-P) filter. The real interference signal I(k) is thus multiplied by the transmission function T_FPI(k) of the F-P filter, where T and R are the transmission and reflection coefficients of the F-P interferometer surfaces, respectively, γ is defined as γ = −(2d)^(-1) ln(R), and d is the separation between the two surfaces of the F-P interferometer, which is related to the free spectral range by FSR = c/(2d) assuming an air gap in the F-P. The function T_FPI(k) can be expressed as a convolution of the Cauchy (Lorentz) distribution function and the Dirac comb D_π/d(k). Combining Eq. (5) and Eq. (6) and calculating the Fourier transform of the resulting expression (Eq. (7)), one obtains Eq. (8), in which g(z) denotes the Fourier transform of I(k) and describes the object structure. From Eq. (8) one can see that the object image is periodically repeated with the period 2d in z-space.
The signal drop within the imaging range is then determined by an exponential function and depends on the reflectivity R of the mirrors in the Fabry-Perot interferometer. To take advantage of using a comb in SOCT, one should ensure that any two adjacent comb lines illuminating the CCD detector are clearly separated. Such an arrangement strongly reduces the influence of the interpixel crosstalk and the limited spot size, since the signal from a particular comb line does not disturb the signals of the adjacent lines. Moreover, the line width BW of the optical frequency comb is chosen to be much smaller than the spectral range covered by a single pixel (BW << δk). Thus the signal drop caused by the sinc function corresponding to the pixel size is replaced by the Fourier transform of the shape of a single comb line, and the corresponding sensitivity loss is less significant.
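For illustration, the following sketch applies a generic, lossless Airy-type Fabry-Perot transmission to a broadband interference spectrum to produce a comb-filtered signal. The Airy form, the mirror separation d and the reflectivity R are assumptions made for this example; the exact expression and filter parameters used in the experiment may differ.

```python
import numpy as np

def fabry_perot_transmission(k, d=1.68e-3, R=0.945):
    """Generic Airy transmission of a lossless Fabry-Perot cavity.

    k : wavenumber array [rad/m]; d : mirror separation [m] (FSR = c/(2d));
    R : mirror reflectivity. Used here only to illustrate comb filtering.
    """
    finesse_coeff = 4 * R / (1 - R) ** 2
    return 1.0 / (1.0 + finesse_coeff * np.sin(k * d) ** 2)

lam = np.linspace(790e-9, 890e-9, 32768)
k = 2 * np.pi / lam
source = np.exp(-((lam - 840e-9) / 40e-9) ** 2)      # Gaussian source spectrum
delta_z = 0.5e-3
fringes = source * (1 + np.cos(2 * k * delta_z))     # interference spectrum
comb_filtered = fringes * fabry_perot_transmission(k)
```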
SOCT Multiplexing technique using tunable optical frequency comb generator
In our experiments we selected FSR = 4 δk, so that the peaks of the optical frequency comb are clearly separated. The exact positions of the comb peaks on the CCD detector are then used to decode the interference signal. Each sample of the spectral fringe pattern is obtained from the part of the detector illuminated by a particular comb line. Thus the resulting interference signal is composed of samples given directly in the k domain. This, however, makes the number of reconstructed signal points four times smaller than the number of pixels of the detector, which results in a four-fold reduction of the imaging range (area with M = 1 in Fig. 6). To overcome this drawback, we propose to use a so-called multiplexing technique. In this new method, the interference pattern is sampled M times at a given point of the object. This can be expressed by an index mapping in which j indexes the points of the resulting multiplexed signal, l indexes the samples of a single comb record (running from 0 to the number of detected comb lines) and m indicates the particular comb measurement (running from 0 to M − 1). Each set of samples is obtained by shifting the comb over the CCD detector by a fraction of the FSR. The comb is effectively shifted by changing the gap width d inside the F-P interferometer. Such a change slightly alters the FSR; however, this change is relatively small with respect to the changes of the comb line positions (a different set of wavelengths is transmitted by the F-P interferometer). During post-processing, the multiplexed signal G_MUX(j) is obtained by a summation over the individual comb records. Since G_MUX(j) is represented in k-space, it can be directly Fourier transformed to provide a single A-scan. For the special case of M = 1 there is no multiplexing, and a single A-scan is obtained from a single set of samples. To obtain an imaging range similar to that of a standard SOCT system, four measurements per single axial scan (M = 4 in Fig. 6) should be performed with consecutive shifts of FSR/4. To reconstruct the final interference signal, these four spectral fringes have to be multiplexed. Figure 6 compares the sensitivity drop of standard SOCT (the same as the black line in Fig. 5) with the calculated exponential function describing the signal drop due to the convolution of the fringes with the comb shape. Comb lines are assumed to illuminate every fourth pixel of the CCD camera. The maximal imaging depth, as compared to standard SOCT, is reduced four times for M = 1, is the same for M = 4, and is 50% larger for M = 6. The signal loss at the corresponding maximal depth is −3.1 dB for M = 4 and −4.5 dB for M = 6. The effective number of samples can be higher than the number of pixels of the detector, resulting in an expansion of the imaging depth. It must be noted that a simple increase of the number of pixels in the CCD camera of a standard SOCT system would theoretically increase the imaging depth, but because of the dramatic loss of sensitivity there would in practice be no improvement in imaging range.
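A minimal sketch of the multiplexing step is given below. It assumes the interleaving convention j = l·M + m for combining the M comb-shifted acquisitions into one evenly k-sampled record before Fourier transformation; the paper does not spell out this index mapping, so it should be read as an illustrative assumption rather than the authors' exact implementation.

```python
import numpy as np

def multiplex_a_scan(comb_records):
    """Interleave M comb-shifted acquisitions into one A-scan.

    comb_records : array of shape (M, L); row m holds the L samples extracted
    from the detector for the m-th comb position (shifted by FSR/M relative to
    the previous one). The interleaving j = l*M + m is an assumed convention.
    """
    M, L = comb_records.shape
    g_mux = np.zeros(M * L)
    for m in range(M):
        for l in range(L):
            g_mux[l * M + m] = comb_records[m, l]
    # the multiplexed signal is already evenly sampled in k, so a direct FFT
    # of the mean-subtracted record yields the depth profile (A-scan)
    return np.abs(np.fft.rfft(g_mux - g_mux.mean()))

# example: M = 4 acquisitions of 512 comb-line samples each (synthetic data)
records = np.random.rand(4, 512)
a_scan = multiplex_a_scan(records)
```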
Apparatus and results
Figure 7 shows a schematic drawing of the experimental setup used as a demonstration system. The light source BLS (Broadlighter T840, Superlum, Moscow) with a broad spectrum, an optical isolator OI and a tunable fiber Fabry-Perot interferometer FPI (Micron Optics, FSR = 89 GHz, BW = 1.62 GHz, Finesse = 55) form the optical frequency comb generator OFCG. Figure 8 shows images of the cornea and retina of a human eye in vivo obtained with the described method for M = 1 (i.e. without multiplexing). Both objects are imaged with high axial resolution; however, the imaging range is only just equal to the depth of the object. To avoid the limited imaging range, we performed experiments using the multiplexing technique for M = 4 and M = 6 and compared them with the result of the standard SOCT technique. The multiplexing method for M = 4 covers the same imaging range as the standard system, while for M = 6 the imaging range expands 1.5 times. Figure 9 shows the results of measurements of the depth-dependent sensitivity. The predicted values of the maximal signal drop for the multiplexing method, −3.1 dB for M = 4 and −4.5 dB for M = 6, are in agreement with the experimental results. This experiment shows that the described method gains approximately 14 dB of sensitivity at the end of the axial measurement range (3.4 mm) compared to the standard technique. It also shows that it is possible to extend the measurement range to 5 mm with a decrease of sensitivity of less than 5 dB. To demonstrate the applicability of the multiplexing method, we examined the anterior chamber of a porcine eye in vitro. Figure 10(a) shows the result obtained by the standard SOCT technique. With this method we could image only part of the anterior segment due to the limited imaging range as well as the reduced sensitivity. Figure 10(b) displays the effect of the multiplexing method with M = 4, which covers approximately the same axial measurement range. Because of the increased sensitivity the folded image of the iris becomes visible. In order to image the entire anterior segment without iris folding we applied the multiplexing technique with M = 6 (Fig. 10(c)).
Since our preliminary setup of the OFCG reduces the power of the object beam to 25 μW, the power of light illuminating the eye in the standard SOCT experiment was also attenuated to this level to ensure comparable conditions for both experiments. To partially compensate for the low power of the incident light, the exposure time was increased to 150 μs per single signal registration in all measurements. Because of post-mortem changes in the porcine eye the crystalline lens is not visible in Fig. 10. Another experiment using the multiplexing technique, performed on another porcine eye, enabled reconstruction of the entire anterior chamber with the anterior surface of the crystalline lens visible (Fig. 11).
Discussion
The technique presented in this contribution overcomes several limitations of high-resolution SOCT such as the limited imaging range, partial aliasing and the sensitivity drop with depth, as demonstrated in Figs. 9, 10 and 11. The price to be paid for these improvements is an M-fold increase in the measurement time. This inconvenience can potentially be reduced by replacing CCD cameras with CMOS photodetectors. In this case it is possible to read out a reduced number of pixels from the entire photosensitive array in proportionally shorter time. Therefore, the multiplexing method can be used in SOCT instruments comprising CMOS detectors without losing measurement time. A further increase of the measurement time is due to the considerable reduction of optical power caused by the F-P filter. This can be solved either by using light sources with higher power in combination with the Fabry-Perot interferometer or by employing an active OFC generator [33].
The multiplexing technique relies on shifting the comb by changing the air gap d in the Fabry-Perot interferometer, which slightly influences its FSR. Because of that, the consecutive combs are not shifted in an ideally parallel manner. The maximal deviation at both edges of the spectrum reaches ±12% of the pixel width. If a signal is sampled imperfectly in k-space, artificial image repetitions appear after Fourier transformation. Since the effect is deterministic, it is corrected in the numerical post-processing. However, due to imperfections of the numerical processing, artifacts still remain and are visible in Fig. 10(b) and (c). Further improvement of the software and of the electrical stability of the driving system should effectively reduce this effect. A combination of the uneven distribution of comb peaks on the detector array and the decreased sensitivity drop-off can also cause the presence of double-aliased images. Such an artifact is visible in Fig. 8(b) as a part of the retinal structure in the low-frequency area of the cross-sectional image. This effect can be removed by careful adjustment of the spacing between the comb peaks (FSR of the Fabry-Perot interferometer) so that the real and the double-aliased images superimpose.
The image reconstruction in the presented method can also be affected by sample motion. This is a common problem of methods based on multiple registrations of the spectral fringe signals [25,29,34,35]. Thus, a reduction of the registration time in the proposed method is indispensable to minimize motion artifacts. A relatively long repetition time was chosen to avoid the influence of a mechanical resonance close to 50 kHz observed in our FPI device. First we had to increase the time constant of the electronics driving the FPI, and then we had to delay the CCD camera trigger to ensure that the FPI has settled to within 99% of its final position before the integration of light proceeds. A higher light power on the object would permit a corresponding reduction of the measurement time, which makes the requirement of object stability easier to fulfill.
In order to obtain a very stationary comb over the exposure time, the Fabry-Perot filter is driven by a staircase signal. Because of the high capacitance of the F-P interferometer (2.2 μF) the rise time of this signal increases the repetition time between consecutive spectra registrations to 400 μs. This problem might be reduced by improvements in the driving electronics.
It must be noted that a simple increase of the number of detector pixels in standard SOCT would also theoretically increase the imaging depth, but the accompanying dramatic loss of sensitivity would make this procedure impractical. The interpixel shift technique [30] solves the problem of partial aliasing, but it is also affected by a considerable sensitivity drop. The presented method has the potential to increase the imaging range in a more flexible manner, by software control alone. In contrast, a standard SOCT system needs hardware changes (reconstruction of the spectrometer), while the interpixel shift technique requires mechanical movement of the detector.
Conclusion
A novel Spectral Optical Coherence Tomography method using an Optical Frequency Comb (OFC) is demonstrated. This technique overcomes several limitations of high-resolution SOCT. In the presented method an optical frequency comb is generated with a broadband light source and a Fabry-Perot interferometer. The optical comb can be shifted over the CCD detector by a fraction of one FSR to increase the effective number of samples of the signal. This makes it possible to increase the imaging range while preserving high axial resolution. We presented preliminary data demonstrating the general performance, advantages and limitations of the multiplexing SOCT method using an optical frequency comb. High-quality, high-resolution cross-sectional images of biological samples with increased imaging range were obtained with the presented technique. It was demonstrated that the multiplexing technique expands the imaging depth of the SOCT system to 5.1 mm while preserving 3.5 μm axial resolution over the entire depth. As a result, the entire anterior chamber of a porcine eye was imaged with a high level of detail.
Fig. 1. (a) Simulation of the interferometric signal; dotted line: spectrum of the light source G(ζ); solid line: modulation due to interference. (b) Corresponding Fourier transform after integration within the "pixel" width δζ.
Fig. 2. SOCT amplitudes of the axial point-spread function depending on the axial position for different optical spectral spans. In the presented simulation the cosine signal generated in λ-space is numerically recalculated to k-space. The simulation does not include the signal integration within the particular pixels. The amplitudes are normalized to the value corresponding to z = 0 and the z scale is normalized to the maximal optical path difference z_max for the specific spectral span.
Fig. 3. Analysis of the interpixel crosstalk influencing the performance of the SOCT system. (a) Part of the signal registered by a line-scan CCD detector (inset: the total signal) illuminated with a laser beam tightly focused onto a single pixel. (b) Fourier transform of the intensity signal on a linear scale, corresponding to the fringe visibility loss due to the interpixel crosstalk. The spikes visible in the Fourier-transform graph are caused by coherent noise introduced by the internal electronics of the CCD detector.
Fig. 4. Points representing the maximal signal drop (registered at the end of the axial measurement range) as a function of the focal length of the imaging lens, measured and calculated for a CCD camera model Aviiva M4 CL2014 from Atmel. The black solid line shows the calculated signal drop caused by the finite spot size at the detector.
Fig. 5. Reduction of the fringe visibility as a function of the optical path difference z. The plot shows the separate effects: finite pixel size (red) calculated theoretically, aliasing (blue) found by simulation and spot size (green) determined experimentally for a spectrometer objective focal length of f = 200 mm. The solid black line is the cumulative effect; the squares represent experimental data. The signal power drop can be as high as −19 dB.
Fig. 6. Comparison of the theoretical sensitivity drop in standard (red) and improved (black) SOCT as a function of the reduced imaging range. Both curves were calculated with parameters related to the experiment. Gray areas correspond to the imaging ranges for standard SOCT and for the multiplexing method with different M.
Fig. 7. (a) SOCT system setup using an optical frequency comb generator OFCG; (b) optical frequency comb signal registered by the spectrometer. BLS - broadband light source, OI - optical isolator, FPI - tunable Fabry-Perot interferometer, AMP - amplifier, C - coupler, PC - polarization controllers, L1-6 - lenses, F - neutral density filter, D - prism pair for dispersion compensation, RM - reference mirror, GS - galvoscanner, OB - object, DG - diffraction grating, DRV - control unit. Note that the modulation of the comb is not 100%; this is due to the phenomena deteriorating the spectrometer resolution described in Section 2.
Fig. 8. Cross-sectional images of a human eye in vivo obtained by the SOCT system using the optical frequency comb for M = 1: (a) cornea, (b) foveal region of the retina.
Fig. 9. SOCT sensitivity drop as a function of the optical path difference for standard SOCT (black dots) and SOCT using the optical frequency comb in multiplexed measurements for M = 4 (red rhombs) and M = 6 (blue squares).
Fig. 10. Cross-sectional image of the anterior chamber of a porcine eye in vitro obtained by (a) standard SOCT, and by multiplexing SOCT using the OFC for (b) M = 4 and (c) M = 6.
Fig. 11. Cross-sectional image of the anterior chamber of a porcine eye in vitro obtained by the multiplexing SOCT technique with M = 6. In this image the anterior surface of the crystalline lens is also visible.
"Engineering",
"Physics"
] |
Sensorless Modeling of Varying Pulse Width Modulator Resolutions in Three-Phase Induction Motors
A sensorless algorithm was developed to predict rotor speeds in an electric three-phase induction motor. This sensorless model requires a measurement of the stator currents and voltages, and the rotor speed is predicted accurately without any mechanical measurement of the rotor speed. A model of an electric vehicle undergoing acceleration was built, and the sensorless prediction of the simulation rotor speed was determined to be robust even in the presence of fluctuating motor parameters and significant sensor errors. Studies were conducted for varying pulse width modulator resolutions, and the sensorless model was accurate for all resolutions of sinusoidal voltage functions.
Introduction
Electric motors have been an important component of industry for well over 100 years, and are especially so today. Electric motors are advantageous as they are extremely efficient, environmentally friendly, and simple to control, much more so than mechanical engines, which often require complex transmissions. As a result, electric vehicles (ELV) are becoming more and more common on the road, with hybrids such as the Toyota Prius, plug-in hybrids such as the Chevy Volt, and all-electric cars such as the Tesla becoming much more mainstream. The US Navy is heavily invested in electric motors for the modern Ford (CVN-78) class aircraft carrier, with the new Advanced Arresting Gear (AAG) and Electromagnetic Aircraft Launch System (EMALS) utilizing many new motor technologies to improve aircraft launch and recovery. Finally, while subways have used electric motors for well over a century, today advanced high-speed trains are being built all over the world. With every passing year, society becomes more and more dependent on the benefits of electric motor technology.
By far the biggest advantage of electric motors is that they are relatively easy to control. Most mechanical engines, whether internal combustion, gas turbine, or steam driven, require a complex mechanical transmission to give the operator some level of control over the final speed and torque output of the engine. The speed and torque of an electric motor, however, can be controlled by simply adjusting the magnitude and frequency of the power input. One common tool for electric motor control is a Pulse Width Modulator (PWM), which can be used to control the input electric frequency, and thereby the torque and speed [1]. It is essential to fully understand all of the effects of the PWM so that a proper control algorithm can be developed for the electric motor.
One important feature of electric motor technology is the ability to have sensorless control. In any motor controller, it is necessary to know the rotor position and speed in order to determine what electric inputs to use for the given application. In an ELV, for example, the cruise control or the driver may want to increase or decrease the motor power depending on the motor speed. An external speed sensor and encoder can be used to measure the rotor speed, but such a system will take up space and weight, as well as be a potential point of failure on an electric motor system. If the encoder is replaced with a sensorless approach that only measures the motor current to determine the speed, the system may be built smaller and more compact. If the sensorless approach is used in combination with a speed encoder, the sensorless system can provide information on the current in real time, which helps to alert the user if there is an issue with the electric motor system. Finally, adding the sensorless approach to complement a traditional encoder adds redundancy to the system in the event of encoder failure.
There have been numerous previous journal publications on sensorless approaches for electric motors [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18], and even for three-phase motors [19][20][21]. These modeling approaches, however, have overwhelmingly been analytical, with little focus on the effects of the PWM. The PWM has a series of discrete pulses, with a fixed resolution, where the direct current input can be either on or off. There are several PWM approaches [1], but they result in an approximation of an electric voltage cycle; the greater the PWM resolution, the more the electric current represents a sinusoidal waveform. This effort will seek to model the input current, not as a waveform, but as the discrete components one can expect in a PWM waveform of finite resolution.
Electric Motor Model
This effort will seek to model a three-phase electric induction motor, driven by DC power modified by a PWM, in order to compare against and test any sensorless algorithm. In three-phase electric power there are three alternating voltage legs offset by 120°, where u_as(t) = U_M · cos(ω_f t), with the other two legs shifted by ∓120°; here U_M (volts) is the average voltage amplitude and ω_f (radians/s) is the electric voltage angular frequency. In order to simplify the motor analysis, a Park Transform [1] is performed to convert the voltage and current into two separate values instead of three. The Park Transform matrix P_T is defined in terms of a reference angle θ. For a stationary reference frame, θ = 0, which will be used throughout this effort, the Park Transform reduces to a constant matrix, and this matrix can be applied to the three-phase voltage vector with the values defined in Eq (1). As long as the three-phase input is balanced, where all three legs are of equal magnitude and separated by 120°, the Park Transform converts three separate voltage values into two, which simplifies the motor analysis. The Park Transform is likewise used to convert the three-phase currents into two separate values, referred to as the quadrature and direct axis components. Throughout the analysis there are two voltages within the stator, u_qs and u_ds, two voltages within the rotor, u_qr and u_dr, two currents within the stator, I_qs and I_ds, and two currents within the rotor, I_qr and I_dr. During the motor simulation, all of the voltage and current parameters in the rotor and stator from the previous time-step are known and used in the analysis, along with the true rotor speed. For the sensorless prediction, however, the algorithm can only work with measurable properties such as the voltage and current in the stator; the rotor currents obviously cannot be measured directly, and the goal of the algorithm is to determine the unknown rotor speed. The next step is to calculate the quadrature and direct axis components of the rotor voltage from the known rotor speed ω_r (rad/s), the rotor and stator currents from the previous time-step, and the new stator voltages at the next time-step. The zero-axis component is not necessary in this specific model for predicting the torque. The current rates of change start off as zero and are carried into and out of the model, and the rotor voltages can be used to calculate the rate of change of the (quadrature and direct axis) rotor and stator currents through the matrix relation of Eq (5) between the voltage vector U_qd0^sr and the current vector I_qd0^sr. The entries of the matrices in Eq (5) are built from the motor parameters, where R_r (Ω) is the rotor resistance, R_s (Ω) is the stator resistance, L_ms (H) is the stator magnetizing inductance, L_ls (H) is the stator leakage inductance, and L_lr (H) is the rotor leakage inductance. The change in current over a time-step Δt is calculated simply by ΔI = (dI/dt) · Δt. The model uses iteration to converge on both the previous time-step current I_0 and the current rate-of-change dI/dt. The initial guess for the new time-step is simply the current data from the previous time-step.
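A minimal numpy sketch of the stationary-frame (θ = 0) transform applied to a balanced three-phase set is given below. The 2/3 scaling and the sign convention are assumptions made for this illustration, since the paper's exact matrix is not reproduced above; several equivalent conventions exist in the literature.

```python
import numpy as np

# Stationary-frame (theta = 0) three-phase to qd transform.
# The 2/3 (power-variant) scaling and the sign convention are assumed here.
P_T = (2.0 / 3.0) * np.array([
    [1.0, -0.5,           -0.5],
    [0.0,  np.sqrt(3)/2,  -np.sqrt(3)/2],
])

def abc_to_qd(u_abc):
    """Convert a balanced three-phase vector [u_as, u_bs, u_cs] to [u_qs, u_ds]."""
    return P_T @ np.asarray(u_abc)

# balanced three-phase voltages with U_M = 10 kV and an electric frequency of 6 Hz
U_M, w_f = 10e3, 2 * np.pi * 6.0
t = 0.01
u_abc = U_M * np.cos(w_f * t + np.array([0.0, -2 * np.pi / 3, 2 * np.pi / 3]))
u_qs, u_ds = abc_to_qd(u_abc)   # with this convention u_qs equals U_M*cos(w_f*t)
```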
Throughout the motor simulation, the motor parameters are offset by a random fluctuation. In any practical application the motor parameters will always fluctuate from the published or measured values, and therefore, in order to test the robustness of the model, the simulation offsets the simulated motor parameters by a specified amount (typically 5%); these offset parameters are not used within the sensorless algorithm. The motor fluctuation equation perturbs each parameter, where Θ is the transient (perturbed) motor property, Θ_0 is the motor property as specified, a percent random error (%) sets the magnitude of the simulated fluctuation, and δ is a random number ranging from 0 to 1. Finally, once the rotor and stator currents are determined, the motor torque T_motor (N·m) can be calculated from the quadrature and direct axis currents, where P is the number of poles in the motor. This torque is used to determine the change in speed of the rotor and, ultimately, of the vehicle powered by the motor.
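The sketch below illustrates one way these two pieces could be coded. Both the symmetric perturbation formula and the torque expression are assumptions: the paper only states that parameters are offset by a percentage using a random number, and the torque constant shown is a common textbook form of the induction-machine torque in the qd frame, not necessarily the paper's exact equation.

```python
import numpy as np

def perturb(theta_0, err_frac, rng=np.random.default_rng()):
    """Randomly offset a nominal motor parameter theta_0 by up to +/- err_frac.

    The symmetric form theta_0 * (1 + err_frac * (2*delta - 1)), with delta
    uniform in [0, 1], is an assumed realisation of the fluctuation equation.
    """
    delta = rng.random()
    return theta_0 * (1.0 + err_frac * (2.0 * delta - 1.0))

def electromagnetic_torque(P, L_ms, i_qs, i_ds, i_qr, i_dr):
    """One common qd-frame torque form (assumed for illustration):
    T = (3/2)(P/2) * L_ms * (i_qs*i_dr - i_ds*i_qr)."""
    return 1.5 * (P / 2.0) * L_ms * (i_qs * i_dr - i_ds * i_qr)

R_s_nominal = 0.05                      # ohm, illustrative value only
R_s_sim = perturb(R_s_nominal, 0.05)    # 5 % random offset seen only by the simulator
```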
Sensorless Algorithm
All of the equations in the previous section serve to accurately model a three-phase electric motor for a given electrical input; all of the parameters, including the motor parameter fluctuation, the rotor speed, and the rotor voltages and currents, are used in the motor model. The goal of the sensorless algorithm, however, is to determine the rotor speed that is unknown to the motor controller. It is obviously impossible to measure the rotor voltage and current in real time, but the stator currents can be measured and compared with the controller input voltages. The sensorless algorithm will therefore attempt to determine the unknown rotor speed, rotor voltage, and rotor current from the known stator input voltage out of the PWM, as well as from the stator current (which can be physically measured in real time) determined by the motor model. The sensorless algorithm first attempts to determine the rotor currents; these can be determined from the stator flux. The rate of change of the stator flux can be calculated from the known stator voltages and currents, dΨ_qs/dt = u_qs − R_s · I_qs and dΨ_ds/dt = u_ds − R_s · I_ds (Eq (14)), and the stator flux is determined from the calculated rate of change and tracked throughout the simulation. At any point of no voltage, such as at the start of the simulation, the flux is initially assumed to be zero. Once the stator flux is predicted, the rotor currents can be calculated, and the predicted rotor current values are tracked throughout the simulation. Unlike the stator flux, the rotor flux is not useful in this sensorless algorithm, but it can nevertheless be determined from the rotor currents. Looking at Eq (5), a function for the rotor speed ω̂_r (rad/s) can be found provided one knows all of the voltages. The stator voltages are known, but the rotor voltages need to be predicted. If the rotor speed is known, the rotor voltages can be calculated with Eqs (3) and (4); since the rotor speed is not known, the sensorless algorithm splits these equations into terms that are and are not a function of the rotor speed, u'_qr = û'_qr1 + û'_qr2 · ω̂_r (and similarly for the direct axis component). These terms can be used to assemble new voltage vectors, which are substituted into the equation for the rate of change of the current (Eq (5)). The resulting relation can be separated into terms that do and do not multiply the rotor speed, and thus the rotor speed ω̂_r can be isolated and solved for.
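The first two steps of this algorithm are simple enough to sketch directly. The flux update follows Eq (14) with a forward-Euler step; the rotor-current estimate assumes the standard flux-linkage relation Ψ_qs = (L_ls + L_ms)·I_qs + L_ms·I_qr (and likewise for the direct axis), which is an illustrative assumption since the paper's exact expression is not reproduced above.

```python
def update_stator_flux(psi_qs, psi_ds, u_qs, u_ds, i_qs, i_ds, R_s, dt):
    """Integrate Eq (14), dPsi/dt = u - R_s * i, with a forward-Euler step."""
    psi_qs += (u_qs - R_s * i_qs) * dt
    psi_ds += (u_ds - R_s * i_ds) * dt
    return psi_qs, psi_ds

def estimate_rotor_currents(psi_qs, psi_ds, i_qs, i_ds, L_ls, L_ms):
    """Estimate rotor currents from the predicted stator flux.

    Assumes Psi_qs = (L_ls + L_ms)*i_qs + L_ms*i_qr (standard flux linkage);
    the corresponding relation is assumed for the direct axis.
    """
    L_s = L_ls + L_ms
    i_qr_hat = (psi_qs - L_s * i_qs) / L_ms
    i_dr_hat = (psi_ds - L_s * i_ds) / L_ms
    return i_qr_hat, i_dr_hat
```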
Model Parameters
This effort is not seeking to model a particular ELV, only an arbitrary scenario to demonstrate this sensorless algorithm and to show that its accuracy is immune both to variation in the PWM resolution and to random fluctuations of the voltage and current sensors. For the sake of this arbitrary simulation, an ELV will be modeled in which the controller ramps up the speed from 0 to 100 mph over 5 minutes, in 30-second increments of 10 mph. The speed will be predicted based solely on measurements of the stator voltages and currents, and the overall distance traveled will be compared to the integrated velocity predicted from the stator voltages and currents. The voltage will be set to the maximum of 10 kV when accelerating, and a cruising voltage of 2.5 kV will be applied when the predicted speed is within 5% of the cruising speed. In addition, a random hill function is applied to adjust the gravity force accelerating or decelerating the ELV; the standard deviation of the random hill grade is ±17.3184°, while the maximum hill angle possible is ±30°. When the angle is known, a vehicle (and thus rotor) acceleration or deceleration is applied from the force of gravity, a_hill = g · sin(θ), where θ is the random hill angle, g is the gravitational acceleration of 9.81 m/s², and a_hill (m/s²) is the acceleration or deceleration of the ELV due to the random hills on the road. This is converted to an angular acceleration, where M_car (kg) is the mass of the ELV, D_W (m) is the diameter of the tire, and ω̇_hill (rad/s²) is the angular acceleration of the wheel axle due to the randomly fluctuating hill. Finally, the model will use a drag coefficient, applying a decelerating force proportional to the square of the forward speed, where ω_RPM is the rotor speed in revolutions per minute and K_drag is an arbitrary coefficient for drag that depends on the density ρ (kg/m³), the surface area A (m²), and the dimensionless drag coefficient C_D; for the vehicle's shape, C_D is a function of the dimensionless Reynolds number. In this simulation the drag coefficient is set at K_drag = 2 N/RPM². A parametric study of the sensorless model under varying conditions was performed. Each 5-minute run was conducted with a consistent time step of 5 milliseconds, but with varying PWM resolutions that ran from 20 Hz to 200 Hz in increments of 20 Hz; PWM angular functions of varying resolution are shown in Fig 1. These PWM resolutions and the time-step were selected relative to the simulated electric motor frequency of 6 Hz. To further test the robustness of the sensorless algorithm, after every minute of simulation the overall motor parameters fluctuate randomly by 1%; the adjusted motor parameters are unknown to the sensorless prediction algorithm. Finally, the sensor error range was varied from 5% to 45% in increments of 10%; this percent error was randomly applied to the measured stator voltages and currents at each time step to validate the robustness of the prediction method. These fifty independent computational tests were validated by comparing the integrated simulated and predicted distances traveled at every time-step increment.
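For completeness, the following sketch collects the vehicle load terms in the simplest forms consistent with the description above. The gravity term a_hill = g·sin(θ), the rolling-without-slip conversion to wheel angular acceleration, and the speed-squared drag load are all assumptions made for illustration; the paper's exact load equations are not reproduced here.

```python
import numpy as np

def hill_acceleration(theta_rad, g=9.81):
    """Linear acceleration (m/s^2) of the vehicle on a hill of angle theta,
    assuming a_hill = g * sin(theta) from the gravity-force description."""
    return g * np.sin(theta_rad)

def wheel_angular_accel_from_hill(a_hill, D_W):
    """Convert the linear hill acceleration to a wheel angular acceleration,
    assuming rolling without slip: omega_dot = a_hill / (D_W / 2)."""
    return a_hill / (D_W / 2.0)

def drag_force(omega_rpm, K_drag=2.0):
    """Decelerating load proportional to the square of the speed,
    F_drag = K_drag * omega_rpm**2 with K_drag in N/RPM^2."""
    return K_drag * omega_rpm ** 2
```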
Model Results
In each of these fifty simulations, at each of the 60,000 time steps, the distance traveled was calculated by numerically integrating the true analytical velocity. A second, predicted distance was determined by numerically integrating the velocity predicted from the measured stator voltage and current. It is qualitatively obvious in Fig 2a that the predicted velocity at each specific time-step varies considerably from the actual velocity; however, the numerically integrated distance traveled is observed in Fig 2b to match remarkably well.
At every time-step, the percent error of the distance traveled was calculated. The sensorless algorithm is considered to have matched if the predicted distance traveled varied from the actual distance traveled by no more than 0.01%. As observed in Table 1, the predicted integrated distance traveled matches remarkably well; only a few percent of the time-steps varied by more than this threshold. There was an expected increase in error with increasing voltage and current sensor fluctuation; this error was very minor, as the prediction method managed to correct itself. In addition, the sensorless prediction method had no discernible change in error rate with differing PWM resolutions, even as the PWM resolution was increased by an order of magnitude. This parametric study has demonstrated the robustness of this prediction method for determining the velocity and distance traveled by a vehicle without direct measurements of the wheel speed.
Conclusion
This effort succeeded in first building a numerical motor model [1] to replicate the controls of an arbitrary ELV traveling through random hilly conditions. A sensorless rotor-speed prediction algorithm was then developed, which used analytical equations [1, 2, 4, 19-21] to predict the rate of change of the stator flux, from which (based on the previous time-step flux) the stator flux itself could be predicted. This flux can be used to predict the rotor currents, which are ultimately used to predict the rotor voltages, the rotor flux, and, most importantly, the unknown rotor speed. This method allows the motor controller to have redundancy on the rotor speed sensor, or even to disregard it if size and weight are an issue; the only measurements needed are the stator voltages and currents.
Of course, sensorless motor modeling is nothing new. What makes this technique unique is the emphasis on AC signals that are not quite sinusoidal due to limits in the resolution of the PWM. This sensorless prediction method has been validated to be extremely robust, even in the presence of unknown motor fluctuations and with voltage and current sensors fluctuating by up to 45% of the true value, and it can be used for any PWM output. Provided the sensorless time-step is small enough, the sensorless model can work with any alternating electrical current function, provided it is separated into three separate legs. As a result, this technique, which only requires a computer and a voltage and current sensor in the motor's stator, can contribute greatly to ensuring greater reliability and safety for critical applications of three-phase induction motor controllers.
"Engineering"
] |
Permeability Estimation of Regular Porous Structures: A Benchmark for Comparison of Methods
The intrinsic permeability is a crucial parameter to characterise and quantify fluid flow through porous media. However, this parameter is typically uncertain, even if the geometry of the pore structure is available. In this paper, we perform a comparative study of experimental, semi-analytical and numerical methods to calculate the permeability of a regular porous structure. In particular, we use the Kozeny–Carman relation, different homogenisation approaches (3D, 2D, very thin porous media and pseudo 2D/3D), pore-scale simulations (lattice Boltzmann method, Smoothed Particle Hydrodynamics and finite-element method) and pore-scale experiments (microfluidics). A conceptual design of a periodic porous structure with regularly positioned solid cylinders is set up as a benchmark problem and treated with all considered methods. The results are discussed with regard to the individual strengths and limitations of the used methods. The applicable homogenisation approaches as well as all considered pore-scale models prove their ability to predict the permeability of the benchmark problem. The underestimation obtained by the microfluidic experiments is analysed in detail using the lattice Boltzmann method, which makes it possible to quantify the influence of experimental setup restrictions.
Notation: F^P_ij - pressure interaction force acting from particle j on particle i, [N]; F^V_ij - viscous interaction force acting from particle j on particle i, [N].
Introduction
The characteristics of flow through porous media play an important role in a wide range of natural and industrial applications. A classical example is found for soil through which groundwater seeps. In this context, we can intuitively understand the terms porosity and permeability. Porosity is a measure of the void space in the porous medium, whereas permeability is a measure for the resistance of the porous medium itself against flow. In this regard, the famous law of Darcy (1856) provides a correlation between flow and the corresponding driving force in a porous medium via
v = −(K/μ) ∇p.   (1)
Here, v is the fluid's filter (Darcy) velocity, p is the pressure, μ is the dynamic viscosity and K is the second-order intrinsic permeability tensor. In particular, the Darcy velocity v = φ w_F is the fluid's seepage velocity w_F weighted by the porosity φ. Gravitational forces are neglected in (1). The computation of the total (void) porosity for porous media with a known geometrical structure is straightforward. However, this is not the case for the permeability. The permeability is crucial for physically consistent modelling and accurate numerical simulations of flow and transport processes in porous media, but it is associated with great uncertainty. Therefore, it deserves our particular attention.
In this paper, we use the following definition of scales to distinguish the relevant processes and parameters accordingly. The intrinsic permeability is an effective parameter that accounts on the scale of a representative elementary volume (REV scale) for the geometric configuration on the pore scale. The REV scale is the one where Darcy's law is valid; it is therefore often denoted also as the Darcy scale. Our proposed methodology involves different mathematical and numerical models that all resolve the flow through the porous medium on the pore scale. It is compared to an experimental determination of permeability, which is inherently at the REV scale. For further details on permeability and porosity as effective REV-scale parameters, we refer to e.g. Helmig (1997); Hommel et al. (2018).
Different experimental, semi-analytical and numerical techniques to compute or estimate the permeability exist in the literature. The intrinsic permeability can be determined experimentally by imposing a flux, measuring the corresponding pressure drop and applying Darcy's law. In recent years, microfluidic experiments have been increasingly used for various experimental investigations of porous media, cf. Yoon et al. (2012); Karadimitriou and Hassanizadeh (2012); Scholz et al. (2012).
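For an experiment of the kind just described, the back-calculation of an effective (scalar) permeability from an imposed flux and a measured pressure drop follows directly from Darcy's law. The sketch below uses the benchmark channel dimensions; the flow rate, pressure drop and viscosity are placeholder values, not measured data.

```python
def permeability_from_darcy(Q, dp, mu, length, width, height):
    """Effective scalar permeability K = mu * L * Q / (A * dp) from Darcy's law,
    with A = width * height the cross-section perpendicular to the flow."""
    A = width * height
    return mu * length * Q / (A * dp)

# benchmark cell geometry; flow rate and pressure drop are illustrative only
K = permeability_from_darcy(
    Q=1.0e-11,            # m^3/s
    dp=100.0,             # Pa
    mu=1.0e-3,            # Pa*s, water-like viscosity
    length=14e-3, width=10e-3, height=0.091e-3,
)
```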
Several approximations for the semi-analytical description of the permeability-porosity relationship exist, the most famous being the Kozeny-Carman equation, cf. Kozeny (1927); Carman (1997). In this context we mention also the Hazen relation (Hazen 1892), the power-law relation by Verma and Pruess (Verma and Pruess 1988) and the Timur and Morris-Biggs relations (Timur 1968; Morris et al. 1967). For a recent overview of porosity-permeability relationships for evolving porous media we refer the reader to Hommel et al. (2018). These relations have in common that they try to describe all variability using the porosity and one or several fixed scaling factors. In practice, these scaling factors are often fitted to the observed permeability data, if available. However, the structure and the permeability of a porous medium are not solely described by its porosity; the same porosity can yield different permeability values, e.g. Schulz et al. (2019). The shape, distribution, interfacial tension and roughness of the grains as well as the coordination number, connectivity and size of the pores significantly affect the permeability of the medium (Berg 2014; Millington and Quirk 1961; Valdes-Parada et al. 2009). However, this information is unfortunately unknown for most settings. The Kozeny-Carman relation includes geometrical information through the tortuosity, which can be estimated for simple grain packings (Yazdchi et al. 2011). An extensive review concerning the validity of the Kozeny-Carman relation and its modifications for different porous-medium geometries is provided in Schulz et al. (2019).
Different averaging techniques can be applied to compute the permeability: the homogenisation theory based on two-scale asymptotic expansions, the volume averaging theory and the numerical upscaling (Whitaker 1999;Hornung 1997;Auriault et al. 2009;Gray and Miller 2014). The theory of homogenisation provides a useful tool to efficiently compute the permeability of regular porous media requiring solutions to flow problems in a periodic unit cell. Another advantage of the method is that it is not restricted to the scalar case. Depending on the geometrical characteristics (e.g. thin porous media, parabolic velocity profile) different assumptions can be made and, therefore, different flow problems in unit cells are solved (Fabricius et al. 2016;Chamsri and Bennethum 2015). Volume averaging (Whitaker 1999;Gray and Miller 2014) offers an alternative technique to compute permeability based on solving local flow problems. However, under periodic closure conditions, which is the case for the benchmark problem considered in this work, the resulting set of equations coincides with the one obtained from homogenisation (Valdes-Parada et al. 2009). Therefore, volume averaging is not considered in this paper.
Finally, solutions of the pore-scale resolved models can be obtained using the lattice Boltzmann method (LBM), Smoothed Particle Hydrodynamics (SPH) or the Finite Element Method (FEM) and the simulation results can then be upscaled in order to compute the intrinsic permeability (Pan et al. 2001;Sivanesapillai et al. 2014).
In reservoir simulations with a length scale of kilometres, a field scale is typically introduced. In this case, Darcy's law is considered to be valid both on the Darcy scale and on the field scale. The Darcy-scale permeability tensor is usually highly heterogeneous and needs to be upscaled in order to develop efficient numerical methods (Farmer 2002; Wu et al. 2002). However, such methods are beyond the scope of this article. In the literature, comparisons of individual methods for permeability estimation exist (Schulz et al. 2019; Chamsri and Bennethum 2015; Song et al. 2019; Sugita et al. 2012; Guibert et al. 2015; Yazdchi et al. 2011). For highly complex geometries, the numerical methods can generally only be applied to a small subsample of the domain, whereas experimental and empirical relations are applied to the domain as a whole, giving inherent uncertainties and inaccuracies due to rock heterogeneities (Song et al. 2019; Guibert et al. 2015). Therefore, we perform a comparative study of a broad variety of methods with a designed benchmark experiment. The goal of this paper is to estimate the intrinsic permeability of an artificially produced regular porous medium using different techniques, to investigate the applicability of each method and to validate the considered approaches for computing effective properties of porous media. The manuscript can serve as a benchmark for researchers working on modelling flow and transport processes in porous media, where the geometric structure is available but the effective parameters are not known (artificially produced composite materials, geometry reconstructed by imaging techniques). The results of the paper will also be of use for scientists working on the development of advanced averaging methods for computing effective properties of porous media.
We would like to draw the reader's attention to the limitations of this work in terms of scales. Since one of the objectives of this work is to compare the numerical schemes with physical experiments, the experimental potential was the limiting factor for the length scales under consideration. Our experimental infrastructure does not allow for formations with ultra-small features (smaller than a few microns). Additionally, even if such features could be achieved with the fabrication method employed for the artificial porous medium, they would induce pressures leading to deformation of the material and thus to additional inaccuracies. Consequently, this experimental approach cannot in general account for very low permeabilities (nano- or micro-Darcy), which would also demand the use of gases instead of liquids and a fundamentally different approach to estimate the corresponding permeability (Klinkenberg effect). In accordance with this limitation, the numerical schemes under investigation and comparison do not incorporate these characteristics and assumptions either.
The paper is organised as follows. In Sect. 2, the setup of the benchmark problem, which serves as the basis for all considered methods, is described. Section 3 presents calculations using the Kozeny-Carman equation providing a semi-analytical porosity-permeability relation. Further, four mathematical homogenisation approaches are given in Sect. 4. In particular, a three-dimensional (3D) approach is described in Sect. 4.1, a classical two-dimensional (2D) approach in Sect. 4.2, a very thin porous medium (VTPM) approach in Sect. 4.3 and a pseudo 2D/3D approach in Sect. 4.4. Numerical methods to compute the intrinsic permeability of the benchmark problem via upscaling the pore-scale simulations are discussed in Sect. 5, namely FEM in Sect. 5.1, SPH in Sect. 5.2 and LBM in Sect. 5.3. The experimental setup and the experimental results of the benchmark problem are presented in Sect. 6. Finally, a comparison of the results is given in Sect. 7 and a concluding discussion is provided in Sect. 8.
Benchmark Problem
As a common basis for the discussion of different permeability estimates, it is crucial to precisely describe the chosen geometry of the porous structure and the related modelling assumptions. As a benchmark example, a simple regular and homogeneous porous medium is chosen. In a 3D domain of dimensions 14 mm × 10 mm × 0.091 mm (length l × width b × thickness h), equidistantly aligned cylinders with the same radius r are embedded, cf. Fig. 1. The radius for the manufactured sample investigated in the experiments is r = 0.4 mm, cf. Sect. 6. In addition, the other methods consider also radii of 0.35 mm, 0.45 mm, 0.47 mm and 0.49 mm. The smallest repeating unit consists of a square of 1 mm edge length. We chose the REV having one cylinder in the centre, which corresponds to the unit cell in this example. The computation of the porosity φ is straightforward and given in Table 1 for different radii. In terms of basic modelling assumptions, we consider no-slip boundary conditions at the internal boundaries (interfaces) of the cylinders as well as at the top and bottom surface of the domain. Furthermore, we assume creeping flow conditions of a viscous liquid at very low Reynolds numbers and, thus, neglect inertia effects. Finally, we assume the solid structure as impermeable and rigid, i. e. no solid deformations.
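For reference, the porosity in Table 1 follows directly from the chosen geometry. The short Python sketch below (not part of the original workflow) evaluates φ = 1 − πr²/a² for the unit cell with edge length a = 1 mm; since the cylinders span the full height, the porosity is independent of the domain thickness.

```python
import math

a = 1.0  # edge length of the periodic unit cell [mm]
for r in (0.35, 0.40, 0.45, 0.47, 0.49):  # cylinder radii [mm]
    phi = 1.0 - math.pi * r**2 / a**2     # porosity of the unit cell
    print(f"r = {r:.2f} mm  ->  phi = {phi:.3f}")
```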
Semi-Analytical Approach
The well-known Kozeny-Carman equation is a semi-analytical, semi-empirical relation for estimating the permeability of porous media (Kozeny 1927; Carman 1997):
k = φ³ / (c_kc σ² (1 − φ)²), (2)
where k is the intrinsic permeability, φ is the porosity, σ is the specific surface area and c_kc is the Kozeny-Carman constant. The porosities φ of the considered geometries are given in Table 1. The specific surface area for cylinders is σ = 2/r. In general, it is difficult to estimate the value of c_kc since it depends on many factors including the flow tortuosity and roughness of the grains. Carman (1997) originally proposed the value c_kc = 5 for granular porous media, which has also been suggested for cylindrical grains by Schulz et al. (2019). We will hence also use c_kc = 5 in the current study. Table 3 lists the values of permeability k obtained from relation (2) for different radii of solid cylinders. A typical procedure in practical applications is to use the Kozeny-Carman constant as a fitting parameter. This is not done here as our goal is to find the permeability values given by the Kozeny-Carman equation without using extra information to fit parameters. However, fitting of c_kc for cylindrical obstacles does not give one unique value of c_kc applicable for several cylinder radii (Yazdchi et al. 2011). Moreover, the Kozeny-Carman equation (2) provides a permeability estimate for porous media which extend over infinite space. Thus, the influence of the walls in the benchmark problem (cf. Fig. 1) is not considered in the Kozeny-Carman approximation, cf. the discussion in Sect. 7. For modifications of the Kozeny-Carman relationship (2) we refer the reader to e.g. Schulz et al. (2019).
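As a quick cross-check of relation (2), the following Python sketch evaluates the Kozeny-Carman estimate for the considered radii, assuming σ = 2/r and c_kc = 5 as stated above; for r = 0.4 mm it reproduces the value of about 3.9 × 10⁻⁹ m² quoted in Sect. 7.

```python
import math

def kozeny_carman(phi, r, c_kc=5.0):
    """Kozeny-Carman permeability for cylindrical grains of radius r [m],
    k = phi^3 / (c_kc * sigma^2 * (1 - phi)^2) with sigma = 2 / r."""
    sigma = 2.0 / r
    return phi**3 / (c_kc * sigma**2 * (1.0 - phi)**2)

a = 1.0e-3  # unit-cell edge length [m]
for r_mm in (0.35, 0.40, 0.45, 0.47, 0.49):
    r = r_mm * 1.0e-3
    phi = 1.0 - math.pi * r**2 / a**2
    print(f"r = {r_mm:.2f} mm: k_KC = {kozeny_carman(phi, r):.2e} m^2")
```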
Mathematical Homogenisation
In this section, we compute the permeability using the theory of homogenisation (Auriault et al. 2009; Hornung 1997). We consider fluid flow through the three-dimensional porous medium Ω with a periodic arrangement of pores, cf. Fig. 1a. We denote by ε = δ/l the ratio of the characteristic pore size δ to the length l of the domain of interest. At the pore scale, flow in the pore space Ω_ε is described by the steady Stokes equations (3), completed with the no-slip condition on the boundary of the solid inclusions and appropriate boundary conditions on the external boundary ∂Ω. We denote by v_ε and p_ε the non-dimensional velocity and pressure of the fluid. This problem formulation ensures non-trivial flow in the range of Darcy's law (Hornung 1997).
As is common in the theory of homogenisation, we define the porous microstructure by periodic repetition of the scaled unit cell Y, which consists of the solid part Y_s and the fluid part Y_f, cf. Fig. 2. To obtain the permeability of the porous medium we follow the classical procedure of homogenisation (Hornung 1997). We assume two-scale asymptotic expansions for the pore-scale velocity and pressure,
v_ε(x) = ε^a (v_0(x, y) + ε v_1(x, y) + …), p_ε(x) = ε^b (p_0(x, y) + ε p_1(x, y) + …), y = x/ε, (4)
where a, b ∈ N_0 depend on the homogenisation approach applied and v_i, p_i are y-periodic functions. Computing the derivatives ∇ = ∇_x + ε^{−1} ∇_y, substituting the expansions (4) into the pore-scale problem (3) and combining terms with the same degree of ε, we obtain Darcy's law (1) valid in the porous-medium domain Ω.
We consider different assumptions for the geometric structure of the medium (full 3D, 2D, very thin porous medium and pseudo 2D/3D) with the appropriate unit cells (circular and cylindrical solid inclusions) and apply four different homogenisation approaches to compute the intrinsic permeability. We use the software package FreeFem++ (Hecht 2012) for numerical simulations of all homogenisation approaches considered in this section.
Classical Three-Dimensional Approach
We consider solid cylinders with height h > 0. Thus the corresponding unit cell Y^phys = Y_s^phys ∪ Y_f^phys is also characterised by the macroscopic height h, cf. Fig. 2b. To compute the permeability by means of homogenisation we solve the following three-dimensional cell problems for j = 1, 2, 3:
−Δ_y w^j + ∇_y π^j = e_j (the j-th unit vector) and ∇_y · w^j = 0 in Y_f^phys, w^j = 0 on the boundary of the solid inclusion and on the top and bottom of Y^phys, w^j and π^j periodic in y_1 and y_2, (5)
taking into account ∫_{Y_f^phys} π^j dy = 0. The non-dimensional intrinsic permeability is given by
k_ij = ∫_{Y_f^phys} w_i^j dy,
where (w^j, π^j) are the solutions to the cell problems (5) for j = 1, 2, 3. For the considered geometry (cylindrical solid inclusions) we obtain a diagonal permeability tensor with identical in-plane entries k. The intrinsic permeability k is computed numerically, the units are taken into account and the physical permeability values are presented in Table 3 for different cylinder radii. Taylor-Hood P2/P1 elements are applied for the velocity and the pressure, respectively. For the solid inclusions with radius r = 0.4 the 3D unit cell Y^phys is partitioned into twelve elements in x_3-direction and approx. 29 000 elements in the whole fluid part of the unit cell Y_f^phys.
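The upscaling step itself is a simple volume average. The following sketch is hypothetical post-processing (not the FreeFem++ implementation used here): given the first component of the cell-problem solution w^1 sampled on a uniform grid over the unit cell, with zeros inside the solid inclusion and on the no-slip boundaries, the entry k_11 is its integral over the cell.

```python
import numpy as np

def k11_from_cell_solution(w1_samples, cell_volume=1.0):
    # Grid average times the cell volume approximates the integral of
    # w^1_1 over Y^phys (the solid part contributes zeros).
    return cell_volume * np.mean(w1_samples)

# toy usage with a synthetic field on a 50 x 50 x 12 grid
w1 = np.random.default_rng(0).random((50, 50, 12)) * 1e-2
print(k11_from_cell_solution(w1))
```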
Classical Two-Dimensional Approach
The classical two-dimensional approach corresponds to the case where no effect from the top and bottom of the porous domain is included, and hence corresponds to the height h of the medium being infinite. This is a very strong simplification which is not valid for our benchmark in general (see the discussion in Sect. 7). However, in comparison to the 3D homogenisation approach the 2D approach is computationally much cheaper. In this case, we define a two-dimensional unit cell Y (Fig. 2a) and obtain the cell problems in two space dimensions (j = 1, 2):
−Δ_y w^j + ∇_y π^j = e_j and ∇_y · w^j = 0 in Y_f, w^j = 0 on the boundary of the solid inclusion, w^j and π^j Y-periodic, (6)
where ∫_{Y_f} π^j dy = 0. With this approach we obtain the non-dimensional permeability
k_ij = ∫_{Y_f} w_i^j dy,
where (w^j, π^j) are the solutions to the cell problems (6) for j = 1, 2. Again, we get a diagonal tensor K = (k_ij)_{i,j=1,2} = diag{k, k} and present the physical permeability values k in Table 3 for different radii. For the discretisation of velocity and pressure the Taylor-Hood finite element pair is used with approx. 20 500 elements for the radius r = 0.4.
We observe that for highly porous structures (r = 0.35, r = 0.4) the permeability is one order of magnitude higher than the one obtained by the 3D approach. This is due to the fact that the 2D approach neglects the wall friction at the top and bottom, compared to the 3D approach, where the no-slip condition is valid at these boundaries.
Very Thin Porous Media
We apply the homogenisation approach proposed in Fabricius et al. (2016) for Very Thin Porous Media (VTPM), where the cylinder height h is much smaller than the distance δ − 2r between the cylinders. In this case, the scaling h = δ² is used for the derivation of the permeability tensor. The main idea is to pass to the limit h/ε → 0 in the 3D approach. In this case we get the cell problems (7) for j = 1, 2, taking into account ∫_{Y_f} w^j dy = 0. The non-dimensional permeability is computed from w^j, the solutions to the cell problems (7) for j = 1, 2. As in Sects. 4.1 and 4.2, we obtain a diagonal permeability tensor K = diag{k, k} and present the physical permeability values k in Table 3. For the numerical simulations P1 finite elements are used, and for inclusions of the radius r = 0.4 the fluid part of the unit cell is partitioned into approx. 20 500 elements. The cell problems (7) are computationally very cheap compared to problems (5) and (6). However, applying the VTPM approach we assume that the height h is much smaller than the interspatial distance between two solid obstacles (Fabricius et al. 2016) and that the fluid flow in x_3-direction is negligible. Therefore, this approach fails for porous media where the distance between the cylinders is smaller than the height. For r = 0.45 the interspatial distance is 0.1 mm > h. The interspatial distances for r = 0.47 and r = 0.49 are 0.06 mm < h and 0.02 mm < h, respectively. Therefore, the computational results for the last two radii in Table 3 (homogenisation VTPM) are not relevant.
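The applicability condition can be checked directly from the geometry of Sect. 2. The small sketch below compares the gap δ − 2r between neighbouring cylinders with the height h = 0.091 mm and flags the radii for which the VTPM assumption becomes questionable.

```python
# Quick consistency check for the VTPM assumption h << (delta - 2r),
# using the geometry values from Sect. 2 (all lengths in mm).
h, delta = 0.091, 1.0
for r in (0.35, 0.40, 0.45, 0.47, 0.49):
    gap = delta - 2 * r
    status = "VTPM questionable" if gap <= h else "VTPM plausible"
    print(f"r = {r:.2f} mm: gap = {gap:.2f} mm -> {status}")
```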
Pseudo Two-/Three-Dimensional Approach
Another simplification of the 3D homogenisation approach is obtained by assuming that the velocity is horizontal and that its vertical variation follows a parabolic profile, since no-slip boundary conditions are imposed and the viscous boundary layer is larger than half of the cell height. Such a simplification is applicable when the height of the domain is dominating the flow profile. The procedure is in accordance with Flekkøy et al. (1995), where the Stokes flow between two parallel plates is considered. In particular, the effect from the top and bottom boundaries is incorporated into the model through a viscous drag force. Hence, this pseudo 2D/3D homogenisation approach also relies on the porous medium being thin, where the thickness is still comparable to the typical pore size. Therefore, only the flow in the middle of the domain (x_3 = h/2) needs to be considered, and the following 2D cell problems (8) are obtained (j = 1, 2). The non-dimensional permeability in this case contains the scaling factor 2/3, which comes from the integral of the parabolic profile. As in Sects. 4.1-4.3, we obtain a diagonal permeability tensor K = diag{k, k} and present the physical permeability values k in Table 3. For the numerical simulations we applied Taylor-Hood elements, and for the setting with radius r = 0.4 mm approx. 20 500 elements are used.
Computational costs for the cell problems (8) are the same as for the classical 2D approach, but the permeability estimate agrees with the 3D approach for moderate cylinder radii (r < 0.47 mm). However, the difference between the 3D approach and the pseudo 2D/3D approach increases for larger radii (r = 0.47 mm and r = 0.49 mm). This is due to the decreasing gaps between the cylinders, which lead to a violation of the parabolic flow profile assumption.
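The origin of the factor 2/3 can be verified with a one-line symbolic computation: averaging the assumed parabolic profile u(z) = 4 u_mid z(1 − z) (normalised height z, hypothetical mid-plane velocity u_mid) over the thickness gives 2 u_mid/3.

```python
import sympy as sp

# Parabolic profile with no-slip at z = 0 and z = 1, peak value u_mid at z = 1/2.
z, u_mid = sp.symbols("z u_mid")
profile = u_mid * 4 * z * (1 - z)
print(sp.integrate(profile, (z, 0, 1)))  # -> 2*u_mid/3
```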
Pore-Scale Models
In this section, three different numerical methods for pore-scale resolved computations are applied to the benchmark problem, namely the FEM, SPH and LBM. In these models, implementations of the Navier-Stokes equations are available and, thus, used in the numerical computations. However, the solution of this more general formulation reduces to the solution of the stationary Stokes problem in the limit case of Reynolds numbers tending towards zero, as it is postulated in the benchmark problem.
Finite-Element Method for the Navier-Stokes Equations
The simulation is realised using the commercial finite-element (FE) software tool Abaqus/CFD. Therein, an FE implementation of the incompressible Navier-Stokes equations is used for the performed computational fluid dynamics (CFD) simulations. For the flow domain, a constant flow rate of 10 µL/min is prescribed at the inlet surface, while the outlet surface is assigned a constant pressure of 0 Pa, cf. Fig. 3a.
The remaining surfaces, i. e. the top, bottom, lateral and internal surfaces of the cylinders, are assigned no-slip boundary conditions. For the spatial discretisation, a finite-element mesh is generated with linear hexahedral elements since quadratic elements are not available in Abaqus for fluids. Linear elements are not optimal to approximate the parabolic velocity profile but are numerically much cheaper. Therefore, the geometry is meshed in total with 327 440 elements using ten elements along the thickness of the structure in order to adequately resolve the arising parabolic velocity profile. For the material properties of the liquid, a density of 1000 kg m−3 and a viscosity of 8.9 × 10−4 Pa s are chosen according to water at 25 °C. For the considered example, a linear pressure drop from the left to the right side (i. e. a constant pressure gradient in the steady state) is obtained, cf. Fig. 3b. The volumetric flow rate through the pore structure is provided as an output variable within Abaqus. Thus, the Darcy filter velocity can be computed by dividing it by the permeable cross-sectional area. Based on these quantities, the intrinsic permeability is obtained via the Darcy equation (1), applied in the flow direction. The estimated permeability values are collected in Table 3.
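The post-processing step described above amounts to a single application of Darcy's law. A minimal sketch follows, using the material and geometry values from the text and a placeholder pressure drop (the actual value is a simulation output, not reproduced here).

```python
# k = mu * q * l / (A * dp): Darcy's law rearranged for the permeability.
mu = 8.9e-4             # dynamic viscosity of water at 25 C [Pa s]
q = 10e-9 / 60.0        # prescribed flow rate, 10 microlitre/min in m^3/s
l = 14e-3               # length of the porous domain [m]
b, h = 10e-3, 0.091e-3  # width and thickness [m]
A = b * h               # cross-sectional area [m^2]
dp = 1.0                # pressure drop from the simulation [Pa], placeholder
k = mu * q * l / (A * dp)
print(f"k = {k:.2e} m^2")
```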
Smoothed Particle Hydrodynamics for the Navier-Stokes Equations
In Smoothed Particle Hydrodynamics (SPH), the discretisation of the governing weakly compressible Navier-Stokes equations spans a set of integration points x_i, also referred to as particles, cf. Monaghan (1988); Gingold and Monaghan (1977). At each of these interacting collocation points, scalar fields of physical properties (cf. Nomenclature) are interpolated by convolution with a continuously differentiable kernel function W(x, R). The kernel radius R determines a sphere of influence and likewise declares neighbouring particles j. The approximation of integrals converts continuous field functions into particle properties (cf. Nomenclature), the kernel representation into a spatial discretisation W(x, R) = W_ij, and differential operators into short-range interaction forces. Hence, the equation of motion to be solved for each fluid particle is found as
m_i dv_i/dt = Σ_j (F^V_ij + F^P_ij) + F^B_i,
where F^V_ij denotes viscous interaction forces, F^P_ij pressure interaction forces and F^B_i body forces. In this context, the implemented SPH formulation is derived on top of the tool HOOMD-blue (Anderson et al. 2008; Glaser et al. 2015; Sivanesapillai et al. 2016), providing state-of-the-art no-slip no-penetration fluid-solid boundaries (Adami et al. 2012), ghost-particle methods for periodic boundaries (Sivanesapillai et al. 2016), a Verlet time integration scheme and stabilisation via an artificial viscosity term (Monaghan and Gingold 1983). Internal surfaces and boundaries in x_3-direction are assigned no-slip conditions. The domain is periodic in x_1- and x_2-directions, cf. Fig. 1a. For the liquid, material parameters of water are chosen. A body force equivalent to a pressure gradient is applied as the driving force.
In terms of the considered benchmark problem, cf. Fig. 1, the arising flow is computed for a single REV, cf. Fig. 1b, as in homogenisation. The REV structure is discretised using a resolution of 200 × 24 × 200 particles for all radii configurations. The discrete structure of the REV and the velocity field in steady state are shown in Fig. 4. Based on the resulting velocity profiles of the SPH simulation, the permeability is computed using the rearranged version (9) of Darcy's law (1). As in Sect. 5.1, the mean particle velocity w_F (seepage velocity) needs to be multiplied by the porosity φ to obtain the filter velocity v used in (1). The computed permeabilities are collected in Table 3.
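For the SPH results, the upscaling is analogous; a minimal sketch is given below, assuming the applied body force ρg is interpreted as an equivalent pressure gradient. All numerical values are placeholders except the fluid properties and the porosity for r = 0.4 mm.

```python
# Darcy's law with the filter velocity v = phi * w_F and dp/dx = rho * g.
mu, rho = 8.9e-4, 1000.0   # water properties [Pa s], [kg/m^3]
g = 0.01                   # body-force acceleration used as driving force [m/s^2], placeholder
phi = 0.497                # porosity for r = 0.4 mm
w_F = 1.0e-4               # mean particle (seepage) velocity [m/s], placeholder
v = phi * w_F              # filter (Darcy) velocity
k = mu * v / (rho * g)
print(f"k = {k:.2e} m^2")
```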
As a basic requisite of the SPH approach the fluid domain is discretised with fluid particles. For a meaningful approximation of the velocity profile in the pore-space passages, at least 10-15 particles are required. For example, with the chosen resolution, a radius r = 0.35 mm results in 60 particles, while r = 0.47 mm allows only 12 particles over the passage. This leads to an enormous increase of the computational effort for larger radii (r > 0.47 mm) due to the need to increase the resolution. Therefore, the SPH is not suited to simulate the benchmark problem for a radius of 0.49 mm with acceptable computational costs.
Lattice Boltzmann Method
The lattice Boltzmann method (LBM) is based on a mesoscopic representation of fictitious particle movements (Succi et al. 1991) and provides the third pore-scale resolved method discussed in this paper. Particles stream on a lattice with discrete velocities and collide to relax to an equilibrium. The probabilities f_i(x, t) of finding a particle with velocity c_i at lattice node x at time t evolve according to the lattice Boltzmann equation
f_i(x + c_i Δt, t + Δt) = f_i(x, t) + Ω_i^BGK(x, t), i = 0, …, Q − 1,
with the Bhatnagar-Gross-Krook (BGK) collision operator Ω^BGK, the time step Δt and the number Q of discrete velocities. The collision operator governs the rate at which the probabilities f_i relax towards the equilibrium distribution
f_i^eq = w_i ρ (1 + (c_i · v)/c_s² + (c_i · v)²/(2 c_s⁴) − (v · v)/(2 c_s²)),
where w_i are the weights of each discrete velocity and c_s is the speed of sound. The discretisation of velocities limits the applicability of the lattice Boltzmann method to flows with a maximum Mach (Ma) number of 0.3 (Krüger et al. 2017), which is the case here (Ma ≈ 0.001). It is commonly known that the continuum flow approximation holds up to a Knudsen number of Kn ≈ 0.001. For the application and flow rates chosen here Kn ≪ 0.001, taking the cylinder diameter (millimetre range) as reference length. Further details on the Knudsen number limits of the LBM can be found in Silva and Semiao (2017); Silva (2018), whereas He and Luo (1997) provide a detailed derivation of the lattice Boltzmann equation from the Boltzmann equation. The LBM is not directly applicable for high-speed flows (Re > 10 000) as the spatial and temporal resolutions required for such flows become computationally prohibitive (Jain et al. 2016). For the benchmark problem, Re < 1.0 (Fig. 8b). Therefore, the lattice Boltzmann equation is valid within the limits for the application considered here. The macroscopic velocity v and pressure p are obtained from the particle distribution functions, cf. Succi et al. (1991). For this benchmark problem, a D3Q19 stencil is chosen, i. e. 19 discrete velocities (Q = 19) in three dimensions (D = 3) are used (Krüger et al. 2017). No-slip boundary conditions are prescribed at the obstacles as well as the top and bottom walls of the domain. A flow rate guaranteeing a low Reynolds number is prescribed in the form of velocity boundary conditions at the inlet and outlet. The simulations are performed using the LBM solver of the open-source software ESPResSo, cf. Arnold et al. (2013); Roehm and Arnold (2012). In order to justify the resolution for the LB simulation, a grid convergence study (using a quarter of the benchmark setup) is performed prior to the calculation of the permeability of the benchmark geometry, cf. Fig. 5.
In accordance with Sect. 5.1, at least 10 lattice cells are required to resolve the parabolic velocity profile in the pore space. Therefore, a height resolution of 10 lattice cells is chosen to minimise the numerical effort. For the radius r = 0.49 mm, the lattice resolution was increased to 40 cells in x_3-direction and only a subset of 4 × 1 cylinders, periodically repeated in x_2-direction, is considered. The permeability is estimated using the computed velocity and pressure values according to (9). The values are collected in Table 3.
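If the simulation is evaluated in lattice units, the permeability obtained from (9) carries units of the squared lattice spacing and is converted to physical units accordingly. The sketch below illustrates this conversion for the height resolution of 10 cells; the lattice-unit permeability is a placeholder, not a result of this study.

```python
# Permeability has the dimension of length squared, so a value obtained in
# lattice units converts to physical units via the lattice spacing dx.
dx = 0.091e-3 / 10     # lattice spacing for 10 cells over the height [m]
k_lattice = 2.0        # permeability in lattice units, placeholder
k_physical = k_lattice * dx**2
print(f"k = {k_physical:.2e} m^2")
```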
Micromodel Experimental Setup
The experimental setup comprises the micromodel, a syringe pump, the pressure sensors and a computer for data acquisition. The micromodel is manufactured out of polydimethylsiloxane (PDMS), following the principles of soft lithography (Xia and Whitesides 1998; Karadimitriou et al. 2013; Karadimitriou and Hassanizadeh 2012; Auset and Keller 2004). Using this technique, micromodels with a very low surface roughness, on the order of a few tens of nanometres, are produced. Therefore, the influence of the surface roughness on the permeability estimation of the benchmark problem is negligible. The micromodel's flow network is presented in Fig. 6. A CETONI neMESYS 1000N mid-pressure syringe pump is used for the introduction of the fluid, in combination with 2.5 ml glass syringes. The flow-induced pressure difference Δp between the inlet and the outlet of the flow network, cf. Fig. 6, is measured with two Elveflow MPS2 pressure sensors with a measurement range of 0-1 bar and acquired with a 16 bit data acquisition system (CETONI QMix I/O). The flow network is connected to the syringe pump and the pressure sensors via Teflon (polytetrafluoroethylene, PTFE) tubing with 1/16'' OD and 0.5 mm ID. The connected computer is able to control the syringe pump and communicate with the pressure sensors via a CETONI BASE 120 module, cf. Fig. 7.
Permeability Estimation
The intrinsic permeability of the flow network is estimated by imposing a flux and measuring the arising pressure drop Δp = p_2 − p_1, cf. Fig. 6. For this purpose Darcy's law (1) is employed. In order to increase the accuracy of the results and to get a better estimate of the measurement error, three runs of a series of different flow rates ranging from 1 to 7 µL/s are applied for one minute each, cf. Fig. 8, while the induced pressure is measured at the inlet and outlet, cf. Fig. 6.
The absolute pressure drop across the length of the flow network is directly proportional to the flow rate. Therefore, a linear regression was applied. Rearranging Darcy's law (1), we obtain the following expression for the intrinsic permeability,
k = μ q l / (A Δp) = μ l / (A s_pq), (13)
where A = bh is the cross-sectional area of the porous domain, l is the length of the porous domain, μ = 10−3 Pa s is the dynamic viscosity of water and s_pq is the slope (pressure over flow rate) of the linear regression. The permeability is calculated by equation (13), cf. Table 2 (l = 1.4 × 10−2 m, A = 9.1 × 10−7 m², s_pq = 3.9 × 10+11 Pa s m−3, k = 3.9 ± 0.2 × 10−11 m²). The corresponding Reynolds numbers range from 0.1 to 0.7, cf. Fig. 8b. The error bars in Fig. 8b are related to the pressure measurement. Note that apart from the quantifiable error in the pressure measurement, there are additional uncertainties in the procedure towards obtaining the permeability of the porous domain. The pressure drop caused by the triangular inlet and outlet domains is neglected in this case, cf. Fig. 6. Also, during the photo-lithography step of the manufacturing process, the photo-resist-covered silicon wafer is exposed to ultraviolet light under a mask. If the illumination source is not collimated, this will create a slope in the resulting photo-resist walls, which will later be transferred to the walls of the actual micromodel via the soft-lithography process. This bias in the process will eventually create channels in the flow network with a smaller cross-sectional area than the intended one, leading to a potentially smaller intrinsic permeability of the network in total. As expected, this is found in comparison to the homogenisation and pore-scale methods, cf. Fig. 11. Therefore, further studies of the full microfluidic setup (Fig. 6) are performed using LBM.
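The evaluation chain (linear regression of Δp over q, then Eq. (13)) can be summarised in a few lines of Python; the pressure values below are placeholders standing in for the measured data, only the flow-rate protocol and the geometry follow the text.

```python
import numpy as np

q = np.arange(1, 8) * 1e-9                 # flow rates 1..7 microlitre/s [m^3/s]
dp = np.array([0.42, 0.84, 1.25, 1.68, 2.10, 2.52, 2.94]) * 1e3  # [Pa], placeholder
s_pq, _ = np.polyfit(q, dp, 1)             # slope of the regression [Pa s / m^3]
mu, l, A = 1e-3, 14e-3, 10e-3 * 0.091e-3   # viscosity, length, cross-section
print(f"s_pq = {s_pq:.2e} Pa s/m^3, k = {mu * l / (A * s_pq):.2e} m^2")
```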
Numerical Study of the Microfluidic Setup
For the simulation of the entire microfluidic device with the inlet and outlet, the computer-aided design (CAD) file was discretised with about 176 million lattice cells. This resolution was chosen based on our previous comprehensive mesh convergence study (Jain et al. 2016). For this simulation the Musubi LBM solver (Klimach et al. 2014) is chosen since this feature is currently not implemented in the ESPResSo framework. Simulations are executed using 512 CPU cores of the compute cluster installed at the Institute for Computational Physics at the University of Stuttgart. The boundary conditions are prescribed according to the experiments with a flow rate of 1 µL/s, which translates to a velocity boundary condition at the inlet and a zero-pressure condition at the outlet.
The pressure drop across the full device (Δp = p_2 − p_1) was calculated as Δp_LBM = 3.96 mbar from the LBM simulations, which is slightly different from the experimental measurement Δp_exp = 4.2 ± 1.2 mbar, but well within the error margin. This indicates that the inclusion of the inlet and outlet in the numerical simulations results in a better agreement with the experimental values, compared to the LBM simulations without the inlet and outlet (Fig. 5).
To further quantify the role of the inlet and outlet, we analysed the flow physics in these regions and their effect on the flow field within the porous structure. As can be observed from Fig. 9c, the inlet geometry leads to an uneven flow distribution not only in the inlet itself, but also reaching into the array of cylinders. These observations are reflected in Fig. 10a, which shows the normalised pressure across the micromodel geometry. The pressure values are obtained by averaging across various x_2-x_3 cross-sectional planes placed between every column of cylinders in the porous domain, and at a distance of 1 mm in the inlet and outlet domains. The dotted vertical lines indicate the beginning and end of the porous domain corresponding to M1 and M2 in Fig. 9a.
It can be clearly seen from Fig. 10a that the disturbed flow from the tortuous inlet causes a significant pressure drop even after the first array of cylinders, and the outlet has the same effect up to the second-to-last array of cylinders (highlighted with grey vertical bars). To further elucidate the role that these irregularities play in the permeability estimates, we calculate the stepwise pressure drop Δp_i at the cross-sectional plane i as a central derivative, i. e. Δp_i = (p_{i−1} − p_{i+1})/2. The results are shown in Fig. 10b and confirm that the effects of the inlet and outlet channels carry over into the regular domain about one column deep before diminishing. Quantitatively, the permeability computed from the pressure drop between M1 and M2 is 17.4 × 10−11 m², while the permeability computed from the linear pressure-drop region between the second and second-to-last column of cylinders is 15.9 × 10−11 m². This suggests that even if the pressure were measured directly in front of the regular domain in the experiment, there would still be a systematic difference to models assuming a periodic repetition of cylinders in x_1-direction. This difference, however, is of the order of the variation of the results provided by the numerical and semi-analytical methods, cf. Table 3, and can therefore be considered minor. Due to the current design of the micromodel, the exact pressure drop of the main porous domain cannot be measured in the experiment, making a more detailed quantitative comparison with simulations of the benchmark problem impossible. Furthermore, despite a detailed grid convergence study there are non-quantifiable errors in any numerical method that can contribute to the discrepancies with experiments. It is remarked that numerical dissipation in LBM, even at the scale of the grid spacing, and the numerical dispersive effects are smaller compared to other second-order accurate methods (Marié et al. 2009). It can thus be inferred that the LBM simulation, which captures the whole experimental setup, achieves very good agreement with the experimental measurements.
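The stepwise evaluation of the pressure profile is a plain central difference; a short sketch with placeholder (normalised) pressures between the cylinder columns is given below.

```python
import numpy as np

# Cross-sectionally averaged pressures p_i between the columns (placeholder),
# evaluated with the central difference dp_i = (p_{i-1} - p_{i+1}) / 2.
p = np.linspace(1.0, 0.0, 15)     # normalised pressures, placeholder
dp = (p[:-2] - p[2:]) / 2         # central differences for the interior planes
print(np.round(dp, 3))
```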
Comparison of Methods
The results from all applied approaches are collected in Table 3 and visualised in Fig. 11. It stands out that for r = 0.4 mm the 3D and pseudo 2D/3D homogenisation approaches as well as all pore-scale models provide similar permeability estimates.
An analysis of the homogenisation and the pore-scale approaches shows that the no-slip boundary conditions in x_2-direction do not influence the results; thus the width of the benchmark problem is chosen to be large enough. In contrast, the no-slip boundary condition in x_3-direction has an influence on the estimated permeability. This can be seen in the difference between the permeability values obtained from the Kozeny-Carman relation and the classical 2D homogenisation and the ones obtained by the other methods. The 2D homogenisation assumes no frictional effect from the top and bottom boundaries, which is obviously a crude assumption for the considered case. To evaluate the influence of the frictional effects, a corresponding FEM simulation for cylinders (r = 0.4 mm), periodically repeated in all directions (infinite height), yields a permeability of k = 1.9 × 10−9 m². This value is much closer to the value k = 3.9 × 10−9 m² obtained from the Kozeny-Carman relation (2), cf. Sect. 3, and the value k = 1.8 × 10−9 m² obtained from the 2D homogenisation, cf. Sect. 4.2, which do not account for the frictional effects of the surrounding walls.
As mentioned in Sect. 3, the Kozeny-Carman constant c_kc can often be fitted in practical applications. If a fit of the parameter c_kc towards the 3D homogenisation permeability values is carried out for this benchmark problem, then c_kc = 167 gives a least-squares fit, or c_kc = 133 minimises the 2-norm of the logarithm of the permeability values. However, even with a fitted c_kc one would find that the curve still has the wrong shape. Therefore, relationships like Kozeny-Carman fail to describe the evolution of the permeability even for simple porous media.
The 3D homogenisation and the pseudo 2D/3D homogenisation approaches consider the frictional influence in x_3-direction, while the other homogenisation methods do not. In particular, for the 3D approach no additional simplifications are made, making it the most accurate homogenisation approach, which can be used as a reference solution for the other homogenisation approaches. For decreasing pore-space passages (r → 0.5 mm), the frictional influence is relatively reduced and, thus, the permeability estimate of the classical 2D homogenisation approach tends towards the 3D one. Homogenisation approaches are computationally more efficient than pore-scale simulations, but they are only applicable to periodic microstructures. Pore-scale models provide accurate, detailed local information, such as 3D velocity and pressure fields, also in the case of arbitrary geometries. We compare simulation results obtained by the pore-scale models and present in Fig. 12 the velocity profiles of FEM, SPH and LBM along the pore throat in the centre of the domain (x_1 = 7.5 mm, x_2 ∈ [4.9, 5.1] mm). The considered pore-scale methods deliver very similar velocity profiles. All models work with different driving forces; therefore, the normalised rather than the absolute velocities are compared.
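The fitting exercise mentioned above can be reproduced in a few lines; in the sketch below the reference permeabilities are placeholders standing in for the 3D homogenisation values of Table 3, so the fitted constants will differ from the quoted c_kc = 167 and c_kc = 133.

```python
import numpy as np
from scipy.optimize import least_squares

r = np.array([0.35, 0.40, 0.45, 0.47, 0.49]) * 1e-3   # radii [m]
phi = 1.0 - np.pi * (r / 1e-3) ** 2                    # porosities
k_ref = np.array([30.0, 17.0, 5.0, 2.0, 0.5]) * 1e-11  # reference values [m^2], placeholder

def kc(c):
    sigma = 2.0 / r
    return phi**3 / (c * sigma**2 * (1.0 - phi)**2)

fit = least_squares(lambda c: kc(c[0]) - k_ref, x0=[5.0])
fit_log = least_squares(lambda c: np.log(kc(c[0])) - np.log(k_ref), x0=[5.0])
print("least-squares c_kc =", fit.x[0], " log-fit c_kc =", fit_log.x[0])
```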
For the considered benchmark problem, the use of a 1D Darcy law is reasonable to estimate the scalar-valued intrinsic permeability. However, if certain 3D effects play a role, they can be extracted from pore-scale simulations in further studies, e. g. to estimate an anisotropic permeability tensor. As seen in other comparison studies using 3D CT scans of real rock samples, further uncertainties in terms of rock heterogeneities are expected to affect the estimated permeability results (Guibert et al. 2015; Zhang et al. 2019; Song et al. 2019).
Conclusions
The intrinsic permeability k of the benchmark problem (r = 0.4 mm) was estimated by the applicable approaches (classical 3D and pseudo 2D/3D homogenisation as well as all pore-scale models) to lie in the range between 16.4 × 10−11 m² and 17.7 × 10−11 m², cf. Table 3 and Fig. 11. In addition to the benchmark problem, further pore structures with different radii were studied and the results were discussed with regard to the individual strengths and limitations of the used methods. The experimentally determined permeability is underestimated in comparison with the computational approaches that consider the regular domain only and not the inlet and outlet channels between the location of the pressure measurement and the porous domain; this is due to the design of the PDMS micromodel. However, it is consistent with our expectations and confirmed by the performed numerical LBM study of the microfluidic setup. The Kozeny-Carman relation does not provide a good estimate for the permeability. The 3D homogenisation gives accurate information about the permeability, while simplified 2D approaches have a limited range of applicability. Classical 2D homogenisation yields an overestimation of the permeability due to the neglected wall effects at the top and bottom boundaries of the porous domain. Pore-scale resolved numerical simulations provide accurate estimates of the permeability, but are limited by computational costs.
Conflicts of interest There are no conflicts of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Printed Multilayer Piezoelectric Transducers on Paper for Haptic Feedback and Dual Touch-Sound Sensation
With a growing number of electronic devices surrounding our daily life, it becomes increasingly important to create solutions for clear and simple communication and interaction at the human machine interface (HMI). Haptic feedback solutions play an important role as they give a clear, direct link and response to the user. This work demonstrates multifunctional haptic feedback devices based on fully printed piezoelectric transducers realized with functional polymers on a thin paper substrate. The devices are flexible, lightweight and show a very high out-of-plane deflection of 213 µm at a moderate driving voltage of 50 Vrms (root mean square), achieved by an innovative multilayer design with up to five individually controllable active layers. The device creates a very clear haptic sensation on the human skin with a blocking force of 0.6 N at the resonance frequency of 320 Hz, which is located in the most sensitive range of the human fingertip. Additionally, the transducer generates audible information above two kilohertz with a remarkably high sound pressure level. Thus the paper-based approach can be used for interactive displays in combination with touch sensation, sound and color prints. The work gives insights into the manufacturing process, the electrical characteristics, and an in-depth analysis of the 3D deflection of the device under variable conditions.
Introduction
Our daily life is more and more surrounded and influenced by complex, multifunctional electronic devices. These devices are often controlled by human machine interfaces (HMI) giving options for input and output signals. Commonly, visual and acoustical feedback is used. An active tactile feedback to create additional or alternative haptic sensations at the point of interaction has rarely been incorporated into electronic devices so far. However, there has been a growing interest in the field of haptics over the last decade, mainly driven by the wide market penetration of touch screens and the latest developments in virtual environments and wearables [1,2].
A sense of touch can be conveyed within device-generated environments; the devices used for this are referred to as haptic devices. As a result, virtual objects appear real and tangible when touching them. Haptic technology makes it possible for the user to interact with a virtual environment through the sense of touch, making use of forces, vibrations, or motions applied to the user. Such mechanical simulation contributes to the creation of virtual objects, the control of virtual objects, and the augmentation of the remote control of different machines and devices [3]. There have been numerous attempts to understand and apply haptic abilities for both humans and machines. This can be seen in the large number of activities in disciplines like robotics and telerobotics, computational geometry and computer graphics, psychophysics, cognitive sciences, and neurosciences [4]. Additionally, haptic devices can find application in more general areas related to virtual environments, such as medicine [5], gaming [6,7], robotics [8], communication [9], mobile devices [10], data visualization and multi-user environments [11].
In particular, polymers are very attractive for the realization of flexible electronics, e.g., to be used as electronic skin, interface for HMIs, physiological signal monitoring, and so on [12].
In this context, piezoelectric polymer actuators are devices made of smart materials capable of undergoing deformations in response to suitable electrical stimuli. They represent an emerging class of electromechanical drivers. Piezoelectric polymer actuators show functional and structural properties that have no equal among traditional actuation technologies (namely electrostatic and electromagnetic), such as large active strains in response to driving voltages, high power density, high mechanical compliance, structural simplicity and versatility, scalability, no acoustic noise, and, in most cases, low costs [13][14][15]. Polyvinylidene fluoride (PVDF) and its co- and ter-polymers can be seen as among the best candidates for mechanical and acoustic sensors, actuators, energy harvesters and non-volatile memory applications [16]. Novel inorganic ferroelectrics, like lead-free Bi0.5Na0.5TiO3 (BNT) and antimony sulfoiodide nanowires, also show remarkable piezoelectric properties. However, they show limitations for simple processing technologies and typically require high sintering temperatures that are not compatible with most flexible substrate materials [17,18].
With respect to haptic applications, Ju et al. demonstrated tactile feedback on flexible touch screens; accordingly, they designed and fabricated transparent relaxor ferroelectric polymer film vibrators with solution-processed but non-printed poly(vinylidene fluoride-trifluoroethylene-chlorotrifluoroethylene) (P(VDF-TrFE-CTFE)) as the active layer. This tactile-feedback touch screen had a natural frequency designed to be around 200-240 Hz, close to the highest sensitivity range of the human fingertips, which is around 300 Hz as reported by Gescheider et al. [19]. Thus, in a laminate structure combining touch sensors, the film vibrator and a flexible display panel, an advanced user experience can be created [20].
For a cost-effective production and the realization of large-area flexible actuators, the combination of organic actuator materials and additive printing technologies can play an important role. Poncet et al. presented the study of screen printed P(VDF-TrFE) based haptic circular buttons, providing force restitution or vibration sensations when touched by the user causing a tactile sensation on the human fingertips [21]. The first resonant mode frequency of an 18 mm diameter membrane was simulated to be 1.937 kHz with a deflection of 1.5 µm at 6 V rms . Digital inkjet printing was the method selected for the manufacturing of piezoelectric P(VDF-TrFE) layers [22].
Our group demonstrated fully mass printed large-area piezoelectric actuators on the basis of electroactive polymers (EAP) on thin foil and paper substrates for flexible loudspeakers [18][19][20][21]. Recently, a truly fully roll-to-roll manufacturing line for paper embedded transducers was presented [23].
In this report, the low-cost mass printing approach was used as a manufacturing route for powerful large-area multilayer piezoelectric actuators for haptic sensations. For the first time, haptic feedback devices on environmentally friendly, flexible and lightweight paper substrate with an individually controllable multilayer architecture are demonstrated, showing an extremely high deflection of up to 213 µm at a resonance frequency of 320 Hz, which fits the range of highest fingertip sensitivity. This novel approach generates a blocking force of 0.6 N for the haptic feedback, which is sufficient to generate an indentation on the human skin. To improve the user sensation, the haptic feedback can be combined with audible sound information as the transducers show a high sound pressure level (SPL) above two kilohertz. To complete the multifunctional approach, and to be able to combine feedback and perception, the device can also be used as a touch sensor.
Device Set-Up and Printing
The printed piezoelectric transducers were manufactured in a similar way as described previously [24]. However, for the multilayer approach of this work, the printing design and the sequence of printing were changed. In brief: the basis of the transducers is a conventional, glossy coated paper substrate with a thickness of 67 µm and a grammage of 90 g m−2 (Maxigloss, IGEPA group GmbH & Co. KG, Hamburg, Germany). The device set-up included a layer sequence with up to six electrodes and up to five piezoelectric layers printed alternately one upon the other. The electrodes were made of a water-based poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) ink (Clevios SV4, Heraeus Deutschland GmbH & Co. KG, Hanau, Germany), resulting in a dry layer thickness of ~500 nm each. For the piezoelectric layers, P(VDF-TrFE) (75:25 mol%) (FC25, Piezotech Arkema, Pierre-Benite Cedex, France) powder was dissolved in a high-boiling-point solvent to prepare a screen-printing-compatible ink with good film-levelling properties. Depending on the target layer thickness (~7 µm), the co-polymer concentration of the ink can be varied between 15 and 20 wt%. The PVDF co-polymer was selected as it creates a much higher deflection at a comparably low electric field strength below 20 V µm−1 compared to ter-polymers, which is important for usability [25].
In contrast to other publications on printed piezoelectric multilayer devices [26,27], the layer design offers the opportunity for individual electrical connection and contacting of each electrode layer. To achieve this, the design included contact pads (15 × 15 mm²), which are shorter than the edge length of the quadratic electrode areas (50 × 50 mm²) themselves. Furthermore, the design is rotationally symmetric, which gives the possibility of printing all electrodes for transducers with up to five active layers with two printing forms only. The design of all piezoelectric layers was identical with a size of 60 × 60 mm², providing tolerance against misalignment errors and safety against shorts of the electrodes, especially at the edges. A schematic illustration of the design and the layer configuration of the multilayer device is shown in Figure 1.
All contact pads were supported with respect to their electrical conductivity by printing a silver ink (Dupont 5028, Dupont Ltd., Bristol, U.K.) on top of the conductive polymer. The thickness of the silver pads was measured to be ~5 µm.
All printing steps were carried out at the semi-automatic sheetfed screen printing press Ekra X1-SL (Ekra Automatisierungssysteme GmbH, Bönnigheim, Germany) with video assisted but manual alignment of the samples before each printing step. After each printing step, the layers were dried in a hot-air convection oven. The drying conditions for PEDOT:PSS, P(VDF-TrFE) and silver ink were 130 °C/5 min, 135 °C/10 min and 130 °C/5 min, respectively.
Poling
The crucial advantage of the developed design is the possibility to control each active layer individually. This includes the poling procedure, which is necessary to get a high remnant polarization of the P(VDF-TrFE) layer. Poling was conducted with the help of a so-called Sawyer-Tower circuit by applying a high-voltage (HV) signal with triangular waveform and a frequency of 1 Hz. For this, samples were connected with a HV supply (Trek 5/80-HS, Acal BFi Germany GmbH, Dietzenbach, Germany), which was controlled by using a function generator (TGA1244, Telemeter Electronic GmbH, Donauwörth, Germany) and an HV resistor of 987 Ω. The current flowing through the circuit was determined by measuring the voltage of the resistor using an oscilloscope (DSO-X 2004A, Agilent Technologies Inc., Santa Clara, CA, U.S.A.) via a 100:1 probe (10076B, Agilent). The poling set-up is depicted in Figure 2a.
More precisely, two positive cycles were followed by two negative cycles, which allow the polarization-electric field (P-E) hysteresis loop to be calculated by measuring the total current during the first cycle, including the parasitic parts (leakage and charging currents), and subtracting them with the help of the current measurement (only the parasitic current remains) during the second cycle. A maximum electric field of 100 V µm−1 was applied to the individual layers, which is much higher than the coercive field and a typical value to achieve the high remnant polarization of printed P(VDF-TrFE) layers. It is important to mention that the stepwise individual polarization of single layers strongly reduces the risk of electrical breakdowns damaging the devices, in contrast to the parallel polarization of all active layers within a standard multilayer device.
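The parasitic-current correction described above can be summarised as follows; the sketch below uses synthetic placeholder currents and the 50 × 50 mm² electrode area, and is not the evaluation software used in this work.

```python
import numpy as np

def polarization(t, i_first, i_second, area):
    # Subtract the purely parasitic current of the second cycle from the total
    # current of the first cycle and integrate over time, normalised by the
    # electrode area, to obtain P(t) in C/m^2.
    i_pol = i_first - i_second
    return np.cumsum(i_pol[:-1] * np.diff(t)) / area

t = np.linspace(0, 1, 1000)                       # one 1 Hz poling cycle [s]
i_first = 1e-6 * np.sin(2 * np.pi * t) + 2e-7     # total current [A], placeholder
i_second = np.full_like(t, 2e-7)                  # parasitic current [A], placeholder
print(polarization(t, i_first, i_second, area=0.05 * 0.05)[-1])
```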
3D Vibrometry
To determine the vibration mode and the frequency of the first resonance frequency, non-contact scanning measurements using a Scanning Vibrometer (PSV-500-3D, Polytec GmbH, Waldbronn, Germany) were carried out. The expanded uncertainty for the measurements is 3% as stated by the manufacturer.
For this, a swept sine wave ranging from 100 Hz to 1000 Hz with 1 s in length and an amplitude of 0.2 V was produced by the integrated waveform generator of the measurement system. The signal was amplified with a factor of 70.7 to achieve a voltage with an RMS value of 10 V rms . The resulting surface displacement was measured using the laser scanning vibrometer system with a bandwidth of 4 kHz. Using three scanning heads, the vibration was investigated in all three spatial directions for a large part of the active area of the sample (approx. 36 × 40 mm 2 ).
For the displacement measurements, the same set-up was used. However, a fixed sinusoidal signal with the resonant frequency and a voltage of 10-50 V rms was applied. The individual layers are connected in parallel for all experiments.
Results and Discussion
To better evaluate the electrical and mechanical properties of the actuator, different experiments are carried out. Firstly, the polarization of each individual layer is measured to show the differences between them. Furthermore, the capacitance of each layer, and also the complete multilayer device as well as the dissipation factor, is measured. Secondly, for the mechanical properties, the resonance frequency of the actuator is determined and measurements for different driving voltages and a varying number of active layers are carried out. Lastly, to get a better understanding for the use of the printed multi-layered actuator as a haptic feedback device, experiments to determine the blocking force of the actuator are conducted.
Poling and Dielectric Spectroscopy
After finalizing all printing and annealing steps, all manufactured devices were polarized layer by layer to achieve high piezoelectricity of the P(VDF-TrFE) films. A high remnant polarization of the piezoelectric layers is an important prerequisite to achieve a high deflection of the actuator [24]. Figure 2b shows the individual P-E loops of printed multilayer devices with five active layers. Interestingly, there is a clear trend visible from the top layer (Figure 2c, Layer 5) to the bottom (Layer 1): the individual P_r values decrease from the fifth layer (P_r ~72 mC m−2) to the first layer. In particular, between the top layer and the fourth layer, there is a large decrease in P_r of ~10 mC m−2. The further decrease is comparably small; from layer 4 to layer 1 the decrease was measured to be ~4 mC m−2. This result occurred for all tested multilayer samples independent of the number of functional layers. For single-layer devices, the highest remnant polarization was measured, which can be explained by the single active layer also being the top layer in this case.
It is well known that the crystallinity of P(VDF-TrFE) films is highly temperature sensitive [28]. To achieve high crystallinity, careful annealing slightly above the Curie temperature of the solution-processed layers is necessary. For the selected co-polymer with a PVDF:TrFE ratio of 75:25, an annealing temperature of 135 °C is recommended [29]. However, the annealing temperature was the same for all samples prepared for this study. Further influence could come from differences in the total annealing time and the number of annealing cycles, as multilayer devices were annealed as often as the number of printed active layers in addition to the number of printed electrodes. However, to the best of our knowledge, such a clear difference has not been published before. Hence, further investigations, especially with respect to the morphology of the involved layers, should be carried out in succeeding studies to achieve a maximized remnant polarization throughout all active layers, which would improve the overall performance of the multilayer actuator.
The frequency spectra of the capacitance after poling of all individual layers and of their parallel connection are shown in Figure 3. The small deviation of the curves clearly indicates the high layer-to-layer homogeneity with respect to the thickness and dielectric properties of the individual P(VDF-TrFE) layers and the conductivity of the electrodes, which is important to achieve highly efficient superposition of the forces generated by each individual active layer [24]. The decrease in capacitance for frequencies above ~2000 Hz can be attributed to the comparably low conductivity of the polymeric PEDOT:PSS electrodes, as explained in detail in our former publications [30,31]. As the first electrode layer is printed directly on the paper surface, its conductivity is the lowest in comparison to the other layers. Hence, the decrease in capacitance occurs earliest for this layer. Basically, it should be noticed that the influence of this decrease is small for applications in the low-frequency range like the proposed haptic feedback and touch sensation paper. For the determined resonance frequency, a capacitance of 20.1 ± 0.4 nF per layer and a dissipation factor (DF) of 0.13 were measured. DF is defined as DF = tan(90° − φ), where φ is the phase shift between current and voltage.
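For orientation, the relation between the measured quantities is straightforward: with N nominally identical layers connected electrically in parallel, the total capacitance is N times the single-layer value, and DF follows from the measured phase shift. The phase value in the sketch below is a placeholder chosen to be consistent with DF ≈ 0.13.

```python
import math

C_layer, N = 20.1e-9, 5             # measured capacitance per layer, number of layers
phi_deg = 82.6                      # phase shift between current and voltage [deg], placeholder
C_total = N * C_layer               # layers in parallel add up
DF = math.tan(math.radians(90.0 - phi_deg))
print(f"C_total = {C_total * 1e9:.1f} nF, DF = {DF:.2f}")
```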
Resonance Frequency and Deflection under Variable Conditions
The following section presents the results of the 3D vibrometer measurements to gain insight into the vibration behavior of a five-layer multilayer transducer as shown in Figure 1. At first, the resonance frequency of the device mounted and clamped between two stiff plastic frames (Figure 4a) was analyzed. The transfer function of the resulting displacements in all three spatial directions when working with only one of the active layers (sine wave sweep signal with 50 Vrms applied to electrodes 1 and 2) is depicted in Figure 4b. The peak clearly shows the resonance frequency of the transducer system at approx. 320 Hz. Between the z-axis (out of plane) and the x- and y-axes, the difference in amplitude is 47 dB and 42 dB, respectively, which indicates a clear dominance of the vibration in the intended direction. The measured value is in good accordance with our own analytical calculations based on the following equations provided by Ju et al. [20] for the natural frequency ωn of multilayer piezoelectric transducers with N piezoelectric layers.
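Equation (1) itself did not survive in this copy of the text. A plausible generic form for the natural frequency of such a layered structure, given here as an assumption consistent with the symbols described below rather than as the authors' exact expression, is

ωn = βn² · sqrt( (YI)eff / (ρh)eff ),   with (YI)eff = Σi Yi Ii and (ρh)eff = Σi ρi hi,

where Yi and Ii would be the Young modulus and the per-unit-width second moment of area of the i-th layer about the composite neutral axis.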
Here, (YI)eff and (ρh)eff are the effective flexural modulus per unit length and the effective weight per area, respectively; ρi and hi represent the density and thickness of the i-th layer, and βn is the eigenvalue of the n-th vibration mode determined by the boundary condition.
For the realized device geometry and taking 5.7 GPa, 3.6 GPa [32] and 1.0 GPa [20] as the Young modulus and 1389 kg m −3 , 1800 kg m −3 and 1011 kg m −3 as the density for the used paper, the P(VDF-TrFE) layers, and the PEDOT:PSS layers, respectively, a natural frequency for the first natural mode of 344 Hz was calculated. Moreover, the Young modulus of paper is highly anisotropic due to the direction of the paper fibers [30], which was not considered in the calculations. As expected, no clear change of the resonance frequency in relation to the number of driven piezoelectric layers was observed.
In the following paragraph, the results of the analysis of the vibration measurements of such a multilayer piezoelectric transducer with five individually controllable active layers driven at the determined resonance frequency are given. The aim of the experiments was to investigate the influence of the applied voltage and the number of excited layers on the displacement achievable with the sample. For a better understanding of the influence of the voltage, the values were varied from 10 Vrms to 50 Vrms in 10 V increments. Additionally, the measurements were repeated for 1-5 controlled active layers.
The resulting deflection shapes of the first (320 Hz) and second harmonic (640 Hz) oscillation of the sample excited at the resonance frequency are shown in Figure 5. The (1,1) mode in the z-direction, centered on the middle of the sample, is clearly visible for the first harmonic oscillation. For the second harmonic oscillation, the (1,2) mode with one node is present. Its amplitude is strongly reduced, which is important to keep the harmonic distortion low. The vibration in the x- and y-direction is negligible and no vibration at the edges of the sample was observed due to the plastic frame used for clamping the sample. It could be confirmed that the expected mode shape for a fully clamped square film occurs for our paper-based design as well. The deflection shape did not change for different voltages or different numbers of parallel connected excited layers.
The values are summarized in Table 1. The highest achieved vibration amplitude is 213 µm, which corresponds to an acceleration of 861 m/s². A near linear relation between the voltage and the number of active layers can be detected, at least for driving voltages of up to 40 Vrms. The incremental slope of the curves for 10 Vrms to 40 Vrms ranges from 0.55 to 0.78 µm V−1 per layer. Only for the excitation with 50 Vrms is the slope higher than for the other voltages, at 0.85 to 1.36 µm V−1 per layer. For 50 Vrms a saturation of the resulting displacement seems to start in the case of four and five driven piezoelectric layers, probably due to the limited device elasticity. There is also a voltage linearity of the displacement, as shown in Figure 6b. For higher voltages, the displacement does not follow the linear increase anymore and a stronger increase becomes obvious. Such a behavior was already visible in the work of Ju et al., though at a much smaller displacement level [20].

Table 1. The out-of-plane displacement for different driving voltages and numbers of active layers at the resonance frequency. Additionally, the incremental slope is given.

To the best of our knowledge, the achieved out-of-plane displacement of more than 200 µm, at a supply voltage of 50 Vrms, is the highest reported for polymer based piezoelectric haptic actuators, which is the result of the combination of the high performance of the printed layers, especially the piezoelectric one, the geometry of the substrate, and the low thickness and mechanical properties of the paper substrate. Table 2 gives a comparison to the work of other research groups and a commercially available device [32].
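As a quick cross-check of the quoted acceleration (a sketch assuming the 213 µm value is the peak displacement amplitude of a sinusoidal oscillation at 320 Hz):

import math

FREQ_HZ = 320           # resonance frequency from the text
AMPLITUDE_M = 213e-6    # peak displacement; assumed to be the 213 µm value above

# For sinusoidal motion x(t) = X*sin(w*t), the peak acceleration is w^2 * X.
omega = 2 * math.pi * FREQ_HZ
accel = omega ** 2 * AMPLITUDE_M
print(f"peak acceleration ~ {accel:.0f} m/s^2")   # ~861 m/s^2, matching the reported value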
Blocking Force
To use the transducer as a haptic device the generated blocking force is an important factor. To measure this, the displacement of the actuator needs to be completely blocked, i.e., it should work against a load with an infinitely high stiffness. To achieve this, the sample was mounted as shown in Figure 7. Under the sample, a square-shaped piece of poly(methyl methacrylate) (PMMA) was placed between the work surface and the sample. For the force measurement, a 1-axis force sensor (KD40s, ME-Meßsysteme GmbH, Hennigsdorf, Germany) with a nominal force of 100 N was placed on the sample. In order to ensure the high stiffness of the setup, the force sensor and the sample were preloaded with a steel plate, which was adjusted with screws. By doing this, a preload of 10 N was applied to the transducer. The testing rig used for measuring the blocking force is not optimal and should be optimized in future studies, but it should give an insight into the magnitude of the blocking force of the actuator. The results show that the transducer generates a maximum blocking force of approx. 0.6 N at the resonance frequency of 320 Hz when using a voltage of 50 Vrms. Linear behavior between the voltage and the generated blocking force is evident (Figure 8). Furthermore, as was to be expected, the number of active layers has no effect on the generated blocking force. The reason for this is that, although the individual layers are connected in parallel electrically, they are connected in series mechanically, so the resulting blocking force is independent of the number of active layers. The corresponding working graph, when all five layers are active, is shown in Figure 9. The two important parameters of the transducer, the free stroke as well as the blocking force, are shown for different driving voltages as the values at the intersections with the x- and y-axes, respectively. During the operation of the transducer, any displacement/force point on and below the curves is attainable.
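One convenient way to read such a working graph is as a straight working line between the free stroke and the blocking force at a given voltage; the sketch below assumes this linearity (a common idealisation, not a claim about the exact measured curves):

def attainable_force(displacement_um, free_stroke_um, blocking_force_n):
    """Maximum force still available at a given displacement, assuming the
    working line runs linearly from (0, blocking force) to (free stroke, 0)."""
    if not 0.0 <= displacement_um <= free_stroke_um:
        raise ValueError("displacement outside the working range")
    return blocking_force_n * (1.0 - displacement_um / free_stroke_um)

# Illustrative numbers from the text: 213 um free stroke and 0.6 N blocking force at 50 Vrms.
print(attainable_force(100.0, free_stroke_um=213.0, blocking_force_n=0.6))  # ~0.32 N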
Acoustical Performance
In addition to the vibration measurements, acoustical sound pressure level (SPL) measurements were performed in the audible frequency range from 100 Hz to 20 kHz. Figure 10a shows the frequency response of a multilayer transducer driven by a sinusoidal signal with 50 Vrms. The SPL was recorded at a distance of 10 cm. It is worth mentioning that, despite the flat mounting of the device, which is in contrast to our former publications on printed piezoelectric loudspeakers [23,24,30,31,34], and the compact size of the transducer, a remarkably high SPL of more than 90 dB was achieved for a wide frequency range from ~2 kHz to 17 kHz. For the haptics application, the low frequency range between 100 and 500 Hz is important. Here the SPL is much lower, which is actually beneficial for the proposed application as the audible noise is comparably small. However, the resonant frequency is clearly detectable via the SPL frequency response.
The SPL increases with increased driving voltage, as shown in Figure 10b, as expected. It should be noticed that the SPL includes harmonic distortions produced by higher mode oscillations (see also Figure 5). For the resonant frequency, the total harmonic distortion was measured to be ~50%. This value is much higher than the total harmonic distortion measured for frequencies above 500 Hz for printed piezoelectric speakers on paper, which is typically below 1% [34].
Small differences in the resonance frequency between the results of the laser vibrometer and the SPL measurements are due to the non-uniformity of the ambient laboratory conditions. In particular, the hygroscopic nature of the paper substrate can result in changes of the resonance frequency, as the water absorption depending on the humidity in the lab can influence the mechanical properties (e.g., the bending stiffness) of the paper. To get a clear image of this interesting effect, which is of importance for the addressed haptic application, the influence of the humidity on the resonance frequency and the SPL was evaluated in more detail. For this, the multilayer sample was placed in a climate chamber (KPK 200, Feutron Klimasimulation GmbH, Langenwetzendorf, Germany), keeping the temperature constant (23 °C) and varying the relative humidity between 25% and 80%. As can be seen in Figure 10c, the peak of the SPL indicating the resonance frequency shifted from ~290 Hz at 25% relative humidity to ~450 Hz at 80%. It is well known that paper absorbs water effectively, leading to a decrease in the Young modulus. However, other material properties like the density and the geometry can be affected as well. Hence, further studies are required to explain this result in accordance with Equation (1). Future work should also incorporate water-repelling coatings to avoid or reduce this effect and, e.g., the influence of moisture from the human fingertip.
To increase the interaction with the user, an interesting approach could be the combination of tactile and audible sensation [35]. To test this, a bitonal signal was applied to the transducer combining 320 Hz and 5 kHz sinusoidal signals with an amplitude of 47 Vrms. Figure 10d shows the Fast Fourier Transform (FFT) of the measured SPL. Clearly visible are the two first harmonics of the signals, with SPLs of 68.1 dB and 84.9 dB at 320 Hz and 5 kHz, respectively. Hence, the 5 kHz signal is very dominant and appears quite loud, while the 320 Hz signal produces a clearly detectable vibration. A video demonstrating the dual effect is given as Supporting Information.
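As an illustration of the bitonal drive concept, the sketch below synthesises such a two-tone signal and inspects its spectrum (the sampling rate, window length, and the assumption that the 47 Vrms applies per tone are illustrative choices, not specifications from the measurement):

import numpy as np

FS = 48_000                  # sampling rate [Hz]; assumed
N = FS                       # one-second analysis window; assumed
V_PEAK = 47.0 * np.sqrt(2)   # peak value of a 47 Vrms sine, applied here per tone

t = np.arange(N) / FS
# Bitonal drive: 320 Hz haptic component plus 5 kHz audible component.
drive = V_PEAK * (np.sin(2 * np.pi * 320 * t) + np.sin(2 * np.pi * 5_000 * t))

# Single-sided magnitude spectrum; the two drive tones dominate.
spectrum = 2 * np.abs(np.fft.rfft(drive)) / N
freqs = np.fft.rfftfreq(N, d=1 / FS)
for f0 in (320, 5_000):
    k = int(np.argmin(np.abs(freqs - f0)))
    print(f"{f0} Hz component: {spectrum[k]:.1f} V peak")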
As the transducer can also be used as a touch sensor by utilizing the direct piezoelectric effect (see Supporting Information Figure S1), a multifunctional touch display button with only one single printed device can be created.
Conclusions
In summary, fully screen printed multilayer piezoelectric transducers on paper with individually controllable active layers were demonstrated. The individual behavior of the piezoelectric polarization and the dielectric properties of each layer were shown. Examples of promising application areas for such thin and flexible transducers include haptics and touch sensations. Thus, in-depth 3D vibration measurements were realized. The resonant frequency for our device design is in the range of the highest sensitivity of the human fingertips, and the deflection is high enough to produce a strong sensation on the human skin at a moderate voltage level due to the multilayer set-up. The generated blocking force of the transducer is sufficient to cause a skin indentation and therefore a haptic sensation. Applying a bitonal signal combining the resonance frequency and an audio signal above 2 kHz creates a dual touch-sound sensation. Hence, the transducer can be used as an interactive touch button for flexible paper-based displays and HMIs. In view of using the technology for real-world applications, future work should deal with suitable encapsulation materials to reduce the influence of ambient conditions. Furthermore, HMIs typically consist of more than one touch button, hence sensor arrays are needed. For such arrays, the interaction of adjacent touch points on the flexible substrate, as well as the dependence of the lateral size on the resonance frequency, have to be analyzed in order to reduce coupling effects and to keep the good haptic properties, respectively.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/s22103796/s1, Figure S1: Sensing response of the multilayer device, stimulated by a human fingertip. | 9,948.4 | 2022-05-01T00:00:00.000 | [
"Physics"
] |
UNAM at SemEval-2018 Task 10: Unsupervised Semantic Discriminative Attribute Identification in Neural Word Embedding Cones
In this paper we report an unsupervised method aimed to identify whether an attribute is discriminative for two words (which are treated as concepts, in our particular case). To this end, we use geometrically inspired vector operations underlying unsupervised decision functions. These decision functions operate on state-of-the-art neural word embeddings of the attribute and the concepts. The main idea can be described as follows: if attribute q discriminates concept a from concept b, then q is excluded from the feature set shared by these two concepts: the intersection. That is, the membership q ∈ (a ∩ b) does not hold. As a, b, q are represented with neural word embeddings, we tested vector operations allowing us to measure membership, i.e. fuzzy set operations (t-norm, for fuzzy intersection, and t-conorm, for fuzzy union) and the similarity between q and the convex cone described by a and b.
Introduction
There exist nowadays a number of arithmetic vector operations for computing word relationships interpreted as linguistic regularities. A very popular setting is solving word analogies (Lepage, 1998), which is mainly used to evaluate the quality of word embeddings (Mikolov et al., 2013). Recently other alternatives to solve word analogies have been proposed (Linzen, 2016), including supervised methods (Drozd et al., 2016).
Solving word analogies requires three word arguments, and a fourth one is inferred. Such an inference arises from the similarity between common or similar contexts shared by the two pairs of words. Thus, given words "queen", "woman", "king", "man", the following arithmetic operation holds for their corresponding embeddings x (·) : x king − x man + x woman = x queen .
In this work, we explore similar approaches for Discriminative Attribute Identification (DAI). This task requires three word arguments a, b, q, and a binary label y ∈ {0, 1} is inferred from them (Cree and McRae, 2003; Lazaridou et al., 2016; McRae et al., 2005). Such a label indicates whether the third word, q, is identified as a discriminative (semantic) attribute between words (concepts) a, b. We observed that the task of identifying discriminative attributes between words, represented via word embeddings, evokes that of solving word analogies.
We propose geometrically inspired vector operations on word embeddings x a , x b , x q ∈ R n of the words a, b, q, respectively. The output of each of these operations is in turn operated by an unsupervised decision function aimed at predicting the label y. The decision functions are based on the reasoning given originally in (Lepage, 1998) for solving word analogies. Under this reasoning, the important thing is to look for those items shared by the objects compared, and verify whether the item of interest is included among them.
In other words, in the case of DAI, if we are asked whether x q , the attribute embedding, discriminates x a from x b , then an idea is to verify whether the attribute is contained in the set shared by the two concepts in question, i.e. does the set operation q ∈ (a ∩ b) hold? Our hypothesis is that x q discriminates x a from x b if the result of such an operation is false in terms of the subspace delimited by x a and x b , i.e. a convex cone. Thus, a number of vector operations and decision functions were tested as different vector versions of this set operation on state-of-the-art neural word embeddings.
The proposed method does not rely on language or knowledge resources (i.e. knowledge bases and graphs, PoS or any kind of taggers, etc.). Furthermore, with the help of the geometrical insight that our method provides, we also discuss its potential for studying measures of how concepts can be generated from attributes in the sense of vector space modeling of natural language. Thus, this study can be considered, e.g., for designing semantically driven word embedding methods or for exploring alternatives for building knowledge resource applications.
Our results showed that the proposed approach is coherent with respect to the semantic notions proposed in the DAI task. This approach reached an F-measure of 0.622 in predicting discriminative attributes.
Literature Review
To the best of our knowledge, there is no work proposing unsupervised methods for discriminative attribute identification or extraction with a direct link to word embeddings. Most related work deals with semantic relation extraction or with labeling semantic relations in lexical semantics, e.g. given a hypernym, to perform hyponym extraction (Fu et al., 2014).
There is also work on using semantic attributes to classify images of objects in a supervised fashion (Chen et al., 2012; Lazaridou et al., 2016). In this case, dictionaries of discriminative attributes of objects are used (e.g. fruits by their color or form), but experiments are not performed on text data, e.g., a snippet describing the object. In more applied settings, the use of dictionaries of object attributes has been shown to be a good approach in clothing recommender systems. These systems group images of items sharing attributes the customers are usually interested in, e.g. images of jackets with a hat (Chen et al., 2012; Kalra et al., 2016; Zhou et al., 2016).
Other contributions provide methods for object classification by using multiple data sources, including text. In (Farhadi et al., 2009, 2010; Lampert et al., 2009) supervised learning of semantic attributes and textual descriptions of objects is proposed. Their methods are aimed at generalizing recognition and (template) textual description of unseen objects with similar and shared attributes. In the particular case of (Berg et al., 2010), a supervised algorithm learns to label object attributes by fitting multinomial associations between text segments and recognized image segments as co-occurring objects within a Web corpus (Su and Jurie, 2012). After that, the learned attributes of the objects are detected and used as features for feeding an unsupervised method for categorizing images and text. In (Deeptimahanti and Sanyal, 2011; Overmyer et al., 2001) natural language descriptions of use cases of user requirements are parsed to obtain Unified Modeling Language (UML) diagrams including object attributes. This approach is aimed at facilitating design in software engineering by semi-automatically building code objects (Yue et al., 2011). Lepage (1998) proposed solving word analogies based on characters shared by words and sentences. We extrapolated such an idea to feature similarities, similarly to what (Mikolov et al., 2013) did on vectors for solving word analogies. Thus, our method attempts to take into account these two ideas in the following way. Thinking about neural word embeddings as vectors generated by axes of attributes, our approach is to observe the linear subspace A delimited by the embeddings of words a and b, and to see how the embedding of q is contained in it. This linear subspace has the properties of a convex cone. Thus, in geometrical terms, to assess whether an attribute can generate a pair of concepts or not, we propose to measure the degree, λ, to which the embedding x q is a convex combination of the embeddings x a and x b . This measure can be derived from the convex combination in Eq. (1) below.
A Convex Combination
x q = (1 − λ) x a + λ x b (1)

where λ ∈ [0, 1]. The embeddings x a , x b , x q ∈ R d represent nouns a and b and the query attribute q, respectively (see Figure 1). The requirement of x a , x b to describe a convex cone A is due to the fact that, geometrically, features shared by these embeddings would be enclosed within such a cone. This can be observed by testing extreme values in Eq. (1).
Assume all embeddings are normalized in magnitude. Let us make q, simultaneously, as far as possible from a and b while keeping the volume of A greater than zero, and let ⟨x a , x b ⟩ be small. In this scenario, the embeddings x a , x b delimit a cone of less than 90 degrees. Since the embedding x q is as far as possible from x a and x b while contained in A, it passes close to the center of the circular basis of the cone; thus, x q is roughly equidistant from x a and x b . This geometrical scenario indicates that q is shared equably by a and b, so it is not discriminative for them. In the case of ⟨x a , x b ⟩ ≈ −1, the set A is not convex. This is because x a and x b describe a unique line and have opposite directions (they are anti-parallel). This geometrical scenario prevents the pair of word embeddings from being generated by linear combinations of attributes in common to them (see x a and x ′ b in Figure 1). In the case when a, b are semantically very similar, we have that ⟨x a , x b ⟩ → 1. It means that both vectors are (almost) parallel, so they refer to concepts sharing most of their attributes. In this case, if x q is far away from both x a and x b , then determining discriminativeness makes no sense (probably x q is not an attribute of either of them).
The geometrical scenario of identifying a discriminative attribute q can occur when ⟨x a , x b ⟩ is small and either ⟨x q , x a ⟩ → 1 or ⟨x q , x b ⟩ → 1. For example, if ⟨x q , x b ⟩ → 1, it means that x q tends to be parallel to x b and we can see x b as a linear combination of x q . As ⟨x a , x b ⟩ is small (x a and x b are almost orthogonal), then ⟨x a , x q ⟩ is also small. Therefore, this analysis leads us to think that q discriminates a from b, and that q is an attribute of b rather than of a.
The Convex Cone Method
The scenarios depicted in Section 3 overall show how projections among word embeddings form convex combinations and how these projections can be exploited in DAI. Without loss of generality, these projections can be seen as distances. In this sense, the convex parameter λ in Eq. (1) indeed weighs distances involving x a , x b , x q . Now, notice that Eq. (1) expresses x q in terms of x a and x b . However, in DAI all three are known and we would like to measure the relationship among them given that they are d-dimensional vectors. This measure can be given by λ, which now becomes the unknown. In this case, λ acts as a bounded measure of how much a given pair of concepts a, b shares a given attribute q. Thus, by performing some comprehensive algebra starting from Eq. (1),
we arrive at the distance-based expression for λ in Eq. (2). Furthermore, in addition to (2), we consider an alternative distance criterion. That is, it is possible to measure distance in terms of arcs instead of straight line segments. Therefore, we have the arc (arcone) version of the convex parameter, Eq. (3), where ⟨x, x ′ ⟩ ∈ [−1, 1] given that ∥x∥ = 1 for all x ∈ S d (the unitary sphere). Both arcs cos −1 ⟨x, x ′ ⟩ in the numerator and in the denominator of (3) are in the interval [0, π].
The convex parameter λ measures the degree to which x q is a convex combination of x a and x b . From the point of view of the combination degree, rather than from the point of view of the absolute value of λ, some function f (λ) must be maximum at λ = 0.5 (see Figure 2). When it occurs, x q passes close to the axis of the cone A (so it also passes close to the center of the shaded circular area of radius 0.5∥x a − x b ∥ in Figure 1). Therefore, λ → 0.5 indicates that the attribute q is highly shared by both concepts a and b.
The extreme values of λ must be interpreted contrarily by f , i.e. λ → 0 means that, on the one hand, the attribute q uniquely characterizes (or generates) the concept a, so x a is approximately parallel to x q . On the other hand, λ → 1 means that the attribute q uniquely characterizes the concept b. Thus, we need some decision function f that takes advantage of the extreme values of λ to decide whether an attribute q is discriminative of a pair of concepts a and b.
Therefore, we define our decision criterion, Eq. (4), subject to some threshold δ ∈ [0, 1] (say δ = 0.7). If the upper inequality in (4) holds, it means either that λ → 0 or that λ → 1.0, so f (λ) = 1 and therefore the attribute q discriminates concepts a and b. Conversely, if the lower inequality in (4) holds, it means that λ → 0.5. Therefore, f (λ) = 0 and the decision function determines that q does not discriminate a and b. See Figure 2.
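Since the display equations (1)-(4) are not reproduced above, the sketch below assumes plausible forms that are consistent with the surrounding description (λ → 0 when q aligns with a, λ → 1 when q aligns with b, λ ≈ 0.5 when q is equally shared, and a decision band controlled by δ); it should not be read as the authors' exact formulas:

import numpy as np

def lambda_cone(xa, xb, xq):
    """Convex parameter from straight-line distances (assumed form of Eq. (2))."""
    da = np.linalg.norm(xq - xa)
    db = np.linalg.norm(xq - xb)
    return da / (da + db)

def lambda_arcone(xa, xb, xq):
    """Convex parameter from arc lengths on the unit sphere (assumed form of Eq. (3))."""
    arc_a = np.arccos(np.clip(np.dot(xq, xa), -1.0, 1.0))
    arc_b = np.arccos(np.clip(np.dot(xq, xb), -1.0, 1.0))
    return arc_a / (arc_a + arc_b)

def f_decision(lam, delta=0.4):
    """Decision function in the spirit of Eq. (4): q is judged discriminative
    when lambda is far from 0.5 (assumed rule: lambda < delta/2 or lambda > 1 - delta/2)."""
    return 1 if (lam < delta / 2 or lam > 1 - delta / 2) else 0

# Toy usage with unit-normalised random stand-ins for word embeddings.
rng = np.random.default_rng(0)
xa, xb, xq = (v / np.linalg.norm(v) for v in rng.normal(size=(3, 300)))
print(f_decision(lambda_arcone(xa, xb, xq)))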
Other Geometrical Methods
In addition to the convex cone method, we also tested mean-based, sum-based and fuzzy methods for quantifying the containment q ∈ (a ∩ b).
Similarity with Respect to the Sum and to the Mean
The sum-based method computes the resultant vector of x a and x b . The similarity between such a vector and the candidate attribute x q should be smaller than some threshold δ so as to consider that q discriminates a from b; this gives Eq. (5). Unlike the convex cone method, Eq. (5) indicates that the sum-based method directly measures the similarity between the resultant vector x a + x b and x q . The motivation of this operation is similar to that of the convex cone method. That is, x a + x b is an embedding representing what x a and x b have in common. Therefore, such an embedding is probably similar to x q if the latter is also common to x a and x b . The mean-based method follows exactly the same principle, but only requires multiplying x a + x b in (5) by 0.5.
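A minimal sketch of the sum- and mean-based rules (Eq. (5) is not reproduced above; the inner-product comparison against δ below is an assumption based on the description):

import numpy as np

def discriminative_by_resultant(xa, xb, xq, delta=0.4, use_mean=False):
    """q is judged discriminative when the inner product between the resultant
    (or mean) of xa and xb and the attribute xq falls below the threshold delta."""
    r = 0.5 * (xa + xb) if use_mean else (xa + xb)
    return 1 if float(np.dot(r, xq)) < delta else 0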
Similarity with Respect to a Fuzzy Connective
The fuzzy method computes the connective in Eq. (6) between the fuzzy intersection (min{·}) and the fuzzy union (max{·}) of the embeddings x a , x b (Zadeh, 1965). These set operations are known as Gödel's t-norm and t-conorm (Klement et al., 2013), respectively, and they are defined elementwise for vectors. α is known as the compensation parameter and controls the mixture between union and intersection. Thus, the connective acts as a convex combination of the fuzzy union and the fuzzy intersection operators, so α → 0 causes the intersection (min{·}) to vanish while the union (max{·}) survives. The contrary effect is induced if α → 1. Fuzzy set operations are conceptually more akin to the idea of observing whether the intersection set of concept attributes contains some query attribute. To contextualize word embeddings with fuzzy sets, we assume the embedding x a ∈ R d is given by a membership function x a = µ(A). Herein, A is the set of items in some subset (of cardinality d) of the contexts of the word a. We also assume that the subset of contexts was statistically estimated by the word embedding method, which is in this case thought of as the membership function defined on the set C ⊃ A of all contexts in the corpus, µ : C → S d .
As a first attempt to explore a relationship between fuzzy sets and word embeddings, in this paper we induced the bias α into a decision function f based on the inner product between the connective x {a,b} (a biased version of x a + x b ) and the query attribute x q . In this way, the DAI decision is made according to the threshold δ, as in Eq. (7), where α is the tolerance parameter of the fuzzy connective and must be set manually.
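A sketch of the fuzzy connective and its decision rule (the display equations (6)-(7) are not reproduced above, so the α-mixture and the inner-product comparison are reconstructions under the stated assumptions):

import numpy as np

def fuzzy_connective(xa, xb, alpha=0.5):
    """Elementwise Godel t-norm (min) and t-conorm (max), mixed by the compensation
    parameter alpha: alpha -> 0 keeps the union (max), alpha -> 1 keeps the intersection (min)."""
    return alpha * np.minimum(xa, xb) + (1.0 - alpha) * np.maximum(xa, xb)

def discriminative_fuzzy(xa, xb, xq, alpha=0.5, delta=0.4):
    """q discriminates a and b when its inner product with the connective is below delta."""
    x_ab = fuzzy_connective(xa, xb, alpha)
    return 1 if float(np.dot(x_ab, xq)) < delta else 0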
Experiments and Results
For our experiments, we computed our decision functions f (·) on tuples of the form {x a , x b , x q , y}. To this end, we used state-of-the-art word embeddings, i.e. Glove (Pennington et al., 2014), FastText (Bojanowski et al., 2016), Word2Vec (Mikolov et al., 2013) and Dependency-Based Word2Vec (DBW2V) (Levy and Goldberg, 2014). We also explored embeddings taking into account external knowledge. This is the case of ConceptNet embeddings (Speer and Lowry-Duda, 2017). DBW2V embeddings are W2V embeddings enriched by using syntactic dependencies, and ConceptNet embeddings are enriched with both syntactic dependencies and knowledge graphs (Faruqui et al., 2015). We trained W2V and FastText by using the Wikipedia dataset 2 . In the case of Glove 3 , ConceptNet 4 and DBW2V 5 we downloaded pretrained embeddings from the authors' websites. For Word2Vec and FastText we trained models of 200, 300, 400, 500 and 1000 dimensions. In our results we only report the dimensionality that performed best. As our approach is unsupervised, we report experiments on the validation dataset available on the competition's repository 6 . We can see in Table 1 that the arcone operation defined in (3) provided the best results for all word embedding methods. Our general best result was obtained by using Glove embeddings of 300 dimensions. We expected a good result from these embeddings as they specifically learn from mutual information statistics of word pairs. This enables Glove to encode feature contrasts, which also allows it to be the state-of-the-art method in word analogy tasks. During the competition we submitted our best configuration as a unique run (Glove 300d and the arcone operation with δ = 0.4), which gave us F 1 = 0.60 (place 19/26). Regarding the threshold δ of the decision functions f (·), we tested a set of values δ ∈ {0.0, 0.1, 0.4, 0.7, 1.0}. Our best result was obtained when δ = 0.4 for almost all embedding methods, except for DBW2V. This means that the convex parameter λ can vary 60% around the maximum (0.5) in order to consider that an attribute q is shared by (or generates) both concepts a, b. Thus, by evaluating δ in (4) we see either that x q is too biased towards x a if it holds that λ < 0.4(0.5) = 0.2, or that x q is too biased towards x b if it holds that λ > 1 − 0.2 = 0.8. In these cases we can say that q is discriminative for the concepts a, b as it is an attribute only (or mainly) of one of them.
Given that δ = 0.7 was needed for DBW2V, we inferred that these embeddings allowed much less bias from the center of the cone, and λ must be within 30% of its maximum in order to decide that an attribute is shared by two concepts. In other words, with DBW2V it is more difficult to distinguish whether the attribute q is discriminative of a, b because it is allowed to be distant from both of them even when it can be discriminative. This condition allows for much more feature overlapping, and therefore the bottom ranking of these embeddings can be explained.
Notice that the Euclidean version of the cone vector operation was the second best method for all word embedding methods. In fact, no difference was registered greater than 0.7% between cone and arcone operations.
The fuzzy approach did not show noticeable results, even when varying both the threshold and the compensation parameter.
Discussion
We consider the difference in performance of Glove with respect to knowledge-based (ConceptNet) and dependency-based (DBW2V) embeddings a bit surprising: 5.9% with respect to ConceptNet and 13.0% with respect to DBW2V. Such embeddings were expected to provide much more information about discriminative features because they are trained by taking into account semantic features explicitly, by using knowledge and language resources for training. By using our arcone vector operation, W2V was ranked barely below ConceptNet with a small difference of 0.17%. We think there are three possible motivations for this behavior. The first one is that the nature of our decision functions did not allow capturing the semantic features embedded into ConceptNet and into DBW2V. The second possibility is that semantic features are better embedded by Glove, and the third possibility is that embedding semantic features explicitly can lead to overfitting of the resulting word representations. This latter possibility could be an additional explanation for why DBW2V ended at the bottom of our ranking.
In the case of FastText, these embeddings have been tested in word analogy tasks with success. However, as in the case of DBW2V, they are better than W2V or Glove mainly for syntactic analogies, which probably makes FastText (and probably DBW2V) better suited for NLP tasks other than DAI, e.g. sentence representation (Arroyo-Fernández et al., 2017).
Some assumptions were made for practical reasons in the case of fuzzy set operations. We are aware that this could have drastically affected the results. The first assumption was that word embeddings were produced by membership functions, which take values in [0, 1] ⊂ R exclusively. This is not the case for word embeddings, and they cannot be directly mapped to identifiable textual items. Therefore, applying the t-norm and the t-conorm to these vectors is not completely intuitive. Nevertheless, with real-valued vectors we still had that the more both embeddings tend to lie in the same quadrant, the larger the magnitude of the connective embedding x {a,b} . This latter embedding is somewhat oriented towards the direction of the resultant x a + x b , which can be regulated by α, inducing a bias with respect to that direction. Although this interpretation was worth exploring, it did not give us interesting results. Thus, a better version of this fuzzy approach is pending.
At this moment, it is not clear to us why several of our results were contradictory with respect to the F-measure, particularly for distributed representations. That is, we have balanced binary labels in the gold standard, but some scores were below 50%. It is difficult to figure out how this happened by directly analyzing distributed representations. Therefore, proposing an alternative geometrical approach to tackle this inconsistency with respect to the main hypothesis of this paper remains an open issue.
Conclusions
The results of our experiments showed that the arcone vector operation is a simple method for quantifying discriminativeness. This operation was shown to correlate with the human judgments annotated in the validation dataset when Glove word embeddings were used. Of the vector operations presented in this paper, the arcone operation, Eq. (3), best represents the abstract operation between the sets a ∩ b = A. Notice that the concept of a cone is not limited to Euclidean metrics, whether on R d or on S d . Therefore, other kinds of transformations and related theories can be explored.
The effectiveness of our approach can be further explored as part of a learning algorithm aimed to obtain specialized (or enriched) word embeddings such that their geometrical structure is fitted in sets of convex volumes. An immediate experiment is using vector operations proposed in this paper as restrictions or as objectives for learning such embeddings for building knowledge resources. | 5,446 | 2018-06-01T00:00:00.000 | [
"Computer Science"
] |
Economic agency of women in Islamic economic philosophy: going beyond Economic Man and Islamic Man
Purpose – The purpose of this paper is to provide a gender-sensitive analysis of economic agency in Islamic economic philosophy. Design/methodology/approach – A critical review of classical ethics literature and the concept of khilafah is undertaken and discussed in conjunction with the current understanding of homo Islamicus. Findings – Building on the principles of khilafah, the concept of homo Islamicus is a pious stand-in for the flawed homo economicus. Among its flaws is the complete absence of a discussion of women as economic agents. To remedy this the discipline must acknowledge explicitly the denial of women and gender from the discussion of moral agency and include gender as a category of analysis for economic agency. This is only possible by: (1) introducing a non-patriarchal reading of khilafah as the model of agency and (2) operationalising taqwa as the cardinal virtue of the economic agent instead of neoliberal rationality. Research limitations/implications – If Islamic economic philosophy is to contend as an alternative mode of economics, it must consider gender and class dimensions in its micro-foundation discussion; economic agency is one of them. Originality/value – This study reveals the patriarchal readings that are part of the foundation of the concept of the economic agent in Islamic economics, problematising it and providing a gender-sensitive concept of economic agency.
Introduction
The last two decades of continuous global crises and increasing inequality have invited much intellectual discussion favouring diverse exploration in economic thinking in the field of political economy. An arena for such exploration is the discipline of Islamic economics. This paper aims at developing one of the micro-foundations of economic thinking in Islam, the economic agent. An idea of the economic agent or homo economicus, as termed in orthodox economics, already exists in Islamic Economic Philosophy (IEP) as homo Islamicus. However, it is more a discussion of the ideal "man" in Islam than a theoretical application of agency in the economy, and it clearly neglects women or any idea of gender. The concept of the economic agent in Islamic economics is constructed on the principles of khilafah: the notion that human beings are representatives of God on earth. This paper explores first the subgenre of ethics in Islamic philosophy where the idea of khilafah is developed. It provides a feminist reading of khilafah to avoid the exclusivist account produced by medieval ethicists in Islam. al-Ghazali, Tusi and al-Davani begin their treatises with the Quranic dictate that all men and women are equal before God, but the citizen and society that they formulate devolve into prescriptions of patriarchal dominance. For the ethicists, the perfect human is one who embodies the ethical perfection of a khalifah (the representative of God, whereas khilafah means representation) by disciplining one's character. However, women are excluded from this potential of ethical refinement due to the supposed inferiority of their intellects and the "complementary" role that they should play in the world. This is where the idea of "equals before God" is either entirely abandoned or corrupted by the ethicist's own patriarchal worldview.
This paper engages with this contradiction by utilising an emancipatory reading of the concepts of khilafah and tawhid (oneness with God or the Divine) as operationalised by scholars in Islamic feminism and racial emancipation within Islam. Through this analysis of ethics, theology and philosophy, a new framework emerges that is sensitive to gender. This framework provides a new understanding of the theory of the economic agent in IEP. When read in conjunction with the principles of tawhid, the khalifah epitomises the horizontal relationships that can be envisioned in the Divine One, considering the Oneness of the divine and the fact that humans are to strive to become the microcosm of the macrocosm of the One and the universe. The principle of khilafah is cognisant of the fact that all human activity must be conducted within the principles of this horizontal relationship. The economy itself is a part of this human sphere of activities, created by humans in service of the fulfilment of human needs. Indeed, the subject of the economic agent has been marred by the masculinist understanding of rationality. As is clear in our time, rationality itself is not based on one's sex. However, what learned men have considered rational is a product of sexist understanding. Indeed, what they developed to be the rational mindset is more a product of historic misogyny, built on the notions of masculinist ideas of competition and power. Again, these are not particularly male ideals, but to perpetuate them as masculine excludes women. Developing a gender-sensitive economic agent will provide an avenue for further exploration of various economic elements, such as labour rights and the space for women in the economy. The lack of understanding of women's role in the economy means that there is little focus on their role in countries where Islamic philosophical thinking plays a role in the legislature and social organisation.
Agent in Islamic economics: homo Islamicus
Islamic economics is built on the ontological perspective of the Quran and philosophical deliberation in the Islamic tradition. The genre of ethics is critical for the development of practised Islam as well as philosophy. Hence, ethics dominate the discussions of the economy in Islam (Naqvi, 1981). Indeed, the differences between halal (allowed) and haram (forbidden) are key in economic decision-making; however, ethical considerations are the principles on which the economy is built. Economic theory adopts the cultural background in which it is developed, and neo-liberal economics has undoubtedly been shaped by the Western experience (Mitchell, 2002). Naturally, the academic urge towards posturing economics as a positivist science has led to a dismissal of any notion that may be contaminated by ethical norms. Within Islamic economics, discussion about the economic agent has been of two types: one on the nature of human beings in Islam, and the other on their behaviour in the economic realm. Within the literature in the second type of discussion, critics of homo Islamicus assert that the concept describes an imaginary ideal whose existence has no empirical backing (Kuran, 1995). It is said to be built on utopian concepts without any linkage to the current economic setting, resulting in a deep cleavage between rhetoric and reality (Farooq, 2011). Conversely, another line of thought in Islamic economics discusses homo Islamicus against the concept of homo economicus in orthodox economics. These studies (Chapra, 2000; Zarqa, 2003) follow the logic of the debate in orthodox economics and therefore end up merely "Islamising" the concepts related to homo economicus. This paper can be qualified as part of the third approach to the economic agent in Islamic economics, where scholars attempt to introduce new perspectives on the concept (Asutay, 2007; Furqani, 2015; Mahyudi, 2016).
It is imperative to recognise the centrality of human behaviour to the micro-foundation of Islamic economics (Wahbalbari et al., 2015). Asutay (2007) defines homo Islamicus as: ". . . socially concerned God-conscious individuals who (a) in seeking their interests are similarly concerned with the social good, (b) conducting economic activity in a rational way in accordance with the Islamic constraints regarding social environment and hereafter; and (c) in trying to maximise his/her utility seeks to maximise social welfare as well by taking into account the hereafter". Clearly, the homo economicus is replaced by homo Islamicus in Islamic economics. This superficial Islamisation is not without critique. As noted, the goals of Islamic economics have been relegated to a utopian ambition, since its institutions barely meet any of them. A significant reason for this is that the central agent of these models is a replica of the homo economicus (Kuran, 1995). Mahyudi (2016) argues: ". . . early contributors have committed two strands of mistake; first, they have given too much focus on the individual person's positive aspect of his innate being. Second, they have undermined the interplay of social dynamics in influencing actual expressed preferences". Even a cursory glance at the foundational unit, that is the agent, reveals that it is more a religious, pious ideal than an economic agent. Of course, the normative construct of homo Islamicus does not remove the vices of excessive risk, speculation, hoarding or other unhealthy economic practices. However, a flawed economic agent has meant that there is a greater difference between what the aims of the agent are and what the institutions are catering for. For instance, Farooq (2011) asserts that some economists would suggest that the homo Islamicus does not recognise the concept of the time value of money, yet the banking practices of Islamic institutions reveal the application of the idea. Secondly, the over-reliance on risk transfer instruments, while economic teachings argue for a preference for risk sharing modes of finance, is symptomatic of catering to a more selfish homo economicus rather than the virtuous homo Islamicus. This divide seems simple enough that one wonders how such a flawed sketch of agency exists in Islamic economics. An ardent critic, Kuran (1995) argues that the goal of Islamic economics is not a better economy vis-à-vis orthodox economics; rather, it is to prevent the assimilation of Muslims into a particularly Western global economic culture. Indeed, Islamic economics was part of a revivalist movement with revolutionary goals; it was never a case of competing with prevalent norms.
This may have been the case at the time of the initial surge in Islamic economic thinking as a part of the larger movement of Islamisation of knowledge. That is no longer the case. Academics over the last few decades have recognised the significance of Islamic economic thinking as its own discipline rather than a reactionary movement built against the foil of a vague concept of Western thinking. In the past decade, a rich discourse around maqasid al-sharia (objectives of Sharia or Islamic law) has emerged, highlighting the concept of maslaha or social good as understood by ethicists and theologians in the Islamic tradition (Dusuki and Abozaid, 2007; Laldin and Furqani, 2013). This indicates a trend towards a more holistic and socially aware approach to economics. Furqani (2015, 2017) highlights the discussion in the Quran on the nature of humans, their inclinations and purpose, and how men should engage with each other in the mortal realm. These elements in the Quran provide a theological framework on which to build the skeleton of the economic man. However, an element that is completely absent from these contemporary discussions is that of women. Although the homo Islamicus does not touch upon the subject of women or gender, one cannot assume that it is gender neutral. This is because the economic agent has been developed from Islamic ethics and legislative literature. These make clear distinctions between men and women. Even though the Quran in many areas may address humans as humans, the philosophical and legislative disciplines in Islamic tradition have made women a particular "people". Following is an account of the removal of women from the public sphere in ethics literature, followed by a feminist reading of moral agency in the same.

3. Islamic philosophy and ethics

3.1 Classical ethics and khilafah

This section discusses the khilafah within the medieval ethics literature produced by al-Ghazali (2015), Tusi (1964) and al-Davani (1839). Texts produced by these scholars of the Islamic world continue to have an impact on Islamic thinking, directly and indirectly. Abu Hamid Muhammad ibn Muhammad at-Tusiyy al-Ghazali (1058–1111 CE) was a Persian scholar. A polymath, al-Ghazali was influential as a theologian, jurist and mystic of Islam. Muhammad ibn Muhammad ibn al-Hasan al-Tusi, more prominently known as Nasir al-Din Tusi (1201–1274 CE), was a Persian philosopher, theologian and scientist. He was known for his work on prose and mysticism but was also an accomplished scientist. Muhammad Ibn Jalal Ad-din Davani (1427–1502/03 CE) was a jurist and philosopher in the Kazerun region of Iran.
The notion of khilafah is paramount in ethics literature. As for Aristotle and Plato, the question of human happiness was of prime importance for medieval Islamic philosophers (Butterworth, 1983). al-Ghazali believed that the ultimate happiness or sa'adat (flourishing) emerged from close adherence to the principles of adl (justice) and khilafah (vicegerency of God). These two principles were not only significant in al-Ghazali's work but in the entire cosmology of ethics literature in Islam (Ayubi, 2019). For instance, Ibn-e-Sina (Avicenna) asserted that justice could be acquired through the moral virtues of temperance, courage and practical wisdom (Parens and Macfarland, 2011). Virtues beyond religious dogma were significant for the spiritual development of individuals and in turn for communal cohesion and flourishing. For the medieval ethicists, khilafah meant the vicegerency of God, in the sense of mirroring the macrocosm of the universe within the mortal realm. A man could only rise to the enlightenment of khilafah by engaging with other people in society. The soul of an individual, called nafs (the mystical driving force of life and the root of desire in Islamic theology), must engage with others in the mortal realm to fully actualise itself, or its ethical self. There are two realms of this engagement, the domestic and the public.
All three ethicists begin with the theological imperative of achieving enlightenment for the soul. The soul is, in Islamic theology, a non-gendered entity. It is in a way an eternal entity, entrapped in a mortal vessel, moving through the mortal realm. However, as their discussion progresses, they tend to move away from this central tenet, relegating the various souls to different spheres, classes and roles. This categorisation is visible across the lines of gender and class; thus, the ethicists categorise the souls based on the physical realities that these souls occupy in the world. The domestic sphere, for instance, is a gendered and patriarchal space. Here the divisions are not just between men and women but between their roles in the patriarchal family, which is built around strict hierarchies. The hierarchy in marriage applies principles of governance similar to those applicable to running a state (Ayubi, 2019). al-Ghazali asserts that good behaviour in the household and towards the wife is key to achieving a man's purpose. To this end, a man must display moderation in the time spent between his family, entertainments, and prayer and spiritual pursuits. For Tusi, in his chapter titled Siyasat wa tadbir-ahl (The governance of wife/family), and al-Davani, in his chapter titled Siyasat-I ahl (Governing the wife/family), the household is really about governance. Naturally, the ruler of this state is the man as the vicegerent of God (khalifa), and the foundation of this patriarchal state is marriage.
The three ethicists develop their chapters on marriage by describing the purposes and prerequisites of it. al-Ghazali notes that marrying allows a man to do more "good" as he strives to provide for his family. A wife frees the man from domestic chores to pursue higher ethical goals towards enlightenment. A woman does not need to participate in these as her inferior nafs will not be able to benefit from such lofty tasks, and thus she can only serve as a companion to the goals of a man. Intelligence matters for an ideal wife; however, a woman can only be comparable to other women in intelligence, never to men. In particular, intelligence does not mean that the woman will also have wisdom (hikmat), which remains exclusively the realm of men as it arises from the rational faculty. A woman's intelligence is more "general", in a way that can be harnessed towards the service of the household and the protection of the man's property. al-Ghazali urges that before committing to marriage, a man should develop a balanced enough nafs so that he cannot be controlled or distracted by his wife from his spiritual duties. He cautions that any man who is not able to control his own nafs should not oversee another's. Tusi and al-Davani, too, caution men with weak dispositions against committing to marriage. This stems from the notion that if one cannot deal with women properly then one should not deal with them at all. By thus describing the institution of marriage as a possible challenge to the strength of a man's intellectual and spiritual control, the ethicists turn the domestic sphere into a site for exercising power. According to the ethicists, a wife cannot be relied upon to fulfil her own marital or religious duties and thus it is the responsibility of the man to ensure that she does. The successful or virtuous man would be one who has conquered this challenge and, by extension, the domestic woman.
Although al-Ghazali tends to go into more detail in his prescriptions for what an ideal wife should be like, all three describe almost the same woman as an ideal. What emerges, thus, is the picture of an obedient, religious and fertile woman who takes pride in her home and devotes her own nafs to the enlightenment of her husband's. Men are told to rule over their wives with a combination of benevolence and strength. For Tusi and al-Davani, it is important that the man show his wife benevolence, so much so that the wife begins to seek it via her obedience and domestic efficiency, as she would be dependent on it. Essentially, the ethicists invoke Quranic commands and Prophetic tradition to urge kindness to one's wife. However, the intent of this kindness is merely to serve as a charitable way for the man to rise above the supposedly inept and weak character of the woman, to discipline her and maintain marital ethics. Another arena of patriarchal control is that of money. Legally speaking, in Islam a man has no right to his wife's earnings or property. A woman may be independently wealthy via her tribe or family and may retain this wealth after marriage. Women in the times of our ethicists also participated in paid labour in the form of the production of goods as well as providing services to other women (Rapoport, 2008). Despite the legal literature on Islamic marriages asserting a woman's right to her property, Tusi argues that an ethical wife would actively give up control of her wealth to her husband. Our ethicists deemed money to be the arena of the rational male as well as a space for the enactment of marital ethics. Beyond the religious mandate of providing food, clothing and shelter to the wife, any other financial expenditure was accounted for as household expenditure or charity by the three ethicists. While the ethicists held economic relations in the household to be different from those of the marketplace, i.e. without the evils of money exchange, the reality was that women secured cash payments throughout their marriage in the form of mandatory wedding gifts and maintenance. Husbands were contractually obligated to pay these and were put in prison for failing to do so (Rapoport, 2008). Hence, the domestic sphere was not as simple an organisation as purported by the ethics literature.
The city and societal ethics
Premodern Muslim societies were centred around urban spaces. Cities were the first source of identity and allegiance in Muslim societies (Euben, 2008). al-Davani uses the image of the city as an analogy for the home. For Tusi, the city is a macrocosm of the individual and the household. Therefore, a virtuous city can only emerge from virtuous men and households. Both these ethicists draw from Ibn Rushd the idea that ethics is key in the discussion of politics, as both involve social ordering and civic association among men (Butterworth, 1983). The eminent political philosopher al-Farabi illustrated that happiness in the city can be achieved by the political ordering of virtuous cities by virtuous men (Mahdi, 2000). The city then is an important site for exercising ethics and becomes as important for the virtuous man as the household. Men are imagined nested into the larger universe where personal ethics are part of the larger cosmic ethics. These conceptions read as universal values and ethics; however, the perspectives of the ethicists were very much a product of their own civic realities (Ayubi, 2019). At the peak of their respective careers, all three ethicists were patronised by court funds to increase the intellectual, religious and ethical profile of their patrons. This meant that works produced by such authors were not entirely free of biases emerging from their own lived experiences. For the ethicists, ethics are formulated not just by social processes but also by institutions, such that masculinities are defined in culture but sustained in institutions (Connell, 2000). It is clear from their writings that the ethicists imagined the public sphere to be masculine, after relegating women to the private sphere. On the contrary, women did exist in the social space in a variety of roles in the medieval period across the vast Islamic geographies (Katz, 2014; Hillenbrand, 2003).
Cities are further categorised into classes. Tusi describes four such classes, the philosophers being the first class and craftsmen and farmers being the fourth. Slaves are not mentioned at all in these descriptions, while women are considered a class of their own. Tusi also does not mention the ruler of the city in his classification. He asserts that the head of the city is the one who brings together all the classes and ensures their correct political ordering. The ruler manages each class in a way that each group is located relative to another in the hierarchy of the city and is placed to lead the one beneath itself. This arrangement of leadership is a key aspect of the ruler's role. For Tusi, being fit for authority is what locates a man in the hierarchy of the city, with the ruler being at the head and at the bottom "the people who have no aptitude for leadership, and these are the absolute servants". This aptitude for authority comes from knowledge for the upper class, and for the lower from their skillset (Ayubi, 2019). Similarly to Tusi, al-Davani provides four categories, the first category including philosophers and producers of knowledge, while the last includes tradesmen and people who arrange provisions of food and clothing for the higher classes. For al-Davani, the most significant factor that distinguishes men is the attention that they pay to their spiritual needs and pursuits. The ability of a man to perceive the Divine Unseen is what sets him apart from his peers. However, this ability too is equated with their mental capacity and intellect. Saints are the foremost in his list. They have not been contaminated by the "natural" relations, meaning with the female body, and have the most superior intellect. The middle- and lower-class men, in turn, are placed as such because they are entirely unable to comprehend abstract ideas or the Divine Unseen. Further, a man's intellectual abilities also determine his space in the homosocial hierarchy beyond his profession. It is his capacity for rational thought that elevates a man in the society and grants him power and influence. Nevertheless, all three submit that it is entirely possible that a lesser man may simply be born into power, even though, theoretically, intelligence should determine a man's social rank.
In terms of engagement between the classes, Tusi and al-Davani prescribe that a man should speak to others according to their intellectual level. Tusi argues that a man should always be cognisant of his own intellect and aim to improve his virtues. When he engages with his betters, he should struggle to achieve the higher rank, and if he speaks to someone beneath him, he must endeavour not to slip to his level. Naturally, he should take care to adhere to special etiquette when engaging with rulers or employers. When he speaks to those of a lower rank, he should behave accordingly. The principle is that the treatment of men lower than oneself is decided by whether they can be corrected, uplifted and taught (Ayubi, 2019). How well one performs in these relationships is what will define his virtues. Ultimately, for Tusi and al-Davani, the privileged man is responsible for caring for the lowest members of the community. A man's worldly affairs are connected to his relationship with the Divine. This is the cornerstone of the ethics cosmology in Islam. A man cannot achieve the rank of vicegerency without virtuous conduct with fellow men.
Women in ethics literature
Note that the texts mentioned herein tend to separate, or even ostracise, women. As it is, the three ethicists begin with the ontological understanding that God has created human beings as his representatives on earth, but they continue to perpetuate hierarchies that were not created by God, even in their own world view. For them, speaking of ethics, the role of women is only instrumental in the realm of men. Women's own spiritual refinement is not a concern, as they are created weak and their baser selves come under the control of men. By being able to control women in marriage, men fulfil their ethical roles. Throughout history, the relations between subject and the world, subject and the cosmic, and in turn the microcosmic and macrocosmic, have been written in the masculine form (Irigaray, 1993). Man has been taken to be the neutral gender even when the discussion claims to be universal. More problematically, the ideal wife's imagery is used to bolster the piety of men (Weitz, 2014). None of the characteristics highlighted in the discussed literature are supposed to assist women in achieving spiritual refinement for themselves; they are supposed to be part of the proper order of the household. The emphasis on secluding women in their homes seems nothing more than a fantasy of an ideal world where the public sphere would be governed only by men. This is far from the truth and was so even at the time when these texts were written.
Not only women but a particular class of men are also outside the sacred circle of spiritual enlightenment. Tusi is very clearly of the opinion that class differences are a natural, and even ideal, occurrence. For him, the discernment of individuals is the natural predilection that they have towards certain crafts. So, if a person chooses to be a philosopher, it is because they have a God-given disposition towards it; they are more discerning than one who would choose to be a cobbler. Therefore, if the philosopher is placed above the cobbler in the homosocial hierarchy, then this is a completely natural outcome. It is difficult to say whether Tusi was unfamiliar with the fact that people born into certain classes do not always have the opportunity of choice or the freedom to acquire "discernment". For the ethicists, a man's choice of his son's career should be based on the child's aptitude (Marlow, 2002). A child who is more inclined towards mathematics should not be deviated from his path in favour of philosophy. However, they do not note how the class hierarchies they enlist map onto such choices. An elite man may be inclined to allow a passion for mathematics, but it is unlikely that he will be lenient towards an interest in pottery or leatherwork. Furthermore, women or girls are never seen through this lens of natural aptitudes. Indeed, childrearing and domestic life are presumed to be the only disposition they can have. Tusi and al-Davani both suggest that everyone should remain in their rank so that there is no aggression in the society. Tusi argued that a society should be carefully managed so that people can be content with their rights and not attempt to usurp others'; this management he calls politics (Ayubi, 2019). By carefully ensuring that all are managed in their station and treated as per their just deserts, a society of unequal people is created. The ethicists then prescribe a society where elite men are the only ones with nafses that can achieve the rank of God's vicegerency, by utilising non-elite men and women as arenas for performing their virtue. Thus, their definition of justice is built on the equality of nafses in the spiritual sphere but the inequality of individuals in the mortal sphere, which I would argue creates an unjust world.
Khilafah: a gender inclusive reading
Khilafah, as noted in the earlier section, is the central tenet of moral agency in Islamic philosophy and has a central space in political philosophy in Islam. It has also been central to the struggle for emancipatory readings of Islam by feminist academics. Many of the concepts in Islamic philosophy emerge from the ontological vision of the Quran. al-Ghazali (2000) was of the opinion that any philosophy that disagrees with the sacred text or even deviates from it is madness; he condemned philosophers inclined to deviate from the Quran as heretics. This led to a rich discourse on the relationship between philosophy and religion, the aims of philosophy and the role of a philosopher. Notwithstanding rigid traditionalism in the religious sciences, many academics have opened avenues for the reinterpretation of Quranic concepts considering current needs, much like the ancient interpreters who read the Quran through their socio-political lens. Lamrabet (2015) argues for an approach to the Quran that contextualises the text in the contemporary world while exploring its themes without forsaking spiritual reference. Indeed, philosophy of religion has been, for a long time, Eurocentric and Christian (Frankenberry, 2018). Engaging egalitarian readings via feminism within philosophy is paramount for breaking this impasse where women's contributions to the economy are seen as outliers in the "natural" order of things. I turn towards the Quran itself as the inspiration for egalitarianism, much like others noted in this paper. The central theological device for this exercise is the concept of tawhid.
Tawhid is central to the Islamic world view. It is the concept of monotheism which acts as the foundation for the larger structure of faith and religious practice. Utilising a concept from within the Quran serves two purposes: one, it creates space for working within the tradition of Islamic ontology, and two, by engaging with native concepts we can ensure that the discussion is not subsumed by the methodologies or language of foreign disciplines. The concept of tawhid specifically has been operationalised beautifully by amina Wadud (1999, 2008) through hermeneutics for a better understanding of khilafah. Hermeneutics, defined as the theory or philosophy of interpretation, constitutes the methodological principles of interpretation emphasising continuous engagement with a text (Bleicher, 1980). In his seminal text, Islam and Modernity, Rahman (1982) argues for a "double movement" in Quranic hermeneutics. This means that one must move from the concrete cases discussed in the Quran to the general principles underlying the treatment of those cases, all the while being cognisant of the social conditions of the past and current time. This approach opens the Quran to a more enriching engagement beyond the patriarchal interpretations drawn by some classical exegetes. What sets it apart from the traditional approach to interpretation is that it invites engagement from all peoples and classes, appreciating the universality of the Divine message, thus reclaiming it from a closed circle of elite males. Hence, several female scholars have embraced this approach, notable ones being Barlas (2002), Hassan (2013) and Wadud (1999, 2004). Reading the Quran with a more holistic approach in the context of its ethical principles provides an egalitarian khilafah that is not a marker of masculine, patriarchal control but of moral agency and responsibility. This context allows one to see that the central principle of khilafah is the relationship between humans and their creator (Lamrabet, 2015). This relationship is reflected in the idea that men and women are stewards and guardians of each other and the common good.
Economic agency of women
The above discussion brings us to answering the central question of this paper: how can we theorise an inclusive economic agent in IEP? Furqani (2015) argues for a critical synthesis of Islamic philosophy and the theological teachings of the Quran rather than an assimilation of neoclassical ideas with an "Islamic" prefix. His most relevant insight (for this discussion) is that of the concept of taqwa (literally, to guard or preserve; theologically, consciousness of the Divine) as an alternative to the concept of rationality. Self-interest and the maximisation of utility make the homo economicus a rational being. He argues that since self-interest or greed has no space in the Islamic ethical perspective, Islamic rationality should be redefined. This is an important insight, but I would like to take it in a different direction. Firstly, I disagree with the notion of neglecting certain elements of human behaviour that supposedly have "no place in Islam". This closes the discussion to innovative ideas, not to mention creates a caricature of economic agency that is an ideal collection of morality and virtue. A theory of economic agency must take into account the nuanced matrix of human characteristics; even the Quran does not propagate man as simply a collection of haphazard virtues (Mahyudi, 2015, 2016). Secondly, my issues with the concept of rationality are the same as the above critique of the medieval ethicists' analysis of virtue and piety: its masculinist interpretation of human behaviour. The concept of rationality in Western philosophy is constructed as a contradiction to any behaviour that is supposedly "feminine" (Lloyd, 1993; Rooney, 1991). Hence, if the idea is to propose an egalitarian concept of economic agency, then a masculinist understanding should also be substituted.
Taqwa then provides an excellent opportunity for deliberating decision making and agency within Islamic epistemology. From the purview of the Quran (59:19), taqwa is the cultivation of God-consciousness in humans that ensures a focus on the purpose of life. Taqwa is an approach to existing in the mortal world that considers every action in its larger impact on the spiritual realm. It is not simply man as a consumer or seller but rather a human being in their entire relationship matrix within the natural and social world. I agree with Furqani (2015) that taqwa as a cardinal virtue of the economic agent provides a more holistic approach to agency. This concept finds grounding in both the theological and philosophical traditions in Islam. The Quran highlights the significance of taqwa as a guiding principle which keeps humans from harming each other or nature, helps them harness the innate good in themselves while controlling that which is not, and redresses the imbalance in their personalities. Taqwa is meant to be a guiding principle of life and morality regardless of colour, sex or class. With taqwa as the cardinal virtue of economic agency, the khalifah can have a more defined role within the economy. Within the worldview of tawhid, all people are placed in a structure of horizontal hierarchy where none have the spiritual upper hand over the other. This Quranic principle is central to Islamic epistemology; in essence, anything that goes against it does not have room in Islamic practice. As the khalifah of God, humans are placed directly beneath Him so the Divine can operate through them uninterrupted (Eaton, 1991).
The role of the woman in the household is the foil against which the role of the man is built. Finding space for her role in the economy will entail bringing her out of the home and into the homosocial sphere, within the theoretical context of khilafah. Women have provided and continue to provide for their households and communities even in societies with a more traditional perspective on gender roles. This is reflected in the space that economic questions have started to occupy in religious discourse. Larsen (2015) provides a discussion of fatawa between scholars and practitioners of Islam in Europe. A fatwa (plural: fatawa) is a normative legal statement that is meant to answer a question about Islam. These are legal rulings that respond to lived realities where a questioner may not have found an answer in the Quran or has a unique situation. In a fatwa discussed by Larsen (2015), the mufti (the religious scholar answering the question) notes that a woman is not obliged to spend from her income on her household. If she has become the sole breadwinner of the household, she can provide for it, the caveat being that a homebound man must search for work continuously, as a woman earning for her home is not the natural order of things. Where both the man and the woman work, the man must share household chores, other than the care of babies and children, as the mother is deemed more suited to it. To the question of how a man can remain in his role as a provider if the woman controls the finances of the house, the mufti argues that which party brings money into the household is irrelevant, as the roles of husband and wife are best fulfilled when there is harmony between them. Meaning, even if the woman is financially providing for the family, the man is still the head of the household. He notes that just because more women are now breadwinners, that does not mean that the basic duties of a husband and wife should be challenged.
In the earlier accounts of homo Islamicus, it was clear that none of the referenced scholars had discussed women or gender. It can only be speculated what the reasons for this could be. Perhaps the scholars have assumed that gendered considerations are unnecessary in such discourse around the economy. I am bound to consider here the role of women in the economy in both the ethics literature and the Quran. The ethics literature considered herein refers to agency in the home and society. Economic agency does not seem to be a separate category of analysis. More contemporary literature in Islamic economics does consider the idea of agency within the economy but does not comment on women. According to the Quran and subsequent legislative literature in Islam, women have a solid space in wealth circulation. The Quran emphasises women's singular authority over their own earnings, rights to the property of their kin, rights to mahr (assets or wealth included in a marital contract, paid either at the establishment or dissolution of it) in case of divorce, and maintenance in the marital home. Hence, in all capacities of her allocated roles, a woman will engage with money and therefore the economy. Yet there is a contradiction between the lived realities of women across Islamic societies and the conceptualisation of the woman as a powerful, willing agent in the economy. The ethics literature provides a picture of unequal relations between men and women. Two things are happening in this discussion: firstly, a woman is limited from her social responsibility and ethical refinement because of her deficient nafs, thus removing her from the purview of khilafah. Secondly, she is relegated to the domestic realm, idealised as a dependant on her husband for financial provision. This keeps her from establishing an official presence in the public sphere and hence her economic contributions are ignored. When Tusi and Davani encourage women to give up even their own wealth to the control of their husbands, they are propagating the notion that a woman should have no relationship with money. Her needs are to be provided for by a man. In this way the woman fulfils her role as the docile creature to be cared for and allows the man to be the strong provider that he is meant to be. The fatwa discussed briefly here highlights how these ideas pervade discussions around the household economy when set against contemporary issues.
Conclusion
The ethics literature in Islam has been key not only to the development of philosophical thinking but also to the establishment and reinforcement of the ideal city. This article is part of a larger movement in Islamic thought to consider the ethics literature from a more egalitarian perspective. Specifically, this paper is an attempt to develop a gender-inclusive notion of economic agency in Islamic economic philosophy. The concepts that form the crux of this paper are khilafah, tawhid and taqwa. The current form of the homo Islamicus is inadequate in its conceptualisation, as it views man from the purview of religion without localising him in the economy. This localising becomes even more difficult as the characteristics of this agent are limited to an idealised version of religious piety. Additionally, it does not consider the place of women in the economy. It is thus paramount to juxtapose homo Islamicus against modern realities, where women are increasingly the providers in the family. The model of agency in Islam is the khalifah, or the representation of the Divine in the mortal realm. Although a goal of spiritual enlightenment, khilafah has been limited to the masculine domain by the medieval ethical literature. Read through tawhid, feminist engagement with the Quran has revealed that khilafah is a relationship of responsibility between men, women and the Divine; thus khilafah in the economy must recognise women as breadwinners, producers, consumers and fully participating agents making decisions in the economy. I propose taqwa as the cardinal virtue of the economic agent in IEP. Taqwa, as the engagement with the Oneness of the Divine and mankind (tawhid), is the consciousness of the Divine that guides the decision making of the khalifah. This driving force is not limited to men of a certain class but is exercised by all individuals.
In sum, I argue that the discussion on economic agency in IEP must go beyond the idea of a pious believer and consider the dimension of gender. I propose that the woman should be operationalised as a moral agent working towards spiritual enlightenment as a khalifah of the Divine. This agent participates in the economy with taqwa as their cardinal virtue, working within the framework of tawhid where all agents are to be taken as equal in their pursuits. Inequality as it exists in the contemporary world is neither natural nor desirable and IEP should evolve to reflect this. | 9,695 | 2023-08-22T00:00:00.000 | [
"Economics",
"Philosophy"
] |
Stochastic analysis of the economic growth of OECD countries
Abstract This study examined the determinants of economic development for the 34 member countries of the Organization for Economic Cooperation and Development (OECD) and analysed efficient uses of economic development resource endowments. The methodology included econometric panel data modelling and stochastic frontier analysis, using the Cobb-Douglas production function and trans-logarithmic functional form to analyse data from 2003 to 2012. Economic growth was measured by the gross domestic product (GDP) of each economy. As a result, the determinants of economic development were presented and a ranking of efficiency was obtained for all OECD economies throughout the period of analysis. It was concluded that countries with higher economic growth levels have higher efficiency rankings. For example, countries with higher efficiency rankings were Luxembourg and the U.S.; Chile and Mexico were ranked lower. Finally, there was a positive relationship between growth levels and technical efficiency levels.
Introduction
Economic development is a topic that has been widely studied. Several authors and institutions are dedicated to analysing different models, methodologies and data management to measure development levels of countries and identify their determinants. Two main theoretical approaches are found in the literature: classical and neoclassical, which measure economic growth, as well as other schools of thought that also consider non-economic variables. These are used to design public policies aimed at increasing a population's well-being with a human-centred approach that considers sustainable behaviour as a parameter to ensure development beyond economic growth. Economic growth is still considered necessary in obtaining
Theoretical framework
The approach used in this study focused on a wide range of economic growth determinants. It was geared towards a relative measure rather than an absolute measure of economic performance, considering that many factors interact to attain GDP results (Bodenstein, Faust, & Furness, 2017;De la Fuente, Vallina, & Pino, 2013;Fouquet & Broadberry, 2015;Haavio, Mendicino, & Punzi, 2014).
Authors and institutions were reviewed to build a theoretical framework to classify and define indicators of economic development. The analysis focused on macroeconomic determinants that influence a country's economic performance, for example: income, savings and investment, population growth and unemployment.
GDP is a widely chosen indicator for evaluating the economic behaviour of a country, since it captures the income generated by different economic agents. It also measures the cost of goods and services production in the economy, expressed in terms of factor payments and products produced in each economic sector. Thus, income and expenditure are equal at a macroeconomic level (Dornbusch, Fischer, & Startz, 2004). The difference between GDP at market value and at factor cost is explained by indirect taxes (Dornbusch, Fischer, & Startz, 2004).
Economic growth can be used as a dependent variable as it enables governments to provide more and better public goods and services, such as education, health services and infrastructure (Acemoglu & Robinson, 2012; Mankiw, 2012). GDP is the market value of all final goods and services produced in a country during a given period (Dornbusch et al., 2004). The GDP adds different types of products together to obtain the value of economic activity at market prices. Its purpose is to include all items produced in the economy and sold on the market. However, certain products are omitted, such as those that are produced and sold illicitly and homemade goods that do not reach the market. This statistic is posted quarterly in order to analyse trends, and is seasonally adjusted to account for seasonal production changes inherent to some goods and services (Dornbusch et al., 2004; Jones, 2015). According to Chirwa and Odhiambo (2016), the most relevant variables in theoretical models are as follows: investment or increase in physical capital (Solow, 1956; Swan, 1956), savings (Ramsey, 1928; Cass, 1965; Koopmans, 1965), new ideas and learning-by-doing (Arrow, 1962; Sheshinski, 1967; Uzawa, 1965), R&D and non-Pareto optimality in a competitive market (Romer, 1986), human capital (Lucas, 1988), human capital plus investment (Mankiw, Romer & Weil, 1992), and R&D and imperfect competition (Romer, 1990; Aghion & Howitt, 1992; Grossman & Helpman, 1991).
Savings and investment are two macroeconomic aggregates that are important to GDP growth in the long run (Dornbusch et al., 2004). These factors concern the allocation of resources across different periods of time. One method of increasing future productivity is to allocate current resources to increasing the capital stock, which is accomplished by agents saving part of their current income to finance investment in order to grow (Cojocaru, Falaris, Hoffman, & Miller, 2016; De Gregorio, 2016). Although there is a demonstrated correlation between growth and investment, causality is uncertain (Cole, 2004). Nonetheless, there is evidence that capital accumulation increases productivity and a consensus that higher investment levels accelerate economic growth. Increasing the capital stock raises productivity and accelerates GDP growth. However, beyond a certain scale, capital shows diminishing returns. Thus, an increase in savings raises productivity and income, but does not necessarily accelerate the growth rate of these variables. Nevertheless, investment in physical capital is important, and countries that save and invest more of their GDP grow faster (Cole, 2004).
Investment is also known as gross capital formation, which is composed of expenditures on fixed assets of the economy plus the net change in inventories (Dornbusch et al., 2004). Fixed assets include land improvements, buildings, machinery and equipment purchases, as well as the construction of roads, railways, and similar infrastructure. Inventories are stocks of goods held by firms to meet temporary or unexpected fluctuations in production or sales. Hence, changes in inventories represent the differences between expected and current expenditure in the economy. Accordingly, gross capital formation contributes to growth through real investment, measured in relation to GDP, which reflects the evolution of the physical quantities of capital and output (Acemoglu & Robinson, 2015; Sala-I-Martin, Doppelhofer, & Miller, 2004; Simionescu, Lazányi, Sopková, Dobeš, & Balcerzak, 2017). Chirwa and Odhiambo (2016) define the following as determinants: savings and investment efficiency (Acemoglu & Robinson, 2015; Cole, 2004; Easterly & Wezel, 1989), macroeconomic stability (Fisher, 1992), and institutions, human capital, international openness and public investment (Acikgoz & Mert, 2014; Barro, 1990, 1999, 2003; Burnside & Dollar, 2000; Easterly & Levine, 1997; Knight, Loayza, & Villanueva, 1993).
The economic development strategy has other key aspects, including job creation and the improvement of a country's business environment to promote business creation. Both determine the opportunities offered to people by society. New businesses imply job creation, and employment is important for social cohesion, since full employment helps reduce dissatisfaction among the population (Kerr, Kerr, Özden, & Parsons, 2016; Peri, 2016; Stiglitz, 2002).
According to the International Labour Organization (ILO), the conceptual framework for measuring the labour force was adopted by the International Conference of Labour Statisticians in 1982. This Conference defined classification standards according to a person's activity during a short reference period, such as a week or a day, in three exhaustive and mutually exclusive categories: employed, unemployed, and economically inactive. The labour force is composed of those that are employed and unemployed, as they are part of the economically active population that is either working or looking for work. Thus, the criteria to measure the labour force are threefold: to have a job, to be looking for a job, or to be available for work. The labour force includes both nationals and immigrants because both produce goods and services that are considered in the GDP (Kerr et al., 2016; Peri, 2016).
The economically active population includes people from 15 to 65 years old if they meet the physical and intellectual conditions to have a job (Dornbusch et al., 2004). The inactive population includes those in the same age range that have no job, are not looking for work and are not available for work (Dornbusch et al., 2004). People below or above this age range are considered passive, as it is assumed that they are not in a proper condition to work, or that they have already retired from the work force. Those considered unemployed do not have a job but are either looking for one or are available to take a job offer (Dornbusch et al., 2004).
The unemployment rate is the percentage of the economically active population seeking a job but not yet employed. The unemployment rate is widely used as an overall indicator of a country's economic health (Stiglitz, 2002). Vergara (2005) includes the participation of women in the labour force as an economic growth indicator, and concludes that low participation has at least two negative consequences. First, the skills of a significant fraction of the population are not utilized. Second, lower-income women have lower labour force participation, which deepens income inequality. Aragon-Mendoza, Pardo del Val, and Roig-Dobón (2016) incorporate this gendered perspective with a focus on the creation of quality entrepreneurship.
Another important factor for economic growth is electricity. Electricity plays an essential role in modern life, providing benefits and progress in various sectors such as transportation, manufacturing, mining and communication. Electricity is vital for economic growth and quality of life, not only because it increases productivity, but because it raises energy consumption, which increases exchange opportunities, thus increasing economic welfare (Blazquez-Fernandez, Cantarero, & Perez, 2014;Ciarreta & Zarriaga, 2007;Jumbe, 2004).
Concerning electricity, there are several studies that have established the relationship between energy consumption and economic growth. When a nation uses more energy, production increases, since the use of this energy to operate technology in manufacturing processes increases productivity. In some cases, the availability of electricity enables the incorporation of the above-mentioned technology in processes (Acemoglu & Robinson, 2012; De la Fuente et al., 2013; Magazzino, 2014).
Econometric strategy
Productivity represents the conversion of the inputs of a process (labour and capital) into desirable outputs (sales, profits, etc.) (Solow, 1956). The term productivity is related to the efficient use of resources when producing a good, and can be defined as the relationship established between production and the consumption of productive factors measured in physical units. In the input/output relationship, output can represent any established purpose or anything that the company generates, whereas input can be considered all that is consumed in achieving the output (Diéguez & González, 1994). Grönroos and Ojasalo (2004) defined productivity as the degree of effective transformation of a process' input resources into (i) economic results for the provider of goods/services and (ii) value for consumers. Technical efficiency involves maximizing the level of output that can be obtained from a given combination of inputs, and indicates the degree of success in the use of productive resources. Therefore, inefficiency is the difference between the observed values of production and the maximum achievable values given a certain technology (Albert, 1998). According to the classic paper of Farrell (1957), the level of efficiency of a company can be viewed from two different measures: (i) technical efficiency, which reflects the ability of a company to achieve maximum outputs depending on a set of inputs; and (ii) allocative efficiency, which reflects the ability of a firm to use inputs in optimal proportions, depending on their respective prices. These two measures are combined to measure economic efficiency (Battese & Coelli, 1992; Coelli, Rao, O'Donnell, & Battese, 2005).
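As a hedged illustration (the notation here is ours, not Farrell's or the study's), the output-oriented notion of technical efficiency can be written as the ratio of observed output to the frontier output attainable with the same inputs:

$$TE_{it} = \frac{y_{it}}{y_{it}^{*}} \in (0, 1],$$

so that a value of one indicates a fully efficient observation and values below one indicate the idle-resource situations described below. In the stochastic frontier models introduced later, this ratio is typically recovered as $TE_{it} = \exp(-\mu_{it})$, where $\mu_{it} \geq 0$ denotes the inefficiency term.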
Economists typically use an enterprise production function to summarize technically efficient production methods available to each company.
The production function of a firm shows the maximum amount of output that can be obtained with a given number of factors, and shows the results of different technically efficient production methods (Nicholson, 1997).
The production frontier shows the maximum production level given the technology and resource endowment, which provides the highest level of utility or satisfaction that can be reached, given the resource constraints. The relationship between inputs and output shows the opportunity costs relevant for the economy (Alvarez & Delgado, 2005; Nicholson, 1997).
As the production frontier is the limit of what is possible to produce given the factor endowment, any point situated beyond the boundary is unattainable, while those located inside the frontier represent inefficient situations characterized by idle resources.
There are two alternative methods for estimating production frontiers. One is Data Envelopment Analysis, a deterministic and nonparametric method that eliminates production function assumptions. The other is the Stochastic Frontier Method, which allows for random shocks; under this approach, two alternative production functions are used to estimate the frontier and the efficiency ranking. The first method was used by Medved and Kavcic (2012) in a study regarding efficiency in the Croatian and Slovenian insurance markets.
The production frontier is modelled under two alternative production functions in the related literature (Aigner, Lovell, & Schmidt, 1977;Nicholson, 1997): the Cobb-Douglas function (Meeusen & van Den Broeck, 1977) and the trans-logarithmic function.
To determine the level or existence of technical inefficiency according to the production models used, a test of technical inefficiency was applied (Coelli, Prasada, & Battese, 1998;Kodde & Palm, 1986).
The selection of the functional form is important when estimating technical inefficiency (Tran & Tsionas, 2009). As in most frontier studies, the Cobb-Douglas model and the trans-logarithmic model are evaluated as a technological representation (De la Fuente, Berné, Pedraja, & Rojas, 2009). In most studies on the manufacturing sector, the trans-logarithmic model is the most popular due to its flexibility (Tran & Tsionas, 2009). The following are general forms of both models after linearization (Eqs. 1 and 2):

The Cobb-Douglas model:
$$\ln Y_{it} = \beta_0 + \sum_{m} \beta_m \ln X_{mit} + \nu_{it} - \mu_{it} \quad (1)$$

The trans-logarithmic model:
$$\ln Y_{it} = \beta_0 + \sum_{m} \beta_m \ln X_{mit} + \frac{1}{2}\sum_{m}\sum_{n} \beta_{mn} \ln X_{mit} \ln X_{nit} + \nu_{it} - \mu_{it} \quad (2)$$

In both models, $Y_{it}$ is the output of company $i$ during period $t$; $X_{mit}$ and $X_{nit}$ are the inputs $m$ and $n$ of company $i$ during period $t$; $\nu_{it}$ is the random disturbance, assumed to be normally distributed with a zero mean and constant variance; and $\mu_{it}$ is a non-observable and non-negative random error associated with technical inefficiency.
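To make the estimation step concrete, the following Python sketch maximises the normal/half-normal log-likelihood implied by the composed error $\nu_{it} - \mu_{it}$ (Aigner, Lovell, & Schmidt, 1977) on synthetic data. It is a minimal illustration under stated assumptions, not the study's actual estimator: the data, dimensions, starting values and the half-normal distribution for the inefficiency term are all choices made here for demonstration.

```python
# Minimal sketch of a normal/half-normal stochastic production frontier
# (Aigner, Lovell & Schmidt, 1977), estimated by maximum likelihood on
# synthetic data. All values below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, k = 340, 4                                    # e.g. 34 countries x 10 years, 4 inputs
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])   # constant + log inputs
beta_true = np.array([1.0, 0.4, 0.1, 0.3, 0.2])
v = rng.normal(scale=0.2, size=n)                # symmetric noise (nu)
u = np.abs(rng.normal(scale=0.3, size=n))        # non-negative inefficiency (mu)
y = X @ beta_true + v - u                        # log output

def negloglik(theta):
    """Negative log-likelihood of the composed error eps = nu - mu."""
    beta, ln_sv, ln_su = theta[:-2], theta[-2], theta[-1]
    sv, su = np.exp(ln_sv), np.exp(ln_su)
    sigma = np.sqrt(sv**2 + su**2)
    lam = su / sv
    eps = y - X @ beta
    ll = (np.log(2.0 / sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

ols = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS starting values
start = np.concatenate([ols, [-1.5, -1.0]])
fit = minimize(negloglik, start, method="BFGS")
sigma_v, sigma_u = np.exp(fit.x[-2]), np.exp(fit.x[-1])
print("beta:", np.round(fit.x[:-2], 3))
print("gamma = su^2/(su^2 + sv^2):", sigma_u**2 / (sigma_u**2 + sigma_v**2))
```

In practice, a panel specification such as that of Battese and Coelli (1992), which allows technical efficiency to vary over time as reported below, would replace this pooled cross-sectional likelihood.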
Results
The present study applied the efficiency analysis to the 34 countries that currently belong to the OECD, using data from 2003-2012.
Efficiency is measured by utilizing the real GDP of each economy, in U.S. dollars of 2012 and measured through purchasing power parity, as the output variable. Inputs are expressed by the following variables: labour, measured as the number of workers in each economy; savings, expressed as gross savings in U.S. dollars; capital, stated as gross capital formation in dollars; and finally, electricity consumption, which measures the production of power plants and cogeneration plants, minus losses in transmission, distribution and processing, plus the consumption of cogeneration plants (expressed in kWh). The data for the output and input variables were obtained from the statistical database of the World Bank.
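A hypothetical sketch of how such a panel might be prepared is given below. The file name and column names are invented for illustration and do not correspond to the actual World Bank series identifiers used; the nested loop also shows where the ten second-order trans-log terms (four squared and six cross-product terms of the logged inputs) come from.

```python
# Hypothetical preparation of the country-year panel described above.
# "oecd_panel_2003_2012.csv" and the column names are assumptions for
# illustration only.
import numpy as np
import pandas as pd

panel = pd.read_csv("oecd_panel_2003_2012.csv")      # one row per country-year
for col in ["gdp_ppp", "workers", "savings", "capital", "kwh"]:
    panel[f"ln_{col}"] = np.log(panel[col])

# Second-order trans-log regressors: squares and cross-products of log inputs.
inputs = ["ln_workers", "ln_savings", "ln_capital", "ln_kwh"]
for i, a in enumerate(inputs):
    for b in inputs[i:]:
        panel[f"{a}_x_{b}"] = panel[a] * panel[b]    # yields 10 terms in total
```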
To measure the efficiency of OECD member countries, the first functional form used is the Cobb-Douglas, which is expressed as follows (Eq. 3):

$$\ln PIB_{it} = \beta_0 + \beta_1 \ln L_{it} + \beta_2 \ln S_{it} + \beta_3 \ln K_{it} + \beta_4 \ln kWh_{it} + \nu_{it} - \mu_{it} \quad (3)$$

where: $PIB_{it}$ = Gross Domestic Product of country $i$, for period $t$; $L_{it}$ = number of occupied workers of the labour force in country $i$, for period $t$; $S_{it}$ = national savings of country $i$, for period $t$; $K_{it}$ = gross capital formation of country $i$, for period $t$; $kWh_{it}$ = electricity consumption of country $i$, for period $t$; $\nu_{it}$ is the random disturbance, assumed to be normally distributed with a zero mean and constant variance; and $\mu_{it}$ is a non-observable and non-negative random error associated with technical inefficiency.
The results of the maximum likelihood estimation of the previous Cobb-Douglas model are presented in Table 1. This table shows that, based on the total frontier deviation, 90.3% is due to technical inefficiency, with a technical efficiency that increases over time. The results of the maximum likelihood estimation of the previous trans-logarithmic model are presented in Table 2. This table shows that, based on the total frontier deviation, 89.9% is due to technical inefficiency, with a technical efficiency that increases over time.
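Assuming the usual variance-ratio parameterisation of these models (an assumption on our part, since the parameter labels of Tables 1 and 2 are not reproduced here), the reported shares of total frontier deviation due to inefficiency correspond to:

$$\gamma = \frac{\sigma_{\mu}^{2}}{\sigma_{\mu}^{2} + \sigma_{\nu}^{2}}, \qquad \hat{\gamma}_{\text{Cobb-Douglas}} \approx 0.903, \qquad \hat{\gamma}_{\text{trans-log}} \approx 0.899.$$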
To determine whether or not technical inefficiency is present in the former models, the test of technical inefficiency (Kodde & Palm, 1986) was applied. For the Cobb-Douglas function, a likelihood ratio statistic of 303.605 was obtained with 3 restrictions, against a critical value of 7.045 at the 95% confidence level. Therefore, the null hypothesis of no technical inefficiency was rejected, and technical inefficiency was found within the OECD countries.
Regarding the trans-log functional form, a likelihood ratio test of 405.74 was found with 3 restrictions. A critical value of 7.045 with a 95% confidence level was obtained. Thus, as in the Cobb-Douglas function, the null hypothesis of no technical inefficiency was rejected, and technical inefficiency was found within the OECD countries.
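A short sketch of how such a one-sided likelihood-ratio test is computed is given below. The log-likelihood values are placeholders (the paper does not report them here), and the 7.045 critical value is taken from Kodde and Palm's (1986) mixed chi-squared table for three restrictions at the 5% level rather than from an ordinary chi-squared distribution.

```python
# Hedged sketch of the test for the null of no technical inefficiency.
# The log-likelihoods are placeholders, not the study's estimates.
llf_restricted = -250.0    # placeholder: model without inefficiency (e.g. OLS)
llf_unrestricted = -48.0   # placeholder: stochastic frontier model
lr = -2.0 * (llf_restricted - llf_unrestricted)
critical_kodde_palm = 7.045   # Kodde & Palm (1986) table, 3 restrictions, 5% level
print(lr, "reject H0: no inefficiency" if lr > critical_kodde_palm else "fail to reject H0")
```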
These results were used to determine which of the two functional forms better represented the data's behaviour. The generalized likelihood ratio was used, and the null hypothesis indicated that the Cobb-Douglas was the appropriate functional form. The alternative hypothesis stated that the function would be better represented by the trans-log function.
With 10 degrees of freedom, given by the number of second-order parameters in the trans-log function, the critical value from the Chi-square table at 95% confidence was 18.31, while the generalized likelihood ratio statistic was 0.577. This value does not reject the null hypothesis.
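The functional-form comparison, by contrast, relies on an ordinary chi-squared distribution, so the quoted critical value can be verified directly; the snippet below is illustrative only.

```python
# Verify the chi-squared critical value for 10 degrees of freedom at the
# 95% level and compare it with the generalized likelihood-ratio statistic.
from scipy.stats import chi2

critical = chi2.ppf(0.95, df=10)    # ~18.307, the 18.31 quoted above
lr_form = 0.577                     # generalized likelihood-ratio statistic
print(round(critical, 3))
print("Cobb-Douglas retained" if lr_form < critical else "trans-log preferred")
```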
Although the most suitable functional form proved to be the Cobb-Douglas function, the efficiency ranking was calculated under both functional forms for comparative purposes (Table 3).
According to the Cobb-Douglas functional form, the U.S.A. had the highest level of efficiency, followed by the U.K., France, Italy and Germany. Iceland had the lowest level of efficiency among the 34 economies. Figure 1 shows the evolution of technical efficiency for OECD countries, as measured by the Cobb-Douglas functional form. A growing trend in efficiency was observed, with an average efficiency of 55%. The minimum achieved efficiency was 53% and the maximum was 58%.
Discussion
Inefficiencies were detected in all OECD economies, implying that even advanced economies have room to grow. Nevertheless, average efficiency showed a strong positive trend, even in 2008 despite the Wall Street financial crisis that affected countries around the world. One surprising result was the inefficiency found in Iceland's economy. This could be explained by the economic crisis of 2008-2010 due to the collapse of its banking system, in which the three largest banks in the country declared bankruptcy and their combined debt exceeded six times the country's GDP (BBC News, 2009).
Luxembourg was the second most inefficient economy, although it had one of the highest incomes per capita. This could imply that income per capita is a biased measure of well-being.
In addition, Turkey lies in the middle of the ranking list even though it has faced institutional issues such as the coup d'état. Corruption, a variable related to institutions, does not appear to fully explain efficiency, considering that Mexico has a higher ranking than economies with less corruption, such as New Zealand. That said, the top 15 economies in the ranking belong to countries that are perceived as having strong institutions.
Innovation is thought to be one of the variables that explain development. The top two economies, such as the USA, enforce strong protection of intellectual property rights and maintain agile patent systems. Additionally, Great Britain is a leader in terms of innovation theory and advances in knowledge generation. Nevertheless, South Korea, one of the countries perceived as being very innovative, is ranked 19th of the 34 countries. The same can be said about Ireland, whose strategy in recent years has also been strongly centred on innovation. This could imply that innovation and knowledge generation have a cumulative effect.
Germany is another interesting case. It is often pointed to as having earned the most within the European Union, yet it is only the fourth most efficient of the European countries. By contrast, Great Britain is highlighted for its accomplishments in efficiency, although it may be leaving the EU in the short term.
In general, the results comply with the behaviour that was expected: that more advanced economies are more efficient than emerging economies, and that there are factors that must be further analysed, as some aspects had not been examined by previous empirical studies.
Conclusions
Technical efficiency is achieved when economies maximize output using all available inputs. Determining a country's level of efficiency provides a valuable insight into economic behaviour, and enables comparisons to other economies. If countries are not using their resources properly, they can make adjustments to increase production and improve efficiency.
The results obtained in this study identified the OECD economies that have idle resources and compared their behaviour to that of other countries. Countries with lower efficiency rankings have a much greater potential for increasing productivity given their current combination of productive factors.
Moreover, the study revealed that the top ten most efficient OECD countries are from North America and central Western Europe, as well as Australia and Japan.
These results coincide with their relationship as trade partners. On the other hand, Latin American representatives occupy some of the lower rankings, such as Mexico (24) and Chile (28). Finally, countries with higher levels of efficiency have higher GDPs, and vice versa.
These results should influence public policies to focus on increasing the productivity and competitiveness of individual economies. In terms of the relevant literature, these results demonstrate an alternate method for analysing economic development. The results also provide investors and managers with a new mechanism for evaluating potential for investment, or for examining better business climates, which can influence public policies in other countries.
The objective of this study was to utilize the Stochastic Frontier Methodology to analyse economic performance, and a limitation of the analysis was only using traditional factors of production, labour and capital. However, a second stage of this study will assess other economic growth determinants to widen the scope of the present study. Also, other dependent variables that must be considered in future research are differences between income per capita and GDP, or alternative definitions to measure development growth rather than economic growth.
Future studies should apply the methodology to continental contexts, or regional blocks, i.e., measuring the efficiency of different economies in a continent or region, or to create a world efficiency ranking. It could also be applied to local contexts, measuring the efficiency of different regions/states of a country in order to determine which are the most efficient and to identify specific problems. Studies of this nature will help develop focused public policy, lay out specific conditions, and extend regional growth, in order to work toward more equal conditions and increased efficiency in every country.
Disclosure statement
No potential conflict of interest was reported by the authors. | 5,143.6 | 2019-11-08T00:00:00.000 | [
"Economics"
] |
Tenofovir Alafenamide for HIV Prevention: Review of the Proceedings from the Gates Foundation Long-Acting TAF Product Development Meeting
The ability to successfully develop a safe and effective vaccine for the prevention of HIV infection has proven challenging. Consequently, alternative approaches to HIV infection prevention have been pursued, and there have been a number of successes with differing levels of efficacy. At present, only two oral preexposure prophylaxis (PrEP) products are available, Truvada and Descovy. Descovy is a newer product not yet indicated in individuals at risk of HIV-1 infection from receptive vaginal sex, because it still needs to be evaluated in this population. A topical dapivirine vaginal ring is currently under regulatory review, and a long-acting (LA) injectable cabotegravir product shows strong promise. Although demonstrably effective, daily oral PrEP presents adherence challenges for many users, particularly adolescent girls and young women, key target populations. This limitation has triggered development efforts in LA HIV prevention options. This article reviews efforts supported by the Bill & Melinda Gates Foundation, as well as similar work by other groups, to identify and develop optimal LA HIV prevention products. Specifically, this article is a summary review of a meeting convened by the foundation in early 2020 that focused on the development of LA products designed for extended delivery of tenofovir alafenamide (TAF) for HIV prevention. The review broadly serves as technical guidance for preclinical development of LA HIV prevention products. The meeting examined the technical feasibility of multiple delivery technologies, in vivo pharmacokinetics, and safety of subcutaneous (SC) delivery of TAF in animal models. Ultimately, the foundation concluded that there are technologies available for long-term delivery of TAF. However, because of potentially limited efficacy and possible toxicity issues with SC delivery, the foundation will not continue investing in the development of LA, SC delivery of TAF products for HIV prevention.
Introduction
More than 4 years ago, the Bill & Melinda Gates Foundation (the foundation) initiated investments designed to develop long-acting (LA) delivery of antiretroviral (ARV) drugs for the prevention of HIV transmission in both men and women. This strategy was driven by the challenges of required daily adherence to an oral pill regimen like Truvada [FTC/tenofovir disoproxil fumarate (TDF)] for oral preexposure prophylaxis (PrEP), 1,2 particularly for adolescent girls and young women (AGYW). There were two top-line requirements necessary to achieve this goal: (1) identification of an appropriate active pharmaceutical ingredient (API), and (2) a compatible LA delivery technology that could achieve safe and effective levels of the priority API. As explained below, tenofovir alafenamide (TAF) became the priority API at the foundation for LA HIV prevention. These new foundation-supported efforts focusing on TAF were conducted independently and in parallel with development of other LA prevention products, also supported in partnership with the foundation. These included the injectable suspension formulation of the integrase inhibitor cabotegravir 3 (CAB-LA; GlaxoSmithKline/ViiV), which had originally been developed as an LA treatment option to be used in combination with the LA injectable suspension formulation of rilpivirine (Janssen) 4 ; and the 30-day dapivirine vaginal ring (International Partnership for Microbicides). 5,6 These products were in advanced development at the time the foundation initiated its investments in the LA ARV portfolio described here. Of importance, both the injectable CAB and the dapivirine vaginal ring provide useful advantages for HIV prevention. Consequently, the goal of this foundation initiative was to determine whether there were other LA drug-device strategies that had additional or alternative advantages and could provide end users, particularly AGYW, with more choices.
In January 2020, the foundation convened a meeting of its active LA TAF product development grantees, along with additional experts in the field who were independently working on similar projects (see Table 1 for participants). Through the course of this meeting, data were provided by the individual foundation grantees on their specific product development efforts, including preclinical safety and pharmacokinetic (PK) findings, as well as similar product development summaries from independent groups. This article is a summary of that meeting. Owing to observations reported by several groups regarding preclinical findings of local toxicity with subcutaneous (SC) delivery of TAF, as well as potentially low efficacy as observed in nonhuman primate (NHP) oral dosing studies, [7][8][9] the foundation concluded that continued investment in LA TAF products for HIV prevention was unjustified. This article summarizes the product preclinical development studies along with the data analyses conducted by the meeting participants that led to this conclusion.
Background
Why LA PrEP? The optimal way to control a pandemic is to prevent infections with an effective, durable, safe, and acceptable vaccine that is made available to and successfully used by at-risk populations on a large scale. Unfortunately, the nature of HIV has made vaccine discovery and development extraordinarily challenging. Of importance, alternatives to vaccine strategies for HIV PrEP have also been investigated and successfully developed. Although HIV treatment has proven to be very effective in preventing HIV transmission, 10 implementing this approach involves challenges, particularly in resource-limited settings, including availability of ARV drugs, testing capacity, clinical follow-up, and cost.
Another effective option for the prevention of HIV infection is oral PrEP. HIV oral PrEP efficacy was first demonstrated with the use of oral Truvada (FTC/TDF) in men who have sex with men (MSM) and transgender women (TGW), 11 and in sero-discordant couples. 12 However, consistent adherence to daily oral PrEP has been challenging for many end users over a prolonged period of time, even in the context of controlled clinical trials. 1,2 Similar adherence issues have been identified with end-user-controlled vaginal microbicides. 13 It has been shown that an effective option for addressing the end-user adherence challenge is the use of an LA injectable ARV. Results were recently reported from the HPTN083 trial conducted in MSM and TGW. This was a double-blind, double-dummy, noninferiority trial of CAB-LA versus FTC/TDF oral PrEP. 14 CAB-LA was shown to be three times more effective than oral Truvada in preventing HIV infection. Similar results were recently reported from the HPTN084 trial conducted in women. 15 The apparent benefit of injectable, and possibly implantable, prevention products with regard to end-user adherence is supported by a number of user-preference studies demonstrating that end users, particularly women, prefer LA product options for prevention. [16][17][18] Despite the compelling results of the HPTN083 and HPTN084 trials and the supporting data from the end-user studies, there is still a need for additional LA products for HIV prevention. Because of the potency of most ARV drugs currently available for PrEP, high doses and drug loads are likely required for LA prevention, which could present feasibility challenges with conventional LA delivery technologies (injectables, implants, etc.). Another potential issue with LA injectable ARV products and degradable implants is the need to mitigate possible safety risks arising from the potential inability to remove such products once administered.
Strategy
In light of the potential benefits and challenges associated with current LA HIV prevention products, the foundation initiated an effort to develop new LA prevention products in 2015. This effort involved an assessment of API options for potency and compatibility with novel delivery technologies, drug release/duration potential, toxicity/safety, and PK evaluations.
API selection
The key elements to identifying a suitable API candidate for LA prevention products were as follows: potency; safety; compatibility with potential LA delivery technologies (physicochemical properties); efficacy potential; availability for development, and, if at all possible, regulatory approval for treatment already established (note: the foundation was not pursuing new molecular entities at that time). After an internal survey and evaluation of API options that included nucleoside reverse transcriptase inhibitors [e.g., tenofovir (TFV), TAF], non-nucleoside reverse transcriptase inhibitors (e.g., rilpivirine), integrase inhibitors (e.g., CAB), and a limited number of specific earlier stage compounds that possibly met the selection criteria, TAF (Gilead) 19 was selected. The selection of TAF was based on the potential for long duration of effect and its relatively higher potency to support its SC delivery with LA technologies.
During initiation of the project, several delivery technologies were screened for the probability of success with TAF. LA prevention products require high-potency drugs because it is difficult to formulate and inject large amounts of drug under the skin or into muscle, and typical implants have limited drug-loading capacities. For example, Gunawardana et al. 20 estimated that a 1-year TAF implant would need to release 51 mg of TAF (0.14 mg/day for 365 days). However, this estimate was based on scaling the PK of the active metabolite, tenofovir diphosphate (TFV-DP), from dog to human and targeting the estimated EC90 for TFV-DP in peripheral blood mononuclear cells (PBMCs) at ~40 fmol TFV-DP/10^6 lysed PBMCs. Other studies 19-21 indicated a more conservative daily release rate for efficacy (e.g., 0.4-1 mg/day release), or durations of release (e.g., at least 6 months for an injectable or biodegradable implant), with the amount of drug dosed dependent on the targeted duration of effectiveness.
For example, drug needs for a 6-month duration injection or biodegradable implant can be calculated as follows: 180 days × (0.4-1 mg/day) = 72-180 mg total; and for a target duration of release of 12 months for a more durable implant: 360 days × (0.4-1 mg/day) = 144-360 mg total. Thus, it was important as part of the preclinical development programs to assess the safety of SC delivery of TAF across a wide range of exposures, as well as to gain further insights into TAF's efficacy for HIV PrEP using preclinical models to enable data-driven decisions for the progression of TAF in LA technologies into clinical studies.
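The arithmetic behind these drug-load estimates is simple enough to sanity-check directly; the short sketch below just multiplies an assumed daily release rate by the target duration and reproduces the ranges quoted above (it is an illustration, not a dosing tool).

```python
# Minimal sketch: total drug load implied by a constant daily release rate over a
# target duration. The release-rate range (0.4-1 mg/day) is the assumption quoted
# in the text, not a validated efficacy threshold.

def total_drug_load_mg(release_mg_per_day: float, duration_days: int) -> float:
    """Total API (mg) needed to sustain a constant release for the full duration."""
    return release_mg_per_day * duration_days

if __name__ == "__main__":
    for duration_days in (180, 360):            # 6-month and 12-month targets
        low = total_drug_load_mg(0.4, duration_days)
        high = total_drug_load_mg(1.0, duration_days)
        print(f"{duration_days} days: {low:.0f}-{high:.0f} mg TAF")
        # -> 180 days: 72-180 mg; 360 days: 144-360 mg
```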
LA target product profile
Key elements of the TPP were defined as a means of identifying potential delivery technologies (injectables or implants) and assessing development feasibility and progress. These elements are summarized in Table 2.
Candidate technologies and development partners
After a lengthy survey effort to identify potential LA TAF delivery technologies, the foundation identified the following partner grantees:
(1) University of Washington Department of Bioengineering: injectable "drugamer" technology. 22 Principal Investigator: Dr. Patrick S. Stayton.
(2) RTI International: tunable, biodegradable reservoir implant device. 23 Principal Investigator: Dr. Ariane van der Straten.
(3) Intarcia Therapeutics Inc.: osmotic mini-pump, SC titanium implant. 24 Principal Investigator: Dr. Paul L. Feldman.
These three partners used different technologies to achieve LA SC delivery of TAF: an injection, a degradable implant, and a nondegradable implant. In addition to the efforts of these partners, other groups working independently of the foundation on LA TAF delivery products with other sources of funding were also invited to the 2020 LA TAF Meeting. The purpose of this meeting was to review the current status of LA TAF product feasibility and to determine whether any specific additional investment in these efforts was warranted.
University of Washington: injectable "drugamer" technology
This "drugamer" technology involves conjugating a TAF molecule through a linker to a monomer, which is then polymerized into alternative final architectures (homopolymer, di-block and hyperbranched polymers, and polymer micelle configurations) using reversible addition-fragmentation chain transfer (RAFT), a controlled form of free-radical polymerization. 27 The alternative configurations generated by this process provide different levels of control over drug release from the polymer. The polymer serves as an injectable SC hydrogel reservoir, from which TAF is released through hydrolysis or enzyme-mediated cleavage at the linker. This technology allows for relatively high drug loads, and the depot is held together by the hydrophobicity of the polymer formulation. The hydrophobicity and molecular architecture of the polymer formulation also help keep the TAF drug substance stable inside the SC hydrogel depot, which is important because TAF is susceptible to degradation in aqueous environments. 28 Alternative versions of these formulations were evaluated in mouse PK studies (e.g., Fig. 1). 29 No formal safety assessments were conducted in this program with this delivery system; the goal was to identify formulations for future evaluation in the dog model for PK and safety. The group reported in vivo PK results from mice using alternative alkyl linkers in the TAF drugamer formulations. The homopolymer TAF alkyl-linker formulation demonstrated consistent delivery in the mouse study for 60 days (~0.005 nM TAF/mL plasma, for a total of 6.77 mg TAF per mouse over the 60-day study). This was the first time TAF levels in plasma in vivo could be reported using this technology.
RTI: biodegradable polycaprolactone TAF implant
The technology used by RTI involves the fabrication of a cylindrical implant through hot melt extrusion of polycaprolactone (PCL). The cylinder is sealed at one end and then loaded with TAF and any additional necessary excipients (e.g., polyethylene glycol; sesame, castor, or alternative oils; buffering agents) as needed. Candidate SC implants were evaluated for in vitro release and stability and selected for in vivo evaluation.
The RTI group reported a number of advances with their technology development. For example, they reported API compatibility and successful delivery of a number of different drugs besides TAF. They were also successful in adapting their implant so that it can be inserted in vivo with an existing trocar device, the Sino II contraceptive implant trocar (Shanghai Dahua Pharmaceuticals Co. Ltd). This implant also demonstrated adequate shelf-life stability through formulation optimization efforts. They also demonstrated maintaining physical integrity and successful removal of the device up to 6 months after insertion in vivo, allowing for analysis of residual drug recovered from the device.
Some challenges observed with this device in vitro and in vivo with TAF included: degradation of TAF inside the implant, owing to TAF's sensitivity to water/phosphate-buffered saline (PBS) in the in vitro experimental set-up and to the increasing acidity of the implant microenvironment; uncertainty as to whether this level of drug degradation would be similar in vivo and acceptable from a regulatory perspective; the need to identify all degradation products in vivo; an observed discoloration of the implant over time; and some local reactivity of TAF in vivo.
They also reported a tunable release rate of TAF free base that ranged from 0.20 to 1.0 mg/day with implants produced from research grade PCL. Primary variables for controlling release included implant wall thickness, the oil excipient (e.g., sesame vs. castor oil), and the physical properties of the PCL 30 (e.g., molecular weight, % crystallinity). They were also successful at identifying formulation additives that helped with the stabilization of TAF in aqueous environments (e.g., control of pH with the inclusion of sodium citrate in the drug formulation). 31 The advances made in terms of the product optimization allowed the group to select three candidate formulations for delivery of TAF in a dog model.
In vivo animal studies. RTI conducted PK and local safety studies in three animal models: rabbit, dog, and NHP. The dog studies were carried out with the three optimized formulations, which were engineered for three different release rates through tube wall thickness, PCL molecular weight, and excipient selection. Results of the 6-month dog study were produced for three different delivery doses per day: dose no. 1, 0.16 mg/day; dose no. 2, 0.26 mg/day; dose no. 3, 0.36 mg/day. 32 Two of the three dogs in the placebo arms of doses no. 1 and no. 2 and all three dogs in the placebo arm of dose no. 3 completed the 182-day study, as did two of the three dogs in the active arm of dose no. 1. None of the active arm animals for dose no. 2 completed the study, and one of the three dogs in the active arm of dose no. 3 completed the study. 32 Important PK and safety findings in this study were as follows: (1) stable and low plasma levels of TAF and TFV were observed throughout the study duration; sustained TFV-DP levels were measured at >200 fmol/10^6 PBMCs for up to 6 months, with a rapid drop in PBMC concentrations within 2 weeks of implant removal; (2) site lesions/abscesses were associated with drug dumping and poor device integrity early in the study (formulation 2), and with long-term, chronic exposure (formulations 1 and 3). No animals were killed prematurely and all animals were cleared to return to stock, as lesions were reversible within 2 weeks of implant removal. 31 A second in vivo study was conducted with these devices in rhesus pigtail macaques.
A quantitative summary of safety findings in this NHP study is provided in Table 3. 33 Key PK and safety findings from this NHP study included the following: (1) low sustained TFV exposure in plasma (i.e., below the limit of quantitation [BLOQ]); however, high sustained levels of TFV-DP in PBMCs were observed; (2) all the high-dose animals completed the PK study, showing mild-to-moderate skin irritation/toxicity with long-term use. Hematoxylin and eosin (H&E) staining of tissue surrounding the low- and medium-dose implants revealed moderate to marked deep dermal inflammation.
Intarcia Therapeutics Inc.: titanium osmotic mini-pump implant
This technology (originally developed by the ALZA Corporation and is based on the DUROS delivery technology) is a sterile, nondegradable implant designed to achieve zero-order release kinetics of drugs for up to 1 year. 24 The body of the device is a cylindrical titanium alloy capped at one end by a water permeable membrane, and at the other end by a control diffusion modulator (DM), from which drug is expelled (Fig. 2). The mini-pump has an engine compartment that contains sodium chloride (NaCl) to create an osmotic gradient across the membrane. Through the processes of osmosis, fluid is absorbed from the outside environment through the membrane and into this osmotic engine compartment. The increasing volume of water entering the minipump expands the osmotic engine compartment and pushes a piston that drives formulated drug in the reservoir through an open channel in the DM. There is enough NaCl content in the mini-pump's engine compartment to maintain a saturated salt solution throughout the in-use period, thus maintaining a constant pressure gradient and steady release of drug into the SC space. When drug has been exhausted from the device, it must be removed and replaced by a new one to maintain necessary drug exposure levels.
The Good Laboratory Practices (GLP) safety and PK studies performed by Intarcia delivered TAF in an aqueous vehicle by a continuous infusion through a SC cannula and, thus, were relevant to the overall preclinical assessment of SC delivery of TAF for HIV prevention by any LA SC delivery technology. Of importance, these specific studies were informed by nonclinical PK and pilot safety work conducted by Intarcia and other data sources. 20,21,34 For example, an Intarcia pilot study in rats in which the hemifumarate salt of TAF was administered by continuous SC infusion for 14 days led to no signs of systemic toxicity with very slight edema local to the administration site being observed in two of four animals treated at 1.08 mg/kg/day, as compared with similar observations in just one of four animals in the vehicle group.
Based on the TFV exposure levels measured in Intarcia's 14-day pilot tolerability study at 1.08 mg/kg/day (TFV AUC over 24 h at steady state = 0.237 µg·h/mL), the high dose of 1 mg/kg/day was selected for the GLP toxicology study and was expected to achieve exposures below that measured at the no adverse effect level (NOAEL) in rats after 28 days of daily oral exposure of TAF (6.25 mg/kg/day; day 28 TFV AUC(0-t) = 0.340 µg·h/mL).
Dose levels for dog infusion studies were similarly informed by earlier studies conducted by Intarcia and Gilead. The high dose in the Intarcia GLP dog 28-day toxicity study (833 µg/kg/day) was expected to produce similar systemic exposure to that observed at the oral dose NOAEL in the 9-month oral dog toxicity study reported by Gilead. 34 In the Gilead study, a number of findings were reported at doses 3× the NOAEL level, and included the following: body weight loss, minimal renal toxicity, slightly prolonged PR intervals in the heart, pulmonary changes, and minimal bone loss. Therefore, it was expected that the highest dose in Intarcia's planned SC infusion GLP toxicity study may produce local effects but that there would be no systemic findings, and that the lowest dose would not generate local or systemic findings.
Rat toxicity results: Intarcia 28-day infusion study. The design of this rat study 35 involved the administration of TAF hemifumarate to rats by continuous SC infusion for 28 days and resulted in findings at the infusion site including: (1) exacerbation of infusion site lesions in males at ≥30 µg/kg/day, (2) macroscopic finding of a mass (all males at 1,000 µg/kg/day), (3) dose-related increased incidence and severity of mixed cell inflammation (most males, ≥30 µg/kg/day), (4) increased incidence and/or severity of fibrosis and mononuclear cell inflammation (most males, ≥300 µg/kg/day), (5) increased incidence and severity of necrosis (all males at 1,000 µg/kg/day), and (6) presence of Gram-positive cocci bacteria within the infusion site (some males and females in both control and treatment groups).
The presence of bacteria within the infusion sites was considered secondary to skin ulceration (opportunistic infection) and unrelated to the administration of TAF hemifumarate. At the end of the recovery phase (day 57), macroscopic and microscopic findings noted at the infusion site were of reduced incidence and/or severity compared with the treatment phase. This suggests ongoing resolution of TAF-related exacerbation of infusion site lesions. Local and systemic NOAELs were considered to be 1,000 µg/kg/day. Based on these NOAELs, the estimated clinical margin for local inflammation was two- to three-fold (the total dose at 1,000 µg/kg/day was 300 and 500 µg/day in females and males, respectively), and the estimated clinical margin for systemic exposure was 87.5-fold for TFV, but only 3.3-fold for intracellular TFV-DP. TAF concentrations were below the limit of quantification (<0.01 ng/mL) in all samples from this study.
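Hedged sketch of how such clinical margins are typically derived: the exposure measured at the NOAEL is divided by the anticipated clinical exposure for the same analyte. The numbers below are hypothetical placeholders chosen only to reproduce the 87.5-fold figure quoted above; the actual clinical exposure values are not given in this summary.

```python
# Minimal sketch (assumption-laden): safety margin = NOAEL exposure / anticipated
# clinical exposure. The example exposures are hypothetical placeholders, not
# values reported in the study.

def exposure_margin(noael_exposure: float, clinical_exposure: float) -> float:
    """Fold-margin between the exposure at the NOAEL and the expected clinical exposure."""
    return noael_exposure / clinical_exposure

# Hypothetical example: a NOAEL TFV AUC of 0.35 ug*h/mL against an anticipated
# clinical TFV AUC of 0.004 ug*h/mL gives an 87.5-fold margin.
print(f"{exposure_margin(0.35, 0.004):.1f}-fold")
```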
Beagle dog toxicity results: Intarcia 28-day infusion study. This preclinical in vivo study was conducted in beagle dogs. 35 Preterminal killing of several animals, including two controls and all animals administered 0.833 mg/kg/day, was conducted because of swelling at the infusion site, discharge at the infusion site, and deteriorating conditions of the animals. Clinical observations noted at ‡0.025 mg/kg/day were associated with inflammation at the infusion site, supported by hematology, coagulation, and clinical chemistry changes confirming an inflammatory process. Macroscopic and microscopic observations confirmed mononuclear cell inflammation, mixed cell inflammation, and necrosis at the infusion site.
Because of the severity of the observations noted, a NOAEL for local findings could not be established for this study, essentially providing no clinical safety margin for local inflammation, given that the lowest dose of TAF tested in this study was 0.025 mg/kg/day (0.18 mg/day total dose). It should be noted that results from this study were confounded by: (1) the need to treat the same animals during the study with anti-inflammatory, antibacterial, and/or opioid agents to enable the animals to complete the study; and (2) the use of the continuous SC infusion set-up, which led to local tolerability issues observed in both vehicle and treated animals. However, the local toxicities were exacerbated by TAF administration. There were no TAF-related effects that resulted from systemic exposure to TAF at the highest remaining dose group tested. The NOAEL for systemic exposure to TAF, excluding any infusion site-related findings, was considered to be 0.25 mg/kg/day, which provided estimated clinical exposure margins of 1,523-fold for TAF, 65.5-fold for TFV, and 93-fold for intracellular TFV-DP.
FIG. 2. The osmotic mini-pump is placed in the subdermal space. Interstitial fluid flows consistently and predictably through the semipermeable membrane into the osmotic engine compartment that contains NaCl tablets. Mixing of the salt with fluid causes expansion of the osmotic engine compartment, which pushes a piston, resulting in formulated drug being expelled through the diffusion moderator into the SC space, where the drug is absorbed into the systemic circulation. NaCl, sodium chloride; SC, subcutaneous.
The development efforts of these three foundation-funded partners all used different technologies to achieve LA delivery of TAF: injection, degradable implant, and nondegradable implant. In addition to the work of these partners, other groups who were working independently of the foundation to develop LA TAF delivery products with other sources of funding were also invited to the 2020 LA TAF Meeting. Those groups, their technologies, and their study findings are summarized in the following sections.
NW (SLAP-HIV grant; funded by NIH/DAIDS): nondegradable reservoir implant
The SLAP-HIV program is an NIH/DAIDS-funded grant at NW, focused on the development of LA HIV prevention products. The SLAP-HIV group had been involved in the development of a dry pellet TAF reservoir implant comprising a polyurethane (PU) tube, originally sealed using an adhesive but eventually sealed with heat welding. Differing configurations and materials of these implants lead to different in vitro and in vivo release rates. This device is designed with thin (150-170 µm) PU tubing that is loaded with TAF hemifumarate plus small amounts of NaCl and magnesium stearate. This design functions as a "physical capsule," with the tube wall composition, wall thickness, and overall size of the implant regulating the rate of drug release. There were two "generations" of implants (A and B) used in the following rabbit and NHP studies. Technical differences and in vitro performance data were provided in Su et al. 25 Multiple configurations of the generation A device were evaluated in New Zealand white rabbits and the rhesus NHP model for PK and safety (primarily histopathology), and the results of these studies were published. 25 The rabbit study design involved surgical implantation of devices behind the neck in 12 rabbits (6 placebo, 6 treated); 2 of each group were killed at 28 days and the remainder at 3 months for histopathology assessments. Blood samples were drawn weekly for 3 months for analysis of plasma and PBMC PK, and bimonthly mucosal samples were obtained by vaginal and rectal biopsies to measure tissue levels.
At day 28 the treated rabbits demonstrated focal granulomatous inflammation around the area of the implants. By 90 days, liquefactive and coagulative necrosis was observed in the treated animals at the sites of implantation. These findings were not observed in rabbits with placebo devices. All rabbits containing TAF-loaded plug-sealed implants had inflammation around the implant site, often closest to one or both polar regions of the implant. Animals with "capped" implants, which were prone to sporadic leaking, showed necrosis and perivascular and perineural inflammation, with cuffs of lymphocytes and macrophages surrounding the implant sites and extending outside the implant and tissue capsules. A second rabbit study was conducted that involved some "fixes" to the original study (i.e., placement of implants to avoid self-removal, reduced drug delivery, heat-sealed ends, barium pellets to locate implants lost in animals, and institution of a histopathology scoring system). The study involved dose ranging of TAF that resulted in TFV-DP levels ranging from 68 to 391 fmol/10^6 PBMCs. At 28 days postimplantation, histopathology findings were similar to those seen in the first rabbit study: inflammation with multicellular infiltrates and necrosis at the sites of TAF implants, and little to no findings with placebo implants.
Two generations of implants (A and B) were evaluated for PK and safety in the NHP model with a 12-week study. 25 Although there was a slightly higher tissue response associated with the placebo implant in the NHP relative to the rabbit model, meaningful adverse histopathology findings were associated with TAF implants in the NHPs. In the cases of many animals, fibrosis, hemorrhagic abscesses, and severe granulomatosis were observed with TAF implants. In some instances, implants were lost (via extrusion from the animals) or fell apart in vivo. However, some NHPs had only a minimal increase in adverse response to the TAF implant relative to the placebo. This was illustrated utilizing a new semiquantitative histopathology scoring system to evaluate histopathologic responses to the placebo and TAF implants that were placed in the same animal. 25 For example, in one animal the placebo implant had a thin capsule and mild pericapsular infiltrates of lymphocytes and plasma cells, but the periphery of the thin capsule was associated with multifocal aggregates of lymphocytes, plasma cells, edema, and hemorrhage giving a score of 16. The TAF implant in this animal had a thick capsule filled with proteinaceous fluid, heterophils, plasma cells, macrophages as well as extensive fibrosis and lymphoplasmacytic inflammation extending into adjacent tissues generating a score of 24. The adverse histopathology score was greater in the TAF implant relative to placebo in all animals. A subsequent pilot study where the TAF implants were inserted utilizing a trocar likewise resulted in adverse histopathological results.
In light of the findings of these animal studies, the SLAP-HIV program abandoned further development of TAF implants and switched to the use of CAB.
Houston Methodist Research Institute: nondegradable transcutaneously refillable nanofluidic implant
This is an implantable nanofluidic technology that regulates diffusive drug release using silicon nanochannel membranes. The silicon membranes are nanofabricated, adopting technologies from the semiconductor industry 36 achieving drug delivery through slit-nanochannels etched perpendicularly to the membrane surface. These nanochannels are densely stacked in a titanium reservoir implant as regular square arrays coated with silicon carbide for long-term bioinertness and biocompatibility 37 (Fig. 3A).
The nanochannel membrane is assembled within a titanium reservoir implant and serves as the rate-limiting component for drug release 26 (Fig. 3B). Unlike other implantable delivery systems, the technology is "drug and formulation agnostic" 38 and can be used with both liquid and solid formulations of drugs irrespective of their molecular properties (charge, hydrophobicity, hydrophilicity, molecular weight, and structure). 39 In the case of TAF, initial drug loading of the device is achieved by packing drug powder into the implant. 40 Once the implant reservoir is depleted, drug can be refilled transcutaneously through palpable "refill" ports, which avoids repeated surgical insertion and retrieval procedures. Implant refilling is performed using a loading and a venting needle and can be operated with both liquid and solid drug formulations. The drug is injected through the loading port, and the venting needle allows for flushing the reservoir and device refill. 26 Once the nanofluidic implant is inserted subcutaneously, drug release is initiated by the influx of interstitial fluids penetrating into the device by capillary wetting of the membrane and solubilization of a portion of the drug powder formulation. Solubilized drug molecules then diffuse across the membrane into the surrounding tissues. This establishes a continuous mechanism of solubilization and release that allows for high drug-loading efficiency and promotes long-term formulation stability. Sustained and constant rates of drug release are achieved through electrostatic and steric interactions between drug molecules crossing the nanochannels and the confining channel walls. 41,42 No pumps, valves, or actuators are needed for drug elution. The rate of release is controlled by the nanochannel membrane configuration 39 (i.e., nanochannel size and number).
Although solid drug loading limits drug stability issues, TAF presents poor stability in the presence of water. One strategy to achieve enhanced stability of TAF released from the nanofluidic implant is with the use of a buffering agent for pH control in the range of 5.0-5.5. 43 Because of their greater solubility than TAF, common buffers such as citrate buffer are not able to sustain the pH in the desired range long term, as they are depleted by diffusion out of the implant reservoir at a much higher rate than TAF. To address this, a viable approach is using a low solubility buffer such as urocanic acid, which is released from the implant at a similar or slower rate than TAF. By leveraging the buffering properties of urocanic acid, this group achieved extended stability of TAF released from the nanofluidic device for over 9 months. 43 The additional formulation volume of urocanic acid is smaller than the volume gained by removing the fumarate group from TAF. In other words, the TAF-urocanic acid formulation enables the loading of 500 mg of formulation, which establishes a longer duration of release for this device.
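As a rough feasibility check on how reservoir capacity translates into duration for a constant-rate device, the sketch below divides an assumed drug load by an assumed daily release rate; both numbers are illustrative rather than specifications from this group.

```python
# Minimal sketch (assumes ideal zero-order release and complete utilisation of the
# loaded drug; the 1.4 mg/day rate is illustrative, echoing the in vivo figure
# mentioned for this device elsewhere in the text, not a device specification).

def duration_of_release_days(loaded_mass_mg: float, release_mg_per_day: float) -> float:
    """Days a reservoir of a given drug load can sustain a constant daily release."""
    return loaded_mass_mg / release_mg_per_day

print(f"{duration_of_release_days(500.0, 1.4):.0f} days")   # ~357 days, i.e. roughly a year
```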
The primary focus from this group was their NHP efficacy study, which was conducted with the support of Gilead and NIH. 44 The study involved 14 rhesus macaques (7 females and 7 males). Eight animals (four females and four males) received the SC TAF implant (PrEP group) in the dorsum. The control group (three females and three males) received a vehicle (PBS-loaded implant). An interesting element of this study is that rectal viral challenge was initiated only after the animals had reached levels of TFV-DP believed to be consistent with protection (TFV-DP preventive level considered to be 100 fmol/10^6 PBMCs).
The dose of TAF achieved in the animals with this device was ~1.4 mg/day, which resulted in sustained levels of TFV-DP in PBMCs of ~500 fmol/10^6 PBMCs, well above the anticipated level required for protection against HIV infection. After 10 weekly rectal exposures, two PrEP animals remained uninfected. An additional animal in the PrEP group displayed transient infection with undetectable viral load 5 weeks after removal of the implant and cessation of TAF administration. The control cohort was 3.04 times more likely to be infected after the fourth rectal challenge dose. 44 These results are similar to those obtained with TDF alone in the human Partners PrEP trial, which possibly explains the lower efficacy observed in this study with TAF monotherapy. One of the hypotheses offered to explain the result of this NHP study was that SC delivery of TAF may not achieve sufficient drug concentration in the rectum. An additional hypothesis is that a drug combination such as FTC/TAF would be required for enhanced synergistic efficacy in a rectal challenge model. In this context, this group is exploring the use of the nanofluidic implant for the sustained long-term delivery of different ARVs, including FTC, CAB, 45 and islatravir. Although this study was not a comprehensive toxicity evaluation of this technology and TAF delivery, blinded pathology assessment of tissues surrounding the TAF implant (4 months of implant use) by three independent clinical laboratories displayed a normal foreign-body response with no inflammatory cell infiltration. 43 Specifically, histopathological examination was performed according to the scoring system used by the SLAP-HIV group. 25 Tissue response to the nanofluidic TAF implants was qualified as a "slight reaction." Of note, these results are in significant contrast with other results obtained with polymeric implants (e.g., Su et al. 25), for which a "severe" tissue response was observed despite an order of magnitude lower TAF release rate.
Oak Crest: silicone reservoir implant
This group developed a silicone tube reservoir TAF implant based on a cylindrical silicone scaffold that achieves linear drug release through a controlled number and size of specific delivery channels in the impermeable sheath, as well as an outer PVA membrane that covers the channels. The implant is packed with solid TAF powder, or microtablets, and controlled drug release is achieved through the delivery channels as in vivo fluids enter the device and dissolve the drug. The implant scaffold comprises medical-grade, platinum-catalyzed silicone tubing with an inner diameter of 1.5-2.0 mm and an outer diameter of 1.9-2.4 mm. Additional details of this device were provided in a publication from this group. 20 They observed better drug release control with the free base form of TAF versus the hemifumarate, and release was linear in vitro for 6 months.
Although the first in vivo PK study was conducted in dogs for 40 days, they later evaluated a variety of implant prototypes in dogs, mice, and sheep over multiple studies, 46 primarily for PK evaluation. The team did not observe concerning toxicity findings in any species at doses ≤1.0 mg/day. The question was raised of whether the distribution of TAF metabolites in the dermal tissues adjacent to the implant might be responsible for the safety findings seen in other studies, particularly in terms of dose relative to animal size. It was noted that scaling between different body weights is difficult with respect to local dermal concentrations of TFV and TFV-DP, as evidenced by the observation of significant differences between sheep and mice. Results indicate that there is no equivalent local buildup of drug (i.e., exposure) for implants of the same release rate.
In addition, it was observed in these studies that there were high levels of TFV and TFV-DP in dermal tissues close to the implant with little or no local tolerability issues. An important difference between these mouse and sheep PK studies and those of some of the other groups is that this device was delivering the free-base form of TAF, whereas some other studies with more significant toxicological findings were delivering TAF hemifumarate and used implants of different materials and manufacturing processes. Consequently, this development team believes it is possible that TAF or one of its metabolites, possibly in conjunction with the fumaric acid, is responsible for the observed toxicity in those models demonstrating more significant findings. Addressing these hypotheses would require additional preclinical studies. The questions of the safety and PK of the Oak Crest TAF free-base implants will be addressed to some degree in a planned trial in humans (the protocol for this trial, CAPRISA 018, is available at CAPRISA.org).
The Oak Crest group also reported some safety findings in dose escalation studies in dogs for their implant device. As noted previously, there were no safety observations made with doses <1.0 mg/day, which is encouraging. However, higher doses did lead to safety observations, as given in Figure 4. 47
FIG. 4. Stage 1: scab, slight erythema. Stage 2: slight swelling at dose site, scab, slight edema. Stage 3: mild-to-moderate edema, scab, swelling, ocular discharge, emesis. Stage 4: macroscopic descriptions of swelling and/or firmness in the interscapular implant sites, no expressible fluid. Stage 5: slight swelling, emesis, red-tinged material at dose site and yellow discharge; mild edema, mild erythema.
Other results and data comparisons. Of interest, different products were evaluated for PK in the same animal models. Results from the use of common models (rabbit and dog) that were provided at the meeting indicated that different products have somewhat differing results in terms of PK in the same animal model, which was not unexpected given the differences in the form of TAF used (hemifumarate vs. free base) and the release differences observed with each delivery device.
Preclinical assessments of TAF efficacy. The most relevant preclinical efficacy data comes from the group at Houston Methodist. Their refillable SC implant that delivers TAF with apparent zero-order release kinetics was used in a rectal challenge study in rhesus macaques in partnership with Gilead. 44 The dose estimated to be delivered from this device in this study was 2.0 mg/day. The PK clearly demonstrates sustained steady-state plasma concentrations of TFV and sustained levels of TFV-DP in PBMCs at ~500 fmol/million cells. An interesting part of the study design was that the challenges were not started until the TFV-DP levels exceeded what is thought to be an effective level (~100 fmol/million cells). Despite the apparently appropriate PK profiles and high levels of TFV-DP, infection was delayed but the TAF was not fully protective (~70% efficacy). The control animals were all infected after four challenges.
The Centers for Disease Control and Prevention (CDC) generated NHP challenge data for TAF and FTC/TAF dosed orally. In their original study, rhesus macaques were given a high dose of TAF and challenged rectally with simian HIV (SHIV) 3 days later. This study showed no protection. 7 More recently they tested the efficacy of oral TAF and F/TAF in pigtail and rhesus macaques using rectal 8 and vaginal 9 challenges. In these more recent experiments, animals were dosed 24 h prior to and 2 h postchallenge at 1.5 mg/kg. TFV-DP levels of >100 fmol/million PBMCs were generally achieved. Five of nine pigtail macaques treated with TAF and exposed to SHIV vaginally became infected during the study (15 challenges). Two of these animals, however, did not achieve protective levels of TFV-DP for reasons that are not clear. Excluding these animals, 4 of 7 treated animals were protected, whereas 20 of 21 controls became infected (58%-73% efficacy). A rectal challenge study in which rhesus macaques received FTC/TAF (TAF dose 1.5 mg/kg) orally 24 h before and 2 h after SHIV challenge showed complete protection. This same combination of drugs tested in the pigtail macaque vaginal challenge model conferred 91%-100% efficacy. A summary of these data is given in Table 4. These results suggest that TAF in combination with another drug may be necessary to prevent infection (e.g., FTC/TAF), which would be consistent with the reduced efficacy reported with TAF alone in the Houston Methodist results summarized earlier.
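To make the relationship between raw infection counts and quoted efficacy figures concrete, the sketch below computes a crude attack-rate efficacy from the pigtail vaginal-challenge counts above. The cited studies use per-exposure survival models, so this simplistic calculation will not exactly match their reported 58%-73% range; it is an illustration only.

```python
# Minimal sketch: crude efficacy = 1 - (attack rate, treated) / (attack rate, controls).
# This ignores the per-exposure survival modelling used in the cited NHP studies,
# so the result is only a rough approximation of their reported efficacy range.

def crude_efficacy(infected_treated: int, n_treated: int,
                   infected_control: int, n_control: int) -> float:
    """One minus the ratio of cumulative attack rates in treated vs. control animals."""
    risk_treated = infected_treated / n_treated
    risk_control = infected_control / n_control
    return 1.0 - risk_treated / risk_control

# Counts from the vaginal-challenge experiment described above
# (3 of 7 evaluable treated animals infected vs. 20 of 21 controls):
print(f"{crude_efficacy(3, 7, 20, 21):.0%}")   # ~55%, in the vicinity of the reported range
```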
Conclusions
The studies and efforts summarized at the foundation meeting on the development of LA products for the delivery of TAF for HIV prevention led to a number of important conclusions. Despite the fact that multiple technologies demonstrated that LA delivery of TAF in the form of SC implants was viable, there was also clear demonstration in nonclinical animal models that SC delivery of TAF could lead to safety and/or tolerability issues during clinical development. There are several possible explanations for the observed toxicity, including: differences related to delivery of TAF hemifumarate versus free base; rate of local drug release; the potential impact of TAF metabolite release or production in the local tissue; the local release of excipients; and combination effects because of local deposits of excipients and metabolites. Furthermore, it was also shown that TAF alone may not be adequate to achieve an appropriate level of protection from HIV infection in comparison with other products currently in development. Consequently, it was concluded that additional investment in LA TAF product development efforts was not appropriate. However, the performance of the delivery technologies suggested that LA delivery of other drugs with better safety and efficacy profiles could be viable going forward. | 9,662.8 | 2021-04-29T00:00:00.000 | [
"Biology"
] |
Mabuchi spectrum from the minisuperspace
It was recently shown that other functionals contribute to the effective action for the Liouville field when considering massive matter coupled to two-dimensional gravity in the conformal gauge. The most important of these new contributions corresponds to the Mabuchi functional. We propose a minisuperspace action that reproduces the main features of the Mabuchi action in order to describe the dynamics of the zero-mode. We show that the associated Hamiltonian coincides with the (quantum mechanical) Liouville Hamiltonian. As a consequence the Liouville theory and our model of the Mabuchi theory both share the same spectrum, eigenfunctions and - in this approximation - correlation functions.
Introduction
As a first step towards understanding four-dimensional quantum gravity, one may consider studying two-dimensional gravity as a toy model since many computations can be carried out exactly. Since the partition function contains an integral over all metrics, this is equivalent to the study of statistical models on random geometries. As such it displays rich connections with other fields of physics (statistical models, string theory) and mathematics (probability theory, random matrices and differential geometry).
In two dimensions, diffeomorphisms can be gauge-fixed by adopting the conformal gauge, in which the metric is written in terms of a fixed reference metric g_0 and the field φ, called the Liouville mode (any quantity given in the metric g_0 carries an index 0). The description of the problem is simplified since the gravity dynamics is captured by a single scalar field. The latter is described by an effective action S_grav[g_0, φ] that arises from quantum effects and which provides dynamics to the metric, which otherwise is non-dynamical at the classical level. The cosmological constant already present in the classical action contributes a term proportional to the area. In the simple case when the matter theory is described by a conformal field theory (CFT), the effective action can be obtained by integrating the conformal anomaly and is proportional to the Liouville action. Since its introduction by Polyakov [1], the Liouville theory has been widely studied: important steps in its definition have been the computation of the critical exponents, the computation of the spectrum and the definition of physical operators, and finally establishing that it defines a consistent CFT. The reader is referred to the reviews [2][3][4] for more details. It was shown recently in [5,6] that other functionals contribute to the effective action when the matter is massive. Each of these functionals comes with its own coupling constant, which can be computed from the parameters in the classical action and from the Liouville central charge c = 26 − c_m, where c_m is the matter central charge. In this letter we ignore the terms denoted by the dots and focus on the Mabuchi action S_M[g_0, φ]. For example, it arises as the leading term in a small-mass expansion in the case where the matter is a massive scalar field with a coupling to the curvature [6]. This action has been studied extensively in differential geometry, starting with the seminal work [7], but it did not appear in physics until recently [5,6]. Despite the fact that non-conformal matter is more relevant for describing the four-dimensional world, this topic has been mostly ignored in the 2d gravity literature (see [8][9][10] for some exceptions). For this reason it is essential to study in more detail 2d gravity with massive matter and to reproduce the analysis that has been done for the Liouville theory. In particular, the 1-loop correction to the KPZ relation [11] due to the Mabuchi functional has been computed in [12]. Moreover, a recent series of works put forward that the very same functionals that constitute S_grav also appear in the context of the fractional quantum Hall effect, as the leading terms in the development of the generating functional in the number of flux quanta [13][14][15]. Hence understanding the physical properties of these functionals would shed light on this phenomenon.
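The display equations this paragraph refers to did not survive extraction. In a common normalisation (assumed here, and not necessarily identical to the authors' conventions), the conformal-gauge decomposition and the cosmological-constant contribution read:

```latex
% Conformal gauge and cosmological-constant term (standard conventions, assumed):
\begin{align}
  g_{\mu\nu} &= e^{2\phi}\,(g_0)_{\mu\nu}, \\
  S_{\mu}[g_0,\phi] &= \mu \int \mathrm{d}^2x\,\sqrt{g}
                     \;=\; \mu \int \mathrm{d}^2x\,\sqrt{g_0}\;e^{2\phi},
\end{align}
% i.e. once the reference metric g_0 is fixed, all of the metric dynamics is carried
% by the Liouville mode \phi.
```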
In this letter we propose a (1 + 0)-dimensional action that reproduces the main features of the Mabuchi action in order to describe the quantum mechanics of the Liouville zero-mode for the Mabuchi theory. We find that the Hamiltonian of this model is equal to the (minisuperspace) Liouville Hamiltonian. As a consequence both theories share the same spectrum and -in this approximation -the 2-point and 3-point functions are identical. Additional considerations on the Mabuchi theory (including the derivation of our model) and the coupling of massive matter to 2d gravity will be presented elsewhere [16].
Mabuchi action
Defining the area A of the surface, the Mabuchi functional is more conveniently expressed when the Liouville field is parametrized by the Kähler potential K and the area [5,6]. The Liouville field φ is uniquely determined by the pair (A, K), and positivity of the exponential implies an inequality on the Kähler potential. In terms of functional integration, one needs to work with the partition function at fixed area: metric variations are restricted to those that preserve the area, and the cosmological constant (2) does not contribute. The full partition function can be recovered from a Laplace transform, which amounts to introducing the cosmological constant (2) and effectively replacing the area A by the cosmological constant µ.
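The defining formulas were likewise lost in extraction; the standard relations they refer to, reconstructed here under the usual conventions, are the area of the surface and the Laplace transform linking the fixed-area and full partition functions:

```latex
% Area and fixed-area vs. full partition function (standard relations, reconstructed):
\begin{equation}
  A \;=\; \int \mathrm{d}^2x\,\sqrt{g_0}\;e^{2\phi},
  \qquad
  Z[\mu] \;=\; \int_0^{\infty} \mathrm{d}A\; e^{-\mu A}\, Z[A],
\end{equation}
% so the cosmological constant \mu is Laplace-conjugate to the area A.
```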
In this parametrization, the Mabuchi action takes the form given in [6] (in Lorentzian signature). Since this functional is bounded from below and convex, its (Euclidean) functional integral is well-defined.
Solutions to the equation of motion correspond to constant-curvature metrics; the resulting equation is identical to the equation of motion of the Liouville action (3). At variable area, upon adding the cosmological constant term (2), this equation is modified in a way that suggests a replacement rule for switching between fixed and variable area. This relation also follows from integrating (10) over the manifold and from the Legendre transformation of the cosmological constant [16].
Minisuperspace analysis
The minisuperspace approximation truncates the Liouville field to its time-dependent zero-mode on flat Lorentzian space. It is well suited for determining the spectrum and the associated operators. Indeed, the Hilbert space can be constructed from the knowledge of what happens at one spatial point: adding spatial dynamics just provides multiparticle states (i.e. the Fock space) built from this Hilbert space sitting at every point. For example, the energy levels of a free scalar field are determined by the point-particle approximation, which is just the quantum harmonic oscillator. We consider time-dependent fields on flat spacetime, with the spatial dimension compactified into a circle. We propose a minisuperspace action (13) for the Mabuchi theory at variable area (in Lorentzian signature), together with the relation between φ and K given by e^{2φ} = K̇/(4πµ).
Since the Mabuchi action at variable area is not known, it is not clear how to perform rigorously the Wick rotation in order to obtain the Lorentzian action at variable area, from which the Hamiltonian formalism is sensible. In analogy with the minisuperspace of the Liouville action, the action (13) reproduces for the zero-mode K(t) the main features of the full Mabuchi action (8): it contains a kinetic term for the Kähler potential and the potential term is proportional to φ e^{2φ}. Moreover the linear term in K is not present since it vanishes for R_0 = const, and the area is replaced by the cosmological constant µ through the Laplace transform of the path integral. For these reasons, even if the action (13) does not correspond exactly to the minisuperspace of (8), it is expected that it captures the main features of the dynamics of the zero-mode and that it can be used to determine the spectrum. Nevertheless the action (13) can be derived in different ways under (different) mild assumptions: 1 a detailed explanation of these various possibilities is outside the scope of this letter and we refer the reader to the companion paper [16].
As a consistency check, it is straightforward to verify that the variation of (13) agrees with the minisuperspace approximation of (10). The second-order derivatives in the action (13) cannot be removed by integration by parts. Fortunately the action does not depend on K itself, and the field redefinition J ≡ K̇ (16) brings (13) to an action which is first-order in time. The canonical momentum P associated to J is seen to correspond to the Liouville mode φ by comparing the previous equation with (14) written in terms of J. It is well known that a canonical transformation can be performed in order to exchange the position and momentum, where Π is the conjugate momentum of the Liouville field. In terms of these variables the Mabuchi Hamiltonian takes a form that is straightforwardly checked to be equivalent to the Hamiltonian of Liouville theory (3) in the minisuperspace approximation.
1 The simplest one consists in taking the limit R_0 → 0 and A_0 → ∞ such that χ = const (and keeping A = const). The Laplace transform of the resulting Hamiltonian is equivalent to the replacement (11) and yields (20). Other methods include using the Ostrogradski formalism or the fact that the kinetic and potential terms of the Mabuchi action are respectively given by the Legendre transformation of the Liouville kinetic term and of the cosmological constant action [17].
Quantization
Since Mabuchi and Liouville theories have identical Hamiltonians, they also share the same spectrum. For comprehensiveness we recall the canonical quantization of the Hamiltonian (20).
The eigenvalue equation for the Hamiltonian reduces, upon a change of variables, to a differential equation written in terms of the combination μ̄ = πµ. The latter corresponds to the modified Bessel equation, and the solutions that are well behaved as φ → ∞ are the wave functions (24a). The eigenfunctions have been normalized such that the incoming plane wave has a unit coefficient. The expansion for φ → −∞ indicates that the waves are reflected by the potential with a reflection coefficient. As a consequence, wave functions with ±p form a superposition and are not independent; this divides the number of states by two.
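Since the displayed equations were lost, the following is a hedged reconstruction of the standard minisuperspace eigenvalue problem and its Bessel-type solution; the overall normalisation of the potential is a common convention and may differ from the authors' by numerical factors.

```latex
% Minisuperspace eigenvalue problem and Bessel-type solution (normalisation assumed):
\begin{align}
  \left(-\frac{\mathrm{d}^2}{\mathrm{d}\phi^2} + 4\bar{\mu}\,e^{2\phi}\right)\psi_p(\phi)
      &= p^2\,\psi_p(\phi), \qquad \bar{\mu} \equiv \pi\mu, \\
  \text{setting } \ell \equiv 2\sqrt{\bar{\mu}}\,e^{\phi}: \qquad
  \ell^2\,\partial_\ell^2\psi_p + \ell\,\partial_\ell\psi_p - \bigl(\ell^2 - p^2\bigr)\psi_p &= 0,
\end{align}
\begin{equation}
  \psi_p(\phi) \;\propto\; K_{ip}\!\bigl(2\sqrt{\bar{\mu}}\,e^{\phi}\bigr),
\end{equation}
% the modified Bessel equation of imaginary order ip; K_{ip} decays as \phi \to +\infty
% (the "well-behaved" solutions), while for \phi \to -\infty it is a superposition of
% incoming and reflected plane waves e^{\pm ip\phi}, which is the origin of the
% reflection coefficient discussed in the text.
```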
Wave functions are normalizable only for p ∈ R, and they form an orthogonal set. Hence physical states are associated with the eigenvalues p ∈ R+ and with the wave functions (24a). Moreover it can be seen that the reflection coefficient is a pure phase for these states, indicating that the potential is totally reflecting. Finally, a semi-classical approximation of the correlation functions can be computed from integrals involving the wave functions (24a); in particular, the 3-point function in the minisuperspace approximation is given in [18].
Conclusion
The main result of this letter is the computation of the spectrum of the Mabuchi theory. We have shown that it coincides with the spectrum of Liouville theory. This fact is striking since both actions have a very different origin and their forms differ vastly beyond the minisuperspace approximation (in particular the Mabuchi action is non-local in terms of the Liouville field). On the other hand, it is not known whether the Mabuchi action defines a CFT but arguments from consistency of 2d gravity in the conformal gauge indicate that it should not be. Indeed the sum of the gravity and matter actions should define a CFT of vanishing central charge in terms of the metric g 0 : since the massive matter is not invariant its transformation needs to be compensated by the gravitational sector, which would not be invariant by itself as a consequence. It would be very intriguing to have two theories with the same spectrum, but one being a CFT and not the other. From the previous comments one may fear that the Liouville and Mabuchi actions describe a unique theory in two different but equivalent languages since they share many properties. This is certainly not the case because they do not contribute in the same way to the string susceptibility exponent [12]. This question deserves more investigation.
Obtaining a variable area formulation of the Mabuchi action is crucial in order to provide a rigorous proof of the minisuperspace action (13). Moreover such a formulation would be useful for addressing other problems since it is more intuitive.
The Mabuchi theory is a key element for understanding two-dimensional quantum gravity with nonconformal matter, and for this reason it is important to study better its physical properties. Furthermore, this would also be important in the context of condensed matter and it may even provide connections to differential geometry. | 2,861.6 | 2015-11-19T00:00:00.000 | [
"Physics"
] |
Bis(1,10-phenanthroline-κ2 N,N′)(sulfato-κ2 O,O′)nickel(II) propane-1,2-diol monosolvate
In the title compound, [Ni(SO4)(C12H8N2)2]·C3H8O2, the NiII atom exhibits a distorted octahedral coordination by four N atoms from two chelating 1,10-phenanthroline ligands and two O atoms from an O,O′-bidentate sulfate group. A twofold rotation axis passes through the Ni and S atoms and the mid-point of the hydroxyl C—C bond of the propane-1,2-diol solvent molecule. The dihedral angle between the two chelating N2C2 groups is 85.61 (8)°. The [NiSO4(C12H8N2)2] and propane-1,2-diol units are held together by a pair of symmetry-related intermolecular O—H⋯O hydrogen bonds involving the non-coordinating O atoms of the sulfate ion. Due to symmetry, the solvent molecule is equally disordered over two positions.
Comment
The self-assembly of coordination polymers and the crystal engineering of metal-organic coordination frameworks have recently attracted great interest, owing to their interesting structural topologies and potential application as functional materials (Batten & Robson, 1998; Zhang et al., 2010; Zhong et al., 2011). The neutral bidentate ligand 1,10-phenanthroline (phen) as an auxiliary ligand has been widely applied in constructing interesting coordination polymers.
Recently, during attempts to synthesize mixed-ligand coordination polymers with phen as the auxiliary ligand via alcohol-solvothermal reactions, we unexpectedly obtained some nickel-phen complexes (Zhong et al., 2009; Ni et al., 2010; Zhong & Ni, 2012) with interesting four-membered chelate rings. We here report the title compound, [Ni(SO4)(C12H8N2)2]·C3H8O2, as part of our systematic investigation of nickel complexes with bidentate bridging sulfate ligands. It is isostructural with the previously reported cobalt(II) analogue (Zhong, 2013).
The single-crystal X-ray diffraction experiment revealed that the crystal structure of the title compound consists of a neutral monomeric [Ni(SO4)(C12H8N2)2] complex and a propane-1,2-diol solvent molecule. A twofold rotation axis (symmetry code: -x + 1, y, -z + 1/2) passes through the Ni and S atoms, and the mid-point of the hydroxyl C-C bond of the propane-1,2-diol solvent molecule is likewise located on the same crystallographic axis. The NiII metal ion has a distorted NiN4O2 octahedral geometry, with four N atoms from two chelating phenanthroline ligands and two O atoms from an O,O′-bidentate sulfate anion (Fig. 1). The Ni-O bond distance of 2.107 (2) Å, the O-Ni-O bite angle of 67.95 (9)°, the Ni-N bond distances in the range 2.076 (2)-2.082 (2) Å, the N-Ni-N bite angle of 80.09 (7)° and the dihedral angle of 85.61 (8)° between the two chelating NCCN groups are in good agreement with those observed in the previously reported nickel complexes (Zhong et al., 2009; Ni et al., 2010; Zhong & Ni, 2012) (Table 1).
The solvent molecule is disordered over two positions and was refined with a site-occupancy ratio of 0.50:0.50. The metal complex and the solvent molecules are held together by a pair of intermolecular O-H···O hydrogen bonds, which help to further stabilize the crystal structure (Fig. 1 and Table 2).
Experimental
Green block-shaped crystals of the title compound were obtained by a procedure similar to that described previously (Zhong, 2013), but with NiSO 4 ·7H 2 O in place of CoSO 4 ·7H 2 O.
Refinement
The non-hydrogen atoms were refined anisotropically. The H atoms of phen were positioned geometrically and allowed to ride on their parent atoms, with C-H = 0.93 Å and U iso (H) = 1.2U eq (C). The H atoms of propane-1,2-diol were placed
Figure 1
The molecular structure, showing the atom-numbering scheme, with displacement ellipsoids drawn at the 30% probability level. The light broken lines depict O-H···O interactions. Unlabelled atoms are related to the labelled atoms by the symmetry operator (-x, y, -z + 1/2).
Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and the goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger. | 994 | 2013-08-21T00:00:00.000 | [
"Chemistry"
] |
Diagnostic accuracy of deep learning using speech samples in depression: a systematic review and meta-analysis
Abstract Objective This study aims to conduct a systematic review and meta-analysis of the diagnostic accuracy of deep learning (DL) using speech samples in depression. Materials and Methods This review included studies reporting diagnostic results of DL algorithms in depression using speech data, published from inception to January 31, 2024, in the PubMed, Medline, Embase, PsycINFO, Scopus, IEEE, and Web of Science databases. Pooled accuracy, sensitivity, and specificity were obtained by random-effects models. The Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2) was used to assess the risk of bias. Results A total of 25 studies met the inclusion criteria and 8 of them were used in the meta-analysis. The pooled estimates of accuracy, specificity, and sensitivity for depression detection models were 0.87 (95% CI, 0.81-0.93), 0.85 (95% CI, 0.78-0.91), and 0.82 (95% CI, 0.71-0.94), respectively. When stratified by model structure, the highest pooled diagnostic accuracy was 0.89 (95% CI, 0.81-0.97), in the handcrafted group. Discussion To our knowledge, our study is the first meta-analysis of the diagnostic performance of DL for depression detection from speech samples. All studies included in the meta-analysis used convolutional neural network (CNN) models, posing problems in deciphering the performance of other DL algorithms. The handcrafted models performed better than the end-to-end models in speech depression detection. Conclusions The application of DL to speech provides a useful tool for depression detection. CNN models with handcrafted acoustic features could help to improve diagnostic performance. Protocol registration The study protocol was registered on PROSPERO (CRD42023423603).
Background and objective
Depression disorder is a common mental disorder, involving low mood, loss of interest in everyday life, and other symptoms, which lead to burden, disability, and even suicide. 1 The World Health Organization reports that 280 million people were diagnosed with depression in 2019, including almost 10% of children and adolescents. 2 Early recognition of depression reduces the complication of treatment, shortens the course of the disease, and leads to positive treatment outcomes. 3 Currently, clinical symptoms, supplemented with objective physiological indicators and questionnaires, are considered to diagnose depression. Clinical symptoms must last for at least 2 weeks to confirm a diagnosis of depression, leaving patients with limited care or treatment during the early stage of the disorder. 4 Moreover, subjective factors, such as patients' expressions, cultures, and attitudes, may make the diagnosis of depression more complex, with a greater probability of misdiagnosis. Therefore, recent studies suggest using signal processing methods, including audio, 5 videos, 6 and electroencephalogram (EEG), 7 to increase the diagnostic accuracy of depression.
Speech has been proven to be an important biomarker for depression detection, since people with depression tend to speak at a lower rate, give more prolonged pauses, and vary their pitch less than healthy people. 8,9 Compared with other biomarkers, such as videos, EEG, and skin conductance, speech has many advantages. First, it is easy and noninvasive to collect using smartphones or computers. Second, it contains various kinds of information related to depression symptoms, and this information is difficult to hide. Third, it reduces privacy exposure for patients. 5 A neural network is a series of connected weighted nodes that models the function of the biological nervous system of the human brain. 10 Neural networks provide effective tools for speech processing since they have the ability to automatically learn useful features from raw speech, reducing the subjectivity of manual feature selection. 11 The successful application of DL algorithms in speech signal processing and classification presents a novel opportunity to improve the performance of automatic depression detection. 12 Recent reviews addressed various psychiatric disorders and artificial-intelligence techniques, and they gave a comprehensive explanation of the importance of applying artificial intelligence to support clinical diagnosis. 5,11,13,14 However, to the best of our knowledge, few reviews have focused on the use of deep learning (DL) algorithms to detect depression in speech. Thus, we aim to provide a systematic review and meta-analysis to evaluate the diagnostic performance of DL algorithms in detecting and classifying depression using speech samples.
Methods
This review was conducted according to the Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies statement. 15,16 The study protocol was registered on PROSPERO (CRD42023423603).
Search strategy
We searched the following databases: PubMed, Medline, Embase, PsycINFO, Scopus, IEEE, and Web of Science, up to January 31, 2024, using keywords including, but not limited to, combinations of the following: depressi*, depressive disorder*, deep learning, machine learning, artificial intelligence, neural network, automat*, sound, speech, voice, acoustic*, audio, vowel, vocal, pitch, prosody. The complete search strategy is presented in the Supplementary Material.
Inclusion and exclusion criteria
This review includes studies evaluating the diagnostic accuracy of DL algorithms in depression using speech samples. After screening, we excluded studies for which no full text was available.
Study selection and data extraction
Titles and abstracts of the retrieved literature were screened for eligibility. Relevant articles were read in full, and data were extracted from the articles that met all inclusion criteria. Two authors (L.L. and L.L.) conducted all these steps independently, and a third researcher (Y.W.) was involved to resolve disagreements and uncertainties in the study selection and data extraction processes by discussion.
The following data were extracted from each included study independently: title, authors, year of publication, diagnostic standard (scales), features, classification methods, model structure, and diagnostic test results (TP, TN, FP, and FN).
Statistical analysis
Sensitivity, specificity, and accuracy were calculated with 95% CIs based on the TP, TN, FP, and FN values extracted from the included studies for the meta-analysis. The accuracy can be calculated as accuracy = (TP + TN)/(TP + TN + FP + FN). The interpretation of accuracy could be affected by the prevalence of depression because, in cases of very high or very low prevalence, accuracy might not provide a complete picture of a test's performance. In our included studies, the prevalence is neither too high nor too low. Therefore, our pooled estimates of accuracy, and especially their interpretation, are unlikely to be affected by the varying prevalence of the condition. We used the I² statistic to measure the heterogeneity across studies and subgroups, with 25%, 50%, and 75% considered as thresholds indicating low, moderate, and high heterogeneity, respectively. 17 A P-value was used to assess statistical significance, and P < .05 was considered statistically significant. A funnel plot was used to assess publication bias. All the analyses were performed using RStudio version 12.0 with the meta package. 18 Pooled estimates of depression detection in speech using DL algorithms were obtained. The leave-one-out method and subgroup analysis were used to evaluate sensitivity and reduce the heterogeneity among the studies. An SROC curve, which represents the performance of a diagnostic test, was also built to describe the relationship between test sensitivity and specificity. 19
Assessment of bias
QUADAS-2, recommended by the Cochrane Collaboration, was used to evaluate the risk of bias in each study by two authors (L.L. and L.L.), and uncertainties were discussed with a third researcher (Y.W.). QUADAS-2 evaluates 4 key domains: patient selection, index test, reference standard, and flow and timing. Each domain is analyzed in terms of risk of bias, with particular attention given to concerns about applicability in the first three domains. The assessment of bias was conducted using the Review Manager software version 5.3. 20
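The pooled-estimate computation described in the Statistical analysis subsection can be illustrated with a short sketch. The authors report using RStudio with the meta package; the Python snippet below is only a conceptual stand-in, assuming a DerSimonian-Laird random-effects pooling and a normal-approximation 95% CI — the function names and these modelling choices are illustrative, not the study's actual code.

```python
import numpy as np

def diagnostic_metrics(tp, tn, fp, fn):
    """Per-study accuracy, sensitivity and specificity from a 2x2 confusion table."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) with an I^2 heterogeneity estimate."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                      # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)   # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_star = 1.0 / (variances + tau2)        # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2
```

Given per-study effect sizes (for example, transformed proportions) and their variances, dersimonian_laird() returns a pooled value, an approximate interval and an I² estimate of the kind reported in the Results.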
Literature search
A total of 2013 records remained after duplicate removal. After screening of titles and abstracts, 1804 articles were excluded, and 209 articles were assessed in full-text review. Of these, 56 articles investigated speech signal processing for detecting depression but were excluded because they did not use DL algorithms. Another 57 articles were excluded because they applied not only speech samples but also other formats of data, including texts, sentiments, images, videos, and EEG. Finally, 71 articles were excluded because they did not report the TP, TN, FP, and FN values needed for the meta-analysis, and the remaining 25 articles were included in the systematic review. Some of them used the same dataset to explore the performance of different DL models, so we selected the studies with the highest accuracy score for each dataset for the meta-analysis, and 8 studies were included (Figure 1). Of all included studies, the Distress Analysis Interview Corpus-Wizard-of-Oz (DAIC-WOZ) set is the most used dataset (n = 16), and we used these studies to conduct the qualitative analysis from a technical perspective. 21 An upward trend in publications was observed over the past 3 years: 20 papers (80%) were published since 2022, and all eligible papers were published after 2019. Table 1 summarizes the main characteristics of all eligible papers; the ones in bold font were selected for the meta-analysis.
Datasets and languages
Several speech depression datasets were used to train models, and the speech was generally recorded during diagnostic conversations between clinicians and participants. DAIC-WOZ, which is part of the Distress Analysis Interview Corpus developed for the 2016 Audio-Visual Emotion Challenge (AVEC), is the most commonly used dataset in speech depression detection. 47 Besides, the Multimodal Open Dataset for Mental-disorder Analysis (MODMA), 48 the Hungarian Depression Speech Database, 42 Sonde Health Free Speech (SH2-FS), 49 three Mandarin datasets, 40,43,44 and one Thai dataset recruited by researchers 39 were also used in the included studies. Some studies used more than one dataset to test the performance of their proposed model. Among all included studies, 17 studies (68%) used English datasets, 5 studies (20%) used Mandarin datasets, 2 studies (8%) used Hungarian datasets, and only one used a Thai dataset.
Diagnostic scales
In speech depression datasets, diagnostic scale scores are set as training labels. The PHQ-8 score of each participant was recorded in DAIC-WOZ, and a score of 10 was set as the threshold to decide whether the participant was diagnosed with depression or not. 50 Besides, other questionnaires, such as the Hamilton Rating Scale for Depression (HAMD), 51 the Beck Depression Inventory-II (BDI-II), 52 and the PHQ-9, 53 were also used as assessments to detect depression.
Speech processing
In the development of an automatic speech recognition system, preprocessing is considered the first phase in training a robust and efficient model. 54 Fourteen studies (87.5%) mentioned at least one speech preprocessing procedure, with 50% of the papers applying various methods to tackle data imbalance, as shown in Figure 2A. This result is unsurprising, given that the DAIC-WOZ dataset consists of 146 depressed subjects and only 43 healthy participants, highlighting the critical issue of data imbalance for achieving good performance. Speech segments of varying lengths are used as inputs to the models, enriching the dataset and accommodating DL models (Figure S12A). Yin and colleagues segmented the speech into 9-second fragments, achieving the highest performance across all studies, with 0.94 in accuracy and 0.92 in sensitivity. 31 Moreover, fragments over 10 seconds in length exhibited the highest specificity (Table 2). In Figure S12B, we can see that all studies employed either a train-test split or a train-validation-test split to prevent overfitting to the training data and to assess the model's performance accurately.
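As an illustration of the fixed-length segmentation step discussed above, the hypothetical Python helper below splits a mono waveform into equal-length fragments; the 9-second length and 16 kHz sampling rate are illustrative choices echoing the reviewed studies, not prescribed parameters.

```python
import numpy as np

def segment_waveform(waveform, sr=16000, seg_seconds=9.0, drop_last=True):
    """Split a mono waveform (1-D numpy array) into fixed-length fragments.

    Mirrors the kind of segmentation reported in the reviewed studies
    (e.g. ~9 s fragments); seg_seconds and sr are illustrative values.
    """
    seg_len = int(seg_seconds * sr)
    n_full = len(waveform) // seg_len
    segments = [waveform[i * seg_len:(i + 1) * seg_len] for i in range(n_full)]
    if not drop_last and len(waveform) % seg_len:
        tail = waveform[n_full * seg_len:]
        # zero-pad the final fragment so every segment has equal length
        segments.append(np.pad(tail, (0, seg_len - len(tail))))
    return np.stack(segments) if segments else np.empty((0, seg_len))
```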
Feature engineering is one of the most crucial steps in traditional machine-learning-based speech depression detection research, and the main purpose of many studies considered in this review is to avoid this step by developing DL for automatic feature learning. 26,27,35,55 Based on the results shown in Table 2, it is evident that LLD- and MFCC-based models achieved over 80% accuracy, surpassing other types of features. Besides, 3 included studies compared the performance of depression detection using acoustic features alone with that of multimodal depression detection. 24,25,35 All these 3 studies show that using multimodal features enhances performance over speech-only depression detection models.
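For readers unfamiliar with handcrafted descriptors such as MFCCs, the following sketch shows one common way to extract them with librosa; the 13 coefficients and mean/standard-deviation pooling are assumptions for illustration and do not reproduce any particular study's feature set.

```python
import numpy as np
import librosa

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Extract MFCCs (plus a simple per-utterance summary) from a speech file.

    n_mfcc=13 and mean/std pooling are illustrative choices; the reviewed
    studies differ in the exact low-level descriptors and pooling they use.
    """
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape: (n_mfcc, frames)
    # frame-level features can feed a CNN/LSTM; a handcrafted pipeline often
    # pools them into one fixed-length vector per recording instead
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```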
Deep learning methodology
Compared with clinical diagnosis, DL algorithms can learn high-level features automatically. In this review, we divided the DL models used in the included studies into the following groups: convolutional neural network (CNN), CNN-long short-term memory (LSTM), CNN-support vector machine (SVM), LSTM, and CNN-Transformer. Figure 2B shows that the CNN is the most commonly used DL algorithm, with 56.25% of the studies using it directly as the depression detection model. Additionally, 25% of the studies employed a CNN as a feature extraction or dimension reduction method, followed by the use of an LSTM (12.5%) or SVM (12.5%) as the classifier for depression detection. The CNN-Transformer architecture shows the highest performance among all studies (Table 2), which indicates that the Transformer holds promising potential for depression detection using speech data. 31 In Figure 3, we present a visualization of the distribution of hyperparameters in the DL models. As shown in Figure 3, 50% of the studies did not report batch size, 50% did not report the number of epochs, 50% did not report the learning rate, 56.25% did not report the loss function, and 43.75% did not report the optimizer. These hyperparameters may affect a model's performance to some extent, highlighting the importance of selecting appropriate hyperparameters. The number of neural network layers did not exceed 5 in most of the studies (62.5%) under consideration (Figure 3A), and such studies achieved the highest accuracy, which is 0.79 (Table 2). Cross-entropy was the most commonly used (31.25%) loss function, outperforming mean square error in terms of accuracy (0.84), sensitivity (0.70), and specificity (0.91) (Figure 3E and Table 2).
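A minimal sketch of the kind of CNN classifier discussed above is given below in PyTorch. The layer sizes, kernel sizes and input representation are illustrative assumptions and do not correspond to the architecture of any specific included study; only the overall pattern (stacked convolutions over a spectrogram, cross-entropy loss, Adam optimizer) reflects the choices most often reported.

```python
import torch
import torch.nn as nn

class SpeechDepressionCNN(nn.Module):
    """Minimal CNN over log-mel spectrogram patches (binary depressed/control).

    Layer and kernel sizes are illustrative, not taken from any reviewed study.
    """
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # collapse time/frequency axes
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, 1, n_mels, n_frames)
        h = self.features(x).flatten(1)
        return self.classifier(h)

# cross-entropy loss and Adam were the most commonly reported choices
model = SpeechDepressionCNN()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```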
Evaluation measures
The types of performance metrics used by the included studies focusing on speech depression detection are shown in Figure 4A. Most studies (over 80%) used the F1-score, accuracy, recall, and precision, which are derived from confusion matrices, to evaluate the performance of the DL models, but these metrics are not commonly used by clinicians when evaluating diagnostic tests. Instead, sensitivity (recall), specificity and the ROC AUC, which are also derived from diagnostic test results, are clinically relevant and commonly used performance measures for diagnostic tests. Based on Figure 4B, we found that the included studies achieved good results in terms of accuracy and specificity (over 75%), but lagged slightly in sensitivity.
Comparison between deep learning and machine learning
Four studies compared their proposed methods with machine learning methods. 23,27,31,36 SVM is the most commonly used machine learning algorithm, and 3 studies compared their proposed methods with an SVM. 23,27,36 In all these 4 studies, the proposed DL methods performed better than the machine learning methods.
Summary
The papers reviewed displayed varying levels of performance in speech depression detection. (1) Data preprocessing: the segmentation length of speech varied across papers, with notable performance achieved using longer segments (more than 5 s). (2) Features: the preference for DL models suggests a shift away from traditional feature engineering, with promising results observed particularly with LLDs and MFCCs. (3) Models: the CNN emerges as the predominant choice among DL architectures for depression detection, with the CNN-Transformer demonstrating the highest performance. While hyperparameters significantly impact model performance, many studies lack specificity in their selection, underscoring the importance of fine-tuning for optimal results. (4) Evaluation: overall, the models using the DAIC-WOZ dataset generally achieved good accuracy and specificity (over 75%), while sensitivity lagged slightly.
Diagnostic accuracy of deep learning in depression detection
Overall, 8 studies with 670 585 preprocessed speech samples in the test sets were included in the meta-analysis, and all these studies were published in the last 5 years (2021-2024). Our study reports the evaluation parameters of accuracy, sensitivity, and specificity. The pooled estimate of classification accuracy for depression detection models was 0.87 (95% CI, 0.81-0.93, I² = 99%). The meta-analysis showed the pooled estimate to be specific (0.85, 95% CI, 0.78-0.91, I² = 99%), but with a lower sensitivity (0.82, 95% CI, 0.71-0.94, I² = 100%). The random-effects model was used due to the high heterogeneity in the meta-analysis. Figure 5 presents the accuracy forest plots of all included studies. The forest plots of the pooled sensitivity and specificity can be found in the supplementary files (Figures S1 and S2). The SROC curve for the test is also shown in the supplementary files (Figure S3).
Sensitivity analysis
Considering the high heterogeneity, subgroup analysis was undertaken to uncover potential factors. I² dropped significantly in specificity (from 100% to 0%), accuracy (from 100% to 78%), and sensitivity (from 100% to 88%) for the end-to-end group in the model structure subgroup. In this situation, the accuracy and the specificity of the handcrafted group (accuracy: 0.89, 95% CI, 0.81-0.97, I² = 100%; specificity: 0.87, 95% CI, 0.78-0.96, I² = 99%) were higher than those of the end-to-end group (accuracy: 0.82, 95% CI, 0.75-0.90, I² = 78%; specificity: 0.80, 95% CI, 0.75-0.85, I² = 0%), but the sensitivity of the end-to-end group (0.84, 95% CI, 0.73-0.95, I² = 88%) was higher than that of the handcrafted group (0.81, 95% CI, 0.64-0.99, I² = 100%). The forest plot of the pooled accuracy for the model structure subgroup is shown in Figure 6, and the forest plots of the pooled sensitivity and specificity for the model structure subgroup can be found in the supplementary files (Figures S4 and S5). Since the speech samples in some included studies were segmented from audio recordings, the sample size in one study (n = 663 978) was extremely large compared with the others. A leave-one-out test was conducted to minimize the influence of that particular study. 34 While omitting each study, the pooled estimates of accuracy (0.85-0.89), sensitivity (0.80-0.87), and specificity (0.82-0.87) changed little. The plots of the leave-one-out results for the pooled accuracy, sensitivity, and specificity can be found in the supplementary files (Figures S6-S8).
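The leave-one-out check can be sketched by re-pooling the estimate with each study removed in turn; the helper below reuses the hypothetical dersimonian_laird() function from the earlier sketch and is, again, only an illustration of the procedure rather than the authors' R code.

```python
def leave_one_out(effects, variances):
    """Re-pool the estimate with each study omitted in turn (sensitivity check)."""
    results = []
    for i in range(len(effects)):
        eff = [e for j, e in enumerate(effects) if j != i]
        var = [v for j, v in enumerate(variances) if j != i]
        pooled, ci, i2 = dersimonian_laird(eff, var)   # helper defined earlier
        results.append({"omitted": i, "pooled": pooled, "ci": ci, "I2": i2})
    return results
```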
Quality assessment
QUADAS-2 was used to rate the overall methodological quality in our study, and Figures S9 and S10 present the plots illustrating the risk of bias and applicability concerns. The included studies achieved an average score of 3.3 out of 4 in the risk-of-bias section and 3.1 out of 4 in the applicability-concerns section, thereby affirming the high quality of the studies. The funnel plot (Figure S11) was slightly asymmetric, indicating modest publication bias among the included studies. Specifically, the shape of the plot suggests that studies with smaller data sizes and low accuracy were less likely to be published.
Summary of key findings
To our knowledge, our study is the first review of the diagnostic performance of DL for depression detection from speech samples providing both a systematic review (narrative summary of 25 studies) and a meta-analysis (quantitative assessment of a subset of 8 studies). We found that, across all included studies, the pooled estimate of accuracy for depression detection was 0.87, and the specificity (0.85) was higher than the sensitivity (0.82). The handcrafted model structure obtained better evaluation results (accuracy: 0.89) in the subgroup analysis than the end-to-end model structure (accuracy: 0.82).
Speech features for depression detection
A recent review found that a set of bio-acoustic features, including source, spectral, prosodic, and formant features, could improve the classification performance for depression detection. 56 In addition, Zhao et al reported that acoustic characteristics were associated with the severity of depressive symptoms and might be objective biomarkers of depression. 57 These findings are consistent with the present study, in which the handcrafted model structure gave better performance than the end-to-end model structure. This is because the handcrafted model structure incorporates various kinds of selected acoustic information, such as source and formant features. Besides, our results showed that acoustic features are promising, reliable, and objective biomarkers to support depression diagnosis using DL.
Superior performance of deep learning in depression detection
A recent systematic review (but not meta-analysis) suggested that SVM was the most popular classifier used among all machine learning (ML) methods in depression detection. 58 Bhadra and colleagues merged DL techniques into a single classifier group to compare with other ML algorithms owing to the limited number of accessible studies, which gave a comprehensive description of all ML algorithms but left room for further research on DL. 58 In the present review, some included studies confirmed that DL surpasses previous ML methods for the automated diagnosis of depression, such as SVM, Random Forest, and Gradient Boosting Tree. 27,36,40 As mentioned in the present review, the prevailing emphasis lies on CNN models, and it may be beneficial to explore more DL methods for depression detection. Although DL has less interpretability than other computational methods, it has shown great potential to assist in the diagnosis of depression.
Deep learning model structure strategies
Wu and colleagues summarized in their systematic survey that DL for depression detection can be built with two structures: (1) extract handcrafted acoustic features and then apply classification methods; or (2) feed raw audio or spectrograms into an end-to-end DL architecture that performs both feature extraction and classification by itself. 13 To explore the performance of these two structures, we applied the subgroup analysis of model structure in the meta-analysis. The pooled estimates of depression detection performance for the handcrafted structure were higher than for the end-to-end structure, which provides evidence that the good performance of DL may rely on the choice of model structure. Owing to its lack of interpretability, the application of end-to-end deep models to real-world clinical problems is still limited.
Future development of deep learning
Applying DL algorithms to speech samples to support the clinical diagnosis of depression disorders is novel, but still needs further development. First, the performance of automatic speech depression detection models may be influenced by different languages, cultures, and environments. 60,61 Second, due to the difficulties and privacy issues of collecting depression speech, issues of small sample size and data imbalance need to be solved before training a DL model. Third, the outperformance of CNN-related models may be partly explained by the common interest in CNNs, since most studies included in the systematic review focused on optimizing parameters for CNN-related algorithms. Therefore, the performance of other DL algorithms remains to be deciphered. Fourth, the explainability of DL models is a limitation in speech depression detection. It is difficult to understand how decisions are made by DL, which is crucial for gaining trust and acceptance in clinical settings.
Clinical and research implication
The increasing prevalence of depression is a significant burden that could overwhelm the capacity of mental health services. Although automated depression detection allows wide screening of a larger population and could ameliorate the increasing demand placed on health services, these techniques should still be used as supplementary methods to detect early signs of depression. Despite the positive attitudes of clinicians toward diagnosis-support techniques, rolling out such novel applications on a wider scale remains challenging until knowledge of DL is obtained and experience is acquired in using those techniques in the diagnosis of depression. 62 Therefore, future research should better involve physicians to improve the feasibility of these techniques and should include clinical trials to further explore the utility of diagnosis-support tools. Besides, since speech is easy to collect using smartphones, future research can focus on implementing remote monitoring on smartphones to obtain valuable information on real-time response and relapse, support physicians' decisions, and generate immediate diagnostic feedback.
Source of heterogeneity
The pooled results in the meta-analysis showed significant heterogeneity among the studies. There may be many reasons for this, including the various sample sizes resulting from speech segmentation, different speech languages and cultures, and different methodologies. In this study, we analyzed subgroup and leave-one-out results to explore the sources of heterogeneity. I² dropped significantly in specificity when dividing studies based on model structure (from 100% to 0%), which indicates that model structure might be the major cause of heterogeneity. Besides, heterogeneity was slightly lower in specificity when omitting the study with the largest sample size, 34 providing evidence that the speech segmentation methods and the speech sample sizes also influenced the heterogeneity.
Limitations
Our study has several limitations. First, only a limited number of studies were included in the systematic review because most studies did not report the original TP, TN, FP, and FN values, and this may lead to underpowered pooled estimates. An updated meta-analysis could be performed in the future, when sufficient source studies are available, to make the results more robust. Second, most included studies used the same dataset, so we selected the best-performing model from each dataset to ensure the validity and reliability of the meta-analysis. The limited number of studies in the meta-analysis made it difficult to stratify the studies into different subgroups to explore the sources of heterogeneity. Third, we did not conduct the meta-analysis based on AUROC scores, which are usually used to describe the performance of classification models, since only 3 of the included studies reported AUROC scores. 29,41,44
Conclusions
We conducted a comprehensive systematic review and meta-analysis on the application of DL algorithms to speech for detecting depression. The review confirms that using DL on speech to support the clinical diagnosis of depression is a promising method with excellent performance. A CNN model with handcrafted acoustic features, trained on an appropriately balanced dataset, was shown to be the best method for depression detection. Further studies could focus on multilingual and cross-lingual speech depression detection, DL algorithm exploration and optimization, and multimodal feature combination. In addition, researchers should report diagnostic evaluation measures, such as sensitivity and specificity, to allow DL results to be interpreted in real-world clinical settings.
Figure 1 .
Figure 1. PRISMA flowchart. Study selection for the systematic review and meta-analysis.
Figure 2 .
Figure 2. Speech preprocessing and deep learning models. (A) The number of studies that used preprocessing steps, such as removing silence. (B) The number of studies that used different types of DL models.
Figure 3 .
Figure 3. Hyperparameter choices. (A) Distribution of the number of neural network layers. (B) The number of studies that used different batch sizes. (C) Distribution of the number of epochs. (D) The number of studies that used different learning rates. (E) The number of studies that used different loss functions. (F) The number of studies that used different optimizers.
Figure 4 .
Figure 4. Model performance evaluation. (A) The number of studies that used different evaluation methods. (B) The boxplot across studies of accuracy, sensitivity, and specificity.
Figure 5 .
Figure 5. Forest plot for the pooled accuracy.
Figure 6 .
Figure 6. Forest plot of the pooled accuracy for the model structure subgroup.
Table 1 .
Characteristics of the included studies.
Table 2 .
Performance comparison of characteristics of the studies using the DAIC-WOZ dataset. Notes: Acc., Accuracy; Sens., Sensitivity; Spec., Specificity. The best performance for each characteristic of the studies is shown in bold font. | 5,901 | 2024-07-16T00:00:00.000 | [
"Computer Science",
"Psychology",
"Medicine"
] |
INFLATION FORECASTING IN THE WESTERN BALKANS AND EU: A COMPARISON OF HOLT-WINTERS, ARIMA AND NNAR MODELS
The purpose of this paper is to compare the accuracy of three types of models: Autoregressive Integrated Moving Average (ARIMA) models, Holt-Winters models and Neural Network Auto-Regressive (NNAR) models in forecasting the Harmonized Index of Consumer Prices (HICP) for the countries of the European Union and the Western Balkans (Montenegro, Serbia and North Macedonia). The models are compared based on the values of ME, RMSE, MAE, MPE, MAPE, MASE and Theil's U for the out-of-sample forecast. The key finding of this paper is that NNAR models give the most accurate forecast for the Western Balkan countries, while the ARIMA model gives the most accurate forecast of twelve-month inflation in EU countries. The Holt-Winters (additive and multiplicative) method proved to be the second-best method for both groups of countries. The obtained results correspond to the fact that the European Union has been implementing a policy of strict inflation targeting for a long time, so the ARIMA models give the most accurate forecast of future inflation values. In the countries of the Western Balkans the targeting policy is not implemented in the same way, and the NNAR models are better for inflation forecasting.
Introduction
Price stability is one of the goals of all countries, especially the countries of the European Union and its potential members, as documented through the political agendas of the European Union and the Maastricht convergence criteria (Golinelli and Orsi, 2002). Predicting the future value of inflation is of particular importance for all countries, whether or not they have clearly declared inflation targeting policies. Historically, the European Union has successfully pursued a policy of inflation targeting and has not had high inflationary developments in the past, while the countries of the Western Balkans aspiring to become members of the European Union had high inflation during the late twentieth century, caused by numerous factors. The countries of the Western Balkans have diverse foreign exchange systems: Montenegro is a dollarized (euroized) economy, while Serbia and North Macedonia have their own currencies. The Harmonized Index of Consumer Prices (HICP) monitored for the European Union covers a number of countries with diverse foreign exchange systems.
The paper examines the possibilities for applying Auto-Regressive Integrated Moving Average (ARIMA), Holt-Winters and Neural Network Auto-Regressive (NNAR) models to inflation forecasting for three countries of the Western Balkans (Montenegro, Serbia, North Macedonia) and the full member countries of the European Union over the observed 2010-2020 period. A dual possibility of comparing the models is considered: first, models are compared within their own class, and then the best models from each class are compared with each other.
There are many models that can be used for inflation modeling and forecasting. In this paper, the possibility of inflation modeling and the accuracy of its forecasting with univariate models are tested. The forecast of future values is based only on historical inflation data. This type of model often gives a faster and more accurate forecast compared with more complex factor models. This is a consequence of the unpredictability and the impossibility of accurately measuring and evaluating numerous factors, as well as of their erroneous specification in the models of traditional econometric analysis. An additional limitation is the need to predict the values of all determinants of the observed series, which further complicates the work and leads to inaccuracies in forecasting.
The paper is organized as follows. A literature review is presented in the next section. The third section reviews the basic methodological bases of development and specifications of econometric models. The fourth section presents the empirical analyses and comparison of estimated models. Finally, conclusions are presented in the fifth section.
Literature review
Today, in modern monetary theory and central banking practice price stability is usually associated with moderate price growth (Ascari and Sbordone, 2014). The level and degree of change in inflation have always been an interesting research topic for many researchers. Researchers' interest in the field reflected the current level of development of empirical apparatus for forecasting time series. Meyler, Kenny and Quinn (1998) forecast inflation in Ireland by comparing the ARIMA models obtained using the Box-Jenkins methodology and objective penalty function methods. Pufnik and Kunovac (2006) give a forecast of short-term inflation based on the ARIMA model by observing the consumer price index (CPI) in Croatia. Other papers also deal with inflation forecasting using the ARIMA model with the most frequent use of Box-Jenkins methodology in model evaluation (Alnaa and Ahiakpor, 2011;Okafor and Shaibu, 2013). A comparison of the power to predict inflation by using the Holt-Winters and ARIMA models is presented by Omane-Adjepong, Oduro and Oduro (2013).
In the literature special attention is given to comparisons of more advanced prognostic models, such as comparisons of ARMA, ARIMA and GARCH models (Nyoni, 2018), comparison of VAR and ARIMA models in HICP prognosis in Austria (Fritzer, Moser and Scharler, 2002), comparison of ARIMA, VAR and ECM models (Uko and Nkoro, 2012). Suhartono (2005) compares the prognostic performances of the Neural Networks, ARIMA and ARIMAX models in inflation forecasting in Indonesia, where it is concluded that the Neural Networks model gives a more accurate inflation forecast compared to traditional econometric time series models. Sari, Mahmudy and Wibawa (2016) give an inflation forecast using the Backpropagation Neural Network method. McNelis and McAdam (2004) apply linear and neural network-based "thick" models for inflation forecast in the USA, Japan and in the euro area. Hubrich (2005) studies inflation in the European Union measured by the change in HICP and investigates whether the forecasting accuracy of forecasting aggregate euro area inflation can be improved by aggregating forecasts of sub-indices of the HICP as opposed to forecasting the aggregate HICP directly.
ARIMA models
ARIMA models are particularly suitable for short-term forecasts, and the model evaluation methodology is the result of the work of Box and Jenkins (1976). A seasonal ARIMA(1,1,1)(1,1,1)12 model for a monthly series $y_t$ can be written, using the backshift operator $B$, as
$$(1 - \phi_1 B)(1 - \Phi_1 B^{12})(1 - B)(1 - B^{12})\, y_t = (1 + \theta_1 B)(1 + \Theta_1 B^{12})\, \varepsilon_t .$$
In seasonal ARIMA models, p represents the number of autoregressive elements, d is the level of series differentiation, q is the number of moving average elements, while P is the number of seasonal autoregressive elements, D is the number of seasonal differences and Q is the number of seasonal moving average elements.
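A hedged sketch of fitting such a seasonal ARIMA model in Python with statsmodels is shown below; the paper itself worked in R, and the fixed d = D = 1 differencing and the small AIC search grid are illustrative assumptions rather than the authors' exact selection procedure.

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_best_sarima(log_hicp, max_order=1, s=12):
    """Grid-search a small SARIMA(p,1,q)(P,1,Q)_12 space by AIC.

    `log_hicp` is assumed to be a monthly pandas Series of log HICP values;
    the search ranges are illustrative, not the authors' exact grid.
    """
    best = None
    for p in range(max_order + 1):
        for q in range(max_order + 1):
            for P in range(max_order + 1):
                for Q in range(max_order + 1):
                    try:
                        res = SARIMAX(log_hicp, order=(p, 1, q),
                                      seasonal_order=(P, 1, Q, s)).fit(disp=False)
                    except Exception:
                        continue
                    if best is None or res.aic < best.aic:
                        best = res
    return best

# 12-month out-of-sample forecast with 80% and 95% intervals, e.g.:
# res = fit_best_sarima(train_series)
# fc = res.get_forecast(steps=12)
# mean, ci80, ci95 = fc.predicted_mean, fc.conf_int(alpha=0.2), fc.conf_int(alpha=0.05)
```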
Holt-Winters models
Holt (2004) and Winters (1960) have developed a method for forecasting time series that can successfully capture the level, trend and seasonality in the series. When forecasting
future values of a time series, values that are closer to the current one are more important than previous values that are further away. The method can be used for short-, medium- and long-term forecasts. There are two types of methods in relation to the nature of the time series: additive and multiplicative models. The additive method is used when seasonal fluctuations are at approximately the same level throughout the time series, while the multiplicative method is used when seasonal variations change in proportion to the level of the series.
Equations for the additive method:
$$\ell_t = \alpha (y_t - S_{t-s}) + (1-\alpha)(\ell_{t-1} + b_{t-1}),\quad b_t = \beta (\ell_t - \ell_{t-1}) + (1-\beta) b_{t-1},$$
$$S_t = \gamma (y_t - \ell_{t-1} - b_{t-1}) + (1-\gamma) S_{t-s},\quad \hat{y}_{t+h|t} = \ell_t + h b_t + S_{t+h-s}.$$
Equations for the multiplicative method:
$$\ell_t = \alpha \frac{y_t}{S_{t-s}} + (1-\alpha)(\ell_{t-1} + b_{t-1}),\quad b_t = \beta (\ell_t - \ell_{t-1}) + (1-\beta) b_{t-1},$$
$$S_t = \gamma \frac{y_t}{\ell_{t-1} + b_{t-1}} + (1-\gamma) S_{t-s},\quad \hat{y}_{t+h|t} = (\ell_t + h b_t)\, S_{t+h-s},$$
where $y_t$ is the observed series, s is the length of the seasonal cycle, $\ell_t$ gives the level of the series, $b_t$ represents the trend, $S_t$ is the seasonal component (indexed here for forecast horizons up to one seasonal cycle), 0 ≤ α ≤ 1, 0 ≤ β ≤ 1, 0 ≤ γ ≤ 1, and $\hat{y}_{t+h|t}$ is the forecast for h periods ahead.
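For completeness, a minimal sketch of fitting the additive or multiplicative Holt-Winters model in Python is given below; the authors worked in R, so the statsmodels call is an assumed equivalent, with the smoothing parameters chosen by the library's internal optimization rather than set manually.

```python
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def holt_winters_forecast(series, seasonal="add", horizon=12, m=12):
    """Fit an additive ('add') or multiplicative ('mul') Holt-Winters model.

    `series` is assumed to be a monthly pandas Series; parameter names follow
    statsmodels, not the notation of the equations above.
    """
    model = ExponentialSmoothing(series, trend="add", seasonal=seasonal,
                                 seasonal_periods=m)
    fit = model.fit()            # alpha, beta, gamma chosen by minimizing the SSE
    return fit.forecast(horizon)

# the additive and multiplicative variants can then be compared on out-of-sample errors:
# fc_add = holt_winters_forecast(train, seasonal="add")
# fc_mul = holt_winters_forecast(train, seasonal="mul")
```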
Neural Network Auto-Regression models
NNAR models are a more recent approach that allows modeling of complex relationships between inputs and outputs. In the case of the NNAR model, previously lagged values of the observed time series serve as inputs for forecasting future values. Due to the seasonality of the observed HICP time series, the NNAR(p, P, k)m model will be used. Figure no. 1 represents an NNAR model with one hidden layer and k hidden neurons, also known as a multilayer feed-forward network, where each layer of nodes receives inputs from the previous one and sends its output on to the next one (Hyndman and Athanasopoulos, 2018). The inputs to each node are combined using a linear combination function. The results are then modified by a nonlinear function and forwarded. The linear combination for node j is formulated as $z_j = b_j + \sum_i w_{i,j} x_i$; it is then modified by a non-linear function, such as a sigmoid, and sent to the next layer. This aims to reduce the effects of extreme values and to make the network more robust to them. The notation p represents the number of lagged autoregressive components of the model, while P represents the number of lagged seasonal autoregressive components of order m. The notation k represents the number of nodes in the hidden layer.
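NNAR models of this kind are typically estimated in R with nnetar(); the sketch below is a rough Python analogue, assuming lagged and seasonally lagged values feeding a single-hidden-layer network, and it simplifies away the scaling and the ensembling of repeated fits that the R implementation performs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_nnar(y, p=1, P=1, k=2, m=12):
    """Rough NNAR(p, P, k)_m analogue: lagged values feed a one-hidden-layer net."""
    lags = list(range(1, p + 1)) + [m * j for j in range(1, P + 1)]
    max_lag = max(lags)
    X = np.column_stack([y[max_lag - lag:len(y) - lag] for lag in lags])
    target = y[max_lag:]
    net = MLPRegressor(hidden_layer_sizes=(k,), activation="logistic",
                       max_iter=5000, random_state=0).fit(X, target)
    return net, lags

def forecast_nnar(net, lags, history, horizon=12):
    """Iterated one-step forecasts: each prediction is appended to the history."""
    hist = list(history)
    out = []
    for _ in range(horizon):
        x = np.array([hist[-lag] for lag in lags]).reshape(1, -1)
        y_hat = float(net.predict(x)[0])
        out.append(y_hat)
        hist.append(y_hat)
    return np.array(out)
```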
Comparison of forecast accuracy
The comparison of the evaluated models for each of the observed time series will be performed on the basis of 7 criteria:
Mean error: $ME = \operatorname{mean}(e_t)$
Root mean square error: $RMSE = \sqrt{\operatorname{mean}(e_t^2)}$
Mean absolute error: $MAE = \operatorname{mean}(|e_t|)$
Mean percentage error: $MPE = \operatorname{mean}(100\, e_t / y_t)$
Mean absolute percentage error: $MAPE = \operatorname{mean}(|100\, e_t / y_t|)$
Mean absolute scaled error: $MASE = \operatorname{mean}(|q_t|)$, where $q_t$ is the forecast error scaled by the in-sample mean absolute error of the naive (no-change) forecast
Theil's U statistic: $U = \sqrt{\sum_t \big((\hat{y}_{t+1} - y_{t+1})/y_t\big)^2 \,\big/\, \sum_t \big((y_{t+1} - y_t)/y_t\big)^2}$
where $e_t = y_{t+1} - \hat{y}_t(1)$ represents the forecast error, i.e. the difference between the actual and the predicted value of the time series. Smaller values of the forecast accuracy statistics correspond to a better forecast model. In theory there are no exact limits for the values of forecast statistics that separate good from bad models. Therefore, we take the criterion that the best model has lower values of all statistics compared with its competitors. The best model minimizes all of the ME, RMSE, MAE, MPE, MAPE, MASE and Theil's U values.
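The criteria above translate directly into code; the function below computes them for an out-of-sample forecast, assuming the standard definitions of MASE (scaled by the in-sample naive MAE) and of Theil's U used in common forecasting packages, which may differ in detail from the exact variants the authors applied.

```python
import numpy as np

def forecast_accuracy(actual, forecast, train):
    """Out-of-sample accuracy statistics (smaller is better).

    `train` is the estimation sample, used for the MASE scaling and implicitly
    for the naive no-change benchmark; these are assumed, standard definitions.
    """
    actual, forecast, train = map(np.asarray, (actual, forecast, train))
    e = actual - forecast
    pe = 100.0 * e / actual
    scale = np.mean(np.abs(np.diff(train)))          # in-sample naive MAE
    q = e / scale
    rel_change = (forecast[1:] - actual[1:]) / actual[:-1]
    naive_change = (actual[1:] - actual[:-1]) / actual[:-1]
    return {
        "ME": np.mean(e),
        "RMSE": np.sqrt(np.mean(e ** 2)),
        "MAE": np.mean(np.abs(e)),
        "MPE": np.mean(pe),
        "MAPE": np.mean(np.abs(pe)),
        "MASE": np.mean(np.abs(q)),
        "TheilU": np.sqrt(np.sum(rel_change ** 2) / np.sum(naive_change ** 2)),
    }
```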
The database for this research consists of time series of the Harmonized Index of Consumer Prices (HICP). The data are presented on a monthly basis for the period from January 2010 to March 2020. The R software package was used for the analysis.
Results and discussion
Based on the values of the ADF and PP test statistics and the calculated p-values for all four data series, it can be concluded that the null hypothesis cannot be rejected and that the series have at least one unit root, i.e. that they are not stationary. We reach the same conclusion on the basis of the KPSS test statistics and p-values, where we reject the null hypothesis that the series is stationary in favour of the alternative that the series have at least one unit root. In order to obtain stationary time series that can be used in further analysis, the first difference of all series was determined. The results of the ADF and PP tests proved that the first differences of all series are stationary, and that as such they can be used in the further Box-Jenkins procedure. The results of the applied KPSS test in the case of Serbia contradict the results of the other unit root tests. In order to meet the requirements of all three unit root tests, the time series needs to be differenced once again. The simultaneous use of a larger number of tests leads to better results due to the elimination of the shortcomings of individual tests. In this research the strictest criterion is used: fulfilling the conditions of all three unit root tests.
The ACF and PACF given in figure no. 4 are used to determine the order of the AR and MA components for each of the observed series. The results of the unit root tests on the first-differenced data are given in table no. 2. From the correlograms of all series, the presence of seasonal components is clearly noticeable, which will influence the further evaluation of seasonal ARIMA models, i.e. SARIMA models. Depending on the observed series, the SARIMA models will be of different orders, with or without seasonal differencing of the series. An appropriate SARIMA model is evaluated for each of the observed series, and information criteria are used to select the optimal model. The best model for each of the observed series is reported below.
Table no. 3: Identification and evaluation of the model in the case of Montenegro
In the case of the data for Serbia, we previously concluded that the series of first differences is not stationary, so the second difference of the original data series was determined and its stationarity examined. Based on the values of all unit root test statistics, it is confirmed that the second difference of the series is stationary.
Identification and evaluation of the model in the case of the European Union
After selecting the best models for each country separately by minimizing the values of the information criteria, it is necessary to examine the existence of autocorrelation between the residuals and the normality of their distribution. Based on the value of the Ljung-Box statistic and the corresponding p-value, with a 5% risk of error, the null hypothesis that all autocorrelation coefficients are statistically equal to zero can be accepted, even up to the 22nd lag. The residual distributions from all the described models can be approximated by the normal distribution. Hence, the models meet both required characteristics of model validity (absence of autocorrelation and normality of the residual distribution) and can be used in further analysis as a benchmark in the model comparison (Table no. 8). Models that meet all the required properties can be used to predict future time series values. Figure no. 5 gives 12-month forecasts of all four selected ARIMA models with 80% and 95% confidence intervals. Forecasts of the future values of the logarithm of the Harmonized Index of Consumer Prices represent the basis for calculating the forecast evaluation statistics in comparison with the actually realized values. The predicted values will be used not only for comparison with the actual values but also for comparison with the predicted values obtained by the Holt-Winters method and by the use of neural networks. A comparison of the forecasting methods will be given separately for each of the observed countries.
Holt-Winters forecasting
The type of time series is of special importance for the forecast of future values of the observed series when using the Holt-Winters method. Forecasts for both types of methods (additive and multiplicative) are given in the analysis, and the choice of the better method was made on the basis of the forecast statistics. Data for all countries were observed separately, and the selection of the best forecast model was made by minimizing the forecast error. (Table no.
prognostic performance. The previously calculated forecast statistics and the choice of the better forecast method can also be clearly seen in figure no. 6, because those values are closer to the actual observed data that serve as test data.
Neural Network Auto-Regression forecasting
Using artificial neural networks, i.e. a special type of Neural Network Auto-Regression (NNAR) model, the models were evaluated and a forecast was then made for out-of-sample periods. NNAR(1,1,2)[12] models were evaluated for all observed time series, with one autoregressive element, one seasonal autoregressive element of order 12 and two neurons in the hidden layer. The estimated models are presented in table no. 10.
Table no. 10. Selected NNAR models
Data from January 2010 to March 2019 are used to estimate the models, while data for the next 12 months, until March 2020, are used to test the predictive power of the models. For the forecast of one period out of the sample, all available data are used; for the forecast of two periods ahead, the first forecast value is used in addition to the data from the sample. A twelve-month forecast was formed in the same way. The distribution of the residuals of the evaluated models can be approximated by the normal distribution, so the models can be used for forecasting.
Conclusions
Inflation targeting is one of the key goals of all European Union countries, as well as of the countries aspiring to become future members of the EU. It is of particular importance to all Western Balkan countries, which experienced severe hyperinflation at the end of the twentieth century. Bearing in mind the convergence criteria that the EU has set for its future member states, the dynamics of inflation measured by the change in the Harmonized Index of Consumer Prices (HICP), and its forecast, is a very important topic.
The purpose of this paper was to estimate adequate models for inflation forecasting and to compare their forecasting performance for the three Western Balkan countries (Montenegro, Serbia, North Macedonia) and the EU countries. The analysis was carried out using three methodologies: ARIMA, Holt-Winters and NNAR models. A comparison of the models for forecasting monthly inflation was performed at several levels. When comparing the models from the ARIMA class, the AIC information criterion was used. The Holt-Winters method comparison was performed based on the forecast error. An automatic estimation procedure was used to evaluate the NNAR models. After selecting the best model from each of the model classes, a final comparison of the forecasting performance of the models was made on the basis of the forecast errors. For all analyzed countries of the Western Balkans, the NNAR models give the best results in forecasting inflation. In the case of the European Union, the evaluated ARIMA model gave the best results. The Holt-Winters (multiplicative or additive) method is the second best in forecasting for all analyzed countries.
The results of the research represent a framework for further analysis and do not provide final solutions to this problem in the observed countries. It would be interesting to compare the possibilities of forecasting inflation for countries that have different legacies acting through psychological factors, even though such factors are not included in the analysis empirically. The models do not take into account other factors that determine inflation, and therefore forecast future values only on the basis of previous values of the observed phenomenon. Although the models give a very accurate forecast, for long-term forecasts they may show certain shortcomings due to their univariate nature. In a world characterized by numerous, rapid and sudden changes and a large number of factors and influences, even the forecasts of the best model that can be estimated can only give a rough picture of an always uncertain and challenging future. | 4,305 | 2021-05-01T00:00:00.000 | [
"Economics"
] |
Extended and local structural characterization of a natural and 800 °C fired Na-montmorillonite – Patagonian bentonite by XRD and Al/Si XANES
A structural characterization of a Patagonian bentonite and its corresponding heating product (800 °C) was carried out. The nature of the aluminum and silicon atoms was investigated using Al and Si K-XANES spectroscopy and compared with other well-known Al- and/or Si-containing materials. The studied material comes from the Lago Pellegrini area, Rio Negro Province, Argentina. The main crystalline phase was confirmed to be a Na-montmorillonite. The thermal behavior was studied by DTA-TG-DTG and XRD. A Rietveld-based quantification was performed as a complementary study. About 90% Na-montmorillonite (a clay mineral of the smectite group) content was confirmed and different impurities (quartz, gypsum and feldspars) were also quantified. The smectitic interlayer thermal displacement was determined to be between 15.49 and 9.81 Å. The XANES results allowed obtaining the Al IV/Al VI ratio and the local symmetry and distortions at the Si and Al sites. The tetra-coordination of silicon and the tetra- and hexa-coordination of aluminum were found in both materials. The expected high Al IV/Al VI ratio was confirmed and assessed for the thermally treated bentonite.
Introduction
Clay minerals are widely used in various industrial applications owing to their physicochemical properties such as high surface area and porosity, low specific gravity, adsorption and ionic exchange capacities, crystal morphology, composition hydration and swelling abilities, as well as their catalytic and other properties. These minerals are widely used in many industrial processes such as effective sources of protons in paper and ceramic industry; as suspending medium in saltwater drilling fluids, paints, and pharmaceuticals; as absorbents in pet litter, agricultural chemicals, water, and oil sorption industries; as well as in the cosmetic sector, among many more (Murray, 2000;Grim and Güven, 1978). The application of these materials in polymer industry is also very important (Liu, 2007). Most of these features of clay minerals can be improved and changed by making use of acid activation, soda activation, ion exchange, and thermal treatment processes (Barrer, 1989;Besq et al., 2003;Christidis, 1998;Grim and Güven, 1978;Komadel, 2016;Mahmoud, 1999;Reichle, 1985;Reis and Ardisson, 2003;Sarikaya et al., 2000;Tan et al., 2004).
Bentonite is a colloidal aluminosilicate rock derived from weathered volcanic ash, which is composed of more than 70% smectite. Accessory minerals such as quartz, opal, mica, feldspar, gypsum, calcite and zeolites are frequent (Grim and Güven, 1978). Smectite-group minerals (2:1 layer phyllosilicate clay minerals) have a structure composed of one sheet of octahedral (O) Al placed between two sheets of tetrahedral (T) Si, with some tetrahedral Si atoms substituted by Al atoms and/or octahedral atoms (Al3+ or Mg2+) substituted by atoms with a lower oxidation number (Grim and Güven, 1978). The net negative charge of the 2:1 (TOT) layers is balanced by exchangeable cations such as Na+ and Ca2+ located between the layers and around the edges (Komadel, 2016; Önal and Sarıkaya, 2007). The basal spacing, d(001), of air-dried smectites changes from 12.6 to 15.4 Å depending on the type and valence of the exchangeable cations.
Thus, the changes in the physical, chemical and mechanical properties of these clays upon thermal treatment depend on their mineralogy and crystal structures. Therefore, the utilization of these minerals as raw materials in possible applications requires knowledge of the detailed response of their properties to temperature. This determines the need for multiple characterization techniques to fundamentally understand both the native system and its thermal evolution, and the consequent changes in the physicochemical properties.
X-ray Absorption Spectroscopy (XAS) is a powerful technique for obtaining electronic and structural information. It consists in an element-specific method capable of providing quantitative information on the local coordination environment around absorbing atoms in crystalline or amorphous systems (Fendorf et al., 1994). The XANES (X-ray Absorption Near Edge Structure) region of the spectrum is sensitive to the valence state of the central atom and the geometry and types of surrounding atoms (Henderson et al., 2014). XANES, unlike XRD though complementary, is a technique of local-scale order.
These sets of techniques have optimum potential for the characterization of complex systems such as bentonite and the heated product prepared by thermal processing of the same bentonite. In this paper we propose to study the structural conformations in a commercial industrial-grade material (rock), as well as the local and long-range changes induced by thermal effects. In particular, we assess the structure of the Na dioctahedral montmorillonite present in a Patagonian bentonite and of the resultant material treated at 800 °C in air (meta-montmorillonite) using Differential Thermal Analysis (DTA), Thermogravimetric Analysis (TG), conventional X-ray powder diffraction (XRD) and aluminum and silicon K-edge XANES.
It must be considered that bentonite quality varies with the exploited working front, and there are other bentonite localities in Patagonia. The structural characterization presented here refers exclusively to the Na-montmorillonite that is the major mineral phase at the Lago Pellegrini bentonite deposit.
The studied material has a specific weight of 2.21 g/cm3 and an apparent specific weight of 0.95 g/cm3. It is commercialized in standard mesh #200. A 6 wt.% water suspension has a pH of 8.5; the Mohs hardness is between 1 and 1.5. The mean particle size of the de-agglomerated powder is below one micron. The chemical composition of a dried sample (evaluated by Inductively Coupled Plasma Atomic Emission Spectroscopy, ICP-AES), together with the mass loss after heating at 1000°C, is given in Table 1. A fired bentonite was also characterized, with the heating program set to ensure complete dehydration and dehydroxylation: the sample (20 g in a porcelain crucible) was fired in an electric furnace in air up to 800°C at a heating rate of 10°C/min with a 15 min dwell. By analogy with kaolinitic clays, the phase obtained by firing the montmorillonite clay mineral can be designated meta-montmorillonite (Zivica and Palou, 2015). The original and fired samples are labeled Bent-0 and Bent-800, respectively.
Identification and quantification of crystalline phases in the clay and fired materials were carried out by X-ray diffraction (XRD) (Philips 3020 with Cu-Kα radiation, Ni filter, at 40 kV-35 mA), with 0.04° steps of 2 s each over the 3-80° 2θ range. The quantification method employed was fully described in two previous works (Conconi et al., 2014; Serra et al., 2013). The XRD patterns were analyzed with the multipurpose profile-fitting program FullProf (Version 5.40, March 2014) (Rodríguez-Carvajal, 2001), including Rietveld refinement for phase quantification (Rietveld, 1969).
The effect of heat treatment was also evaluated by simultaneous thermogravimetric and differential thermal analysis (DTA-TG), carried out on a Rigaku Evo2 instrument at a heating rate of 10°C/min in Pt crucibles under air. The derivative of the TG curve (DTG) was also used for this purpose.
XANES experiments were performed at the Soft X-ray Spectroscopy (SXS) beamline of the Brazilian Synchrotron Light Laboratory (LNLS, Campinas, SP, Brazil). The beam was focused with a Ni mirror. For the Al K-edge, a YB66 monochromator was used, with a resolution of about 2 eV at a slit aperture of 2 mm; for the Si K-edge, an InSb(111) monochromator was used, with a resolution of about 1 eV at a slit aperture of 1 mm. The incident photon flux I0 was measured with an Au mesh located before the main chamber. Photon energies were calibrated using an Al (Si) metallic foil, setting the first inflection point to the K absorption edge of Al0 at 1559 eV (Si0 at 1839 eV). Spectra were acquired at room temperature at a chamber pressure of about 10−4 Torr. The aluminosilicate minerals were ground to a fine powder (standard mesh #200), and the powder samples were pressed uniformly on conductive carbon tape supported on a stainless-steel sample holder for the XANES measurements. The angle between the sample holder and the detector was 45°.
All spectra were processed by standard methods from the If/I0 signal, where If is the detected fluorescence intensity. Pre-edge background removal and normalization were performed with Athena (Ravel and Newville, 2005). To exclude self-absorption effects, the spectra obtained in fluorescence mode were compared with those obtained in Total Electron Yield (TEY) mode at the Si K-edge. After background subtraction and normalization, the characteristic resonance peaks in the XANES region were fitted with Gaussian functions, and the continuum step with arctangent functions (Outka and Stöhr, 1988), using WinXAS3.1 (Ressler, 1998).
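As an illustration of this fitting scheme, the Python sketch below builds a spectrum model from an arctangent continuum step plus a sum of Gaussian resonances and fits it by non-linear least squares. The function names, starting values and energies are illustrative assumptions, not the exact WinXAS3.1 procedure used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(e, amp, cen, sig):
    # single resonance peak
    return amp * np.exp(-0.5 * ((e - cen) / sig) ** 2)

def arctan_step(e, height, e0, width):
    # continuum (edge) step of the Outka & Stohr arctangent type
    return height * (0.5 + np.arctan((e - e0) / width) / np.pi)

def xanes_model(e, *p):
    # p = [step height, step position, step width, then (amp, cen, sig) per peak]
    y = arctan_step(e, *p[:3])
    for i in range(3, len(p), 3):
        y += gaussian(e, *p[i:i + 3])
    return y

# Hypothetical use on a normalized Si K-edge spectrum (energy in eV, mu normalized):
# p0 = [1.0, 1845.4, 1.0,     # edge step (guess)
#       1.5, 1846.5, 0.8,     # main 1s -> t2 resonance (guess)
#       0.4, 1853.0, 2.0]     # higher-energy multiple-scattering feature (guess)
# popt, pcov = curve_fit(xanes_model, energy, mu_norm, p0=p0)
```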
Results and discussion
3.1. Mineralogical analysis of the studied industrial Na-montmorillonite-Patagonian bentonite and the heated (800°C) material by XRD

Fig. 1 shows the XRD patterns of both studied materials, Bent-0 and Bent-800, in the 3-80° 2θ range; the principal reflections are labeled. The identified crystalline phases are listed in Table 2, together with the ideal formula and the matched PDF card from the International Centre for Diffraction Data. For the as-received bentonite (Bent-0), the principal crystalline phase corresponds to a smectite, more specifically Na-montmorillonite. As expected for this kind of mineral (Grim and Güven, 1978), it is accompanied by quartz, gypsum and feldspars (in this case, plagioclase). The results of the Rietveld-based quantification are shown in Table 3. The amount of montmorillonite (Mt) quantified was in accordance with expected values. The resulting Rwp values were below 30 in all cases, supporting the goodness of the refinements for this kind of material.
The detected crystalline phases are in agreement with similar studies (Volzone and Sanchez, 1993; Vallés and Impiccini, 1999). Gypsum dehydrates to anhydrite at intermediate temperatures (150-300°C); hence, anhydrite is expected in the heated bentonite.
The 001 smectite reflection was also analyzed. It shifted from 5.7° in Bent-0 to 9.0° in Bent-800, which can be explained by a decrease in the interlayer distance of the clay mineral due to the water losses observed in the thermal analysis. The interlayer distances estimated from Bragg's law are 15.49 Å for the montmorillonite in Bent-0 and 9.81 Å for the dehydrated montmorillonite in Bent-800, in agreement with reported values (Sarikaya et al., 2000; Sarı Yılmaz et al., 2013). The intensity and shape of the principal reflection of the dehydrated montmorillonite (Mt*) in the Bent-800 diffraction pattern indicate that the meta-montmorillonite structure retains a sufficiently high degree of crystallinity. This (lower) crystalline structure can be described by the local-scale X-ray spectroscopic characterization presented in the next section.
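For reference, the reported d(001) values can be reproduced from the 2θ positions with Bragg's law (n = 1) and the Cu-Kα wavelength used for the patterns; the snippet below is a minimal sketch of that calculation.

```python
import numpy as np

CU_KALPHA = 1.5406  # Angstrom, Cu-Kalpha wavelength of the reported XRD setup

def d_spacing(two_theta_deg, wavelength=CU_KALPHA):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    return wavelength / (2.0 * np.sin(theta))

print(d_spacing([5.7, 9.0]))  # ~ [15.49, 9.81] Angstrom for the Bent-0 and Bent-800 d(001)
```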
Thermal behavior analyses: DTA-TG-DTG.
These analyses revealed that dehydration and dehydroxylation proceed in two regions, 38-150°C and 150-720°C, at a heating rate of 10°C/min (see Fig. 2). The corresponding mass losses were 15.46% and 3.74% (total mass loss 17.70%). The mass loss of the dehydroxylation reaction is in good agreement with previous studies (Önal and Sarıkaya, 2007; Wang et al., 1990) and with the evaluated mass loss (Table 1). The dehydration of the studied bentonite was complex, with three overlapping stages, while its dehydroxylation occurred as a single-stage reaction; all of these are represented in the DTG curve, whose local minima are indicated in the figure. The third step of the dehydration process could be associated with gypsum dehydration, while the two initial stages correspond to loss of smectite interlamellar water.
The 2.40% SO3 content (Table 1) implies a higher gypsum content than that found by the Rietveld quantification; this methodology can carry significant uncertainties when quantifying minor minerals.
The endothermic dehydration stages can be observed in the DTA curve; the temperatures shown in the figure correspond to the simultaneous analyses and to those observed in similar materials (Önal and Sarıkaya, 2007; Wang et al., 1990). The "s"-shaped signal observed in the DTA between 920 and 1020°C corresponds to the recrystallization of the bentonite and the crystallization of new phases, respectively; no mass loss is associated with these processes. From this analysis it can be ensured that a sample heated to 800°C with the mentioned heating program is dehydrated and dehydroxylated. These processes result in important structural changes: the interlayer distance between TOT layers decreases because of the loss of both hydroxyl groups and interlayer water, while the TOT layer structure of the clay mineral remains in the meta-montmorillonite phase of the heated bentonite. This change was observed in the XRD analysis (see Fig. 1).
Si-K XANES analysis
Fig. 3 shows the normalized Si K-edge XANES spectra of the references (quartz, kyanite, kaolinite and pyrophyllite), and Fig. 4 shows the spectra of samples Bent-0 and Bent-800 and of amorphous silica (Am). For Bent-0 and Bent-800, the spectra are similar to the montmorillonite and bentonite spectra reported by Shaw et al. (2009). The absorption edge is at 1845.4 eV, shifted 0.3 eV to higher energy with respect to that of amorphous silica.
The main features of the spectra were analyzed by least-squares fitting, following analyses already reported in the literature (Andrini et al., 2016a; Li et al., 1994). Fig. 4 shows the fits for the spectra of amorphous silica (Am-SiO2), Bent-0 and Bent-800; Table 4 reports the main characteristics of each resonance for the three spectra (Li et al., 1993, 1994, 1996). Linear combinations (not shown here) of the quartz and Bent-0 spectra cannot reproduce the Bent-800 spectrum, and visual inspection shows that the quartz and Bent-800 spectra do not share common features. It is also worth noting the absence of peak A in both the Bent-0 and Bent-800 spectra. This peak arises from the transition of Si 1s electrons to the antibonding 3s-like state, Si 1s → a1 (Si 3s-3p), which is forbidden by the dipole selection rules (ΔL = ±1, ΔS = 0, ΔJ = ±1) and is therefore very weak; its occurrence is attributed to distortions in the system or to an increase in the coordination number (Li et al., 1993). Thus, it can be concluded that Si occupies undistorted tetrahedral sites in Bent-0, as expected, and also within the local-scale order (LSO) of the Bent-800 sample. This result indicates that the thermal treatment does not significantly modify the Si local environment.
From the information in Table 4, a decrease is observed in the intensities of all resonances in the Bent-800 spectrum relative to the corresponding intensities in the Bent-0 spectrum. The largest decrease is for the Si 1s → t2 (Si 3p-3s) transition, which probes the unoccupied Si 3p states (Shaw et al., 2009). This can be associated with the dehydroxylation undergone by the heated clay material: having lost H atoms, the O atoms have a greater availability of electrons to fill this t2 orbital.
Al-K XANES analysis
Fig. 5 shows the normalized Al K-edge XANES spectra of the reference compounds α-Al2O3, gibbsite, kyanite, kaolinite, and mullite. Fig. 6 shows the corresponding spectra and their fits for the Bent-0 and Bent-800 samples. Peak A is associated with the 1s → a1 (3s) transition for four-coordinate aluminum and the 1s → a1g (3s) transition for six-coordinate aluminum; both are dipole-forbidden transitions, and their intensity is therefore generally very weak. Peak C is associated with the 1s → t2 (3p) transition for four-coordinate aluminum and occurs between 1566.5 and 1567.3 eV, whereas for six-coordinate aluminum it is associated with the 1s → t1u (3p) transition and lies between 1568.2 and 1569.1 eV. Peak D is associated with multiple scattering for both coordinations and is more noticeable for four-coordinate aluminum centers. Peak E is associated with the 1s → e (3d) transitions for four-coordinate aluminum and lies between 1572.7 and 1575.6 eV, whereas for six-coordinate aluminum it is associated with the 1s → t2g (3d) transition and lies between 1572 and 1574.2 eV. This information is summarized in Table 5. From these assignments, it is inferred that gibbsite, kyanite and kaolinite contain only six-coordinate aluminum, while the α-alumina contains only four-coordinate aluminum; mullite contains both four- and six-coordinate aluminum (Andrini et al., 2016b).
In the Bent-0 XANES spectrum, the presence of peak A is noted. This peak is usually a small shoulder, since it originates from quadrupole transitions; one interpretation associates it with distortion of the tetrahedra (Romano et al., 2000). This suggests that the Al tetrahedra are distorted in Bent-0. The absence of this peak in the Al K-edge XANES of Bent-800 allows us to infer that no distorted Al tetrahedra are present there.
Following the methodology proposed by Ildefonse et al. (1998) and Kato et al. (2001), we assign peak C to tetrahedral aluminum and peak D to octahedral aluminum. The areas obtained by fitting these peaks (see intensities in Table 5) are then proportional to the number of atoms in each coordination, from which the relative proportion of tetrahedral aluminum can be calculated. For Bent-0, the calculated Al IV/Al total ratio is 31.13%, and for Bent-800 it is 66.45%. To explain this difference, we note that the experimental spectrum obtained for Bent-800 is similar to those reported for montmorillonite and thermally treated montmorillonite (Shaw et al., 2009) and for smectites (Ildefonse et al., 1998). We therefore assume that the fitted peak C in the Al K-edge XANES spectrum of Bent-800 contains two overlapping contributions: one from four-coordinate aluminum generally associated in the literature with a glassy phase or a lower-order crystallographic arrangement, and another from the four-coordinate aluminum of the smectite phase. This assumption could be confirmed by simulations and/or empirically, by obtaining the XANES spectra of the glassy phase and of pure smectite and performing linear combinations to reproduce the experimental spectrum.
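A minimal sketch of this bookkeeping step is given below: assuming, as in the text, that the fitted area of peak C tracks tetrahedral Al and that of peak D tracks octahedral Al, the tetrahedral fraction is simply the ratio of the C area to the summed C and D areas. The numerical inputs are placeholders to be taken from Table 5.

```python
def tetrahedral_al_fraction(area_peak_c, area_peak_d):
    """Al(IV)/Al(total) from fitted Al K-edge peak areas, assuming peak C is
    proportional to tetrahedral Al and peak D to octahedral Al."""
    return area_peak_c / (area_peak_c + area_peak_d)

# Placeholder areas; with the fitted areas from Table 5 this ratio should give
# about 0.31 for Bent-0 and about 0.66 for Bent-800, as quoted in the text.
print(tetrahedral_al_fraction(0.31, 0.69))
```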
As mentioned in the Si K-edge XANES analysis, tetrahedral silicon is highly stable against heat treatment and is maintained in all phases of the samples, consistent with the stability of the position and shape of peak C. Additionally, the variation in the ratio of four-coordinate aluminum atoms to total aluminum atoms could be explained by the appearance of crystalline and amorphous phases. The lack of detection of these new phases in the diffraction patterns, i.e. of phases containing four-coordinate aluminum, could be explained by the appearance of local arrangements only a few nanometers in size that do not contribute to diffraction. This proposal is consistent with previously obtained results (Drits et al., 1995), in which the authors argue that, after heating montmorillonite, the octahedral aluminum atoms adopt an irregular five-fold coordination, the surrounding oxygen atoms being in a geometry that is neither square-pyramidal nor trigonal-bipyramidal. This is also consistent with the lower height of the dehydrated smectite reflection in the diffraction pattern of Bent-800 compared with the smectite reflection observed in the diffraction pattern of Bent-0.

(Table notes: a — Garvie and Buseck, 1999; b — nomenclature used by Li et al., 1994; c — with an error of 0.3 eV. For the peak nomenclature we follow the convention of Li et al. (1995); the fitted Gaussians A, C, D and E are located at 1566.8, 1567.7, 1570.5 and 1573.1 eV, respectively, with C the most intense and A weak.)
Conclusions
The layered structure of the studied clay was thoroughly resolved by the combination of three techniques. A natural (industrial-grade) Na-montmorillonite Patagonian bentonite and its heated product (obtained after a simple 800°C treatment) were studied. The long-range and local order structural characterization was carried out by means of two X-ray-based techniques, XRD and XANES, respectively. Rietveld refinement confirmed the smectite phase and identified the accessory minerals of the commodity: quartz, gypsum and feldspars. The refinement also quantified these phases and confirmed the high smectite (montmorillonite-type) content (≈90%).
Thermal analysis (DTA-TG-DTG) was employed to determine the thermal transformations of the bentonite and to verify the dehydration and dehydroxylation of the clay mineral. The results were consistent with the XRD-Rietveld analysis and the chemical composition.
The aluminum and silicon atoms in the different materials were locally described and compared by Al and Si K-edge XANES. The coordination geometries and first-neighbor environments were established for each material and compared consistently with other studies on aluminum- and silicon-bearing materials and minerals. In addition, the thermal contraction of the smectite interlayer spacing (from 15.49 to 9.81 Å) was evaluated by XRD. Silicon tetra-coordination was confirmed in both materials, and both tetra- and hexa-coordinated aluminum were found in both materials; the expected higher Al IV/Al VI ratio was confirmed and quantified for the thermally treated bentonite.
The features of the evaluated Al/Si K-edge XANES spectra could be employed to characterize and evaluate subtler differences between similar materials and/or minerals, such as those arising from the chemical or physical modifications usually performed for technological applications, e.g. organo-clays, pillared clays, catalysts, and adsorbents.
"Materials Science"
] |
Identification of Potential Sites for Future Lake Formation and Expansion of Existing Lakes in Glaciers of Chandra Basin, Western Himalayas, India
Disappearance of mountain glaciers and formation/expansion of glacial lakes are among the most distinguishable and dynamic impacts of climate warming in the Himalayas. The present research focuses on the identification of potential sites for future lake formation in the 65 selected study glaciers of Chandra basin located in the western Himalayas. The study adopted stress-driven, physics-based GlabTop2_IITB model to obtain the ice-thickness distribution, which was then used to extract the bedrock topography. Based on the overdeepenings determined from the derived glacier bed topographies, a total of 350 potential future glacial lakes (PFGLs) were identified, and a detailed high-resolution database of these lakes was generated. The identified PFGLs were found to occupy an area of 49.56 km², corresponding to 8.4% of the current total area of the 65 study glaciers (591 km²). The total storage volume of these PFGLs was estimated as 1.08 ± 0.16 km³. In our study region, 20 PFGLs were identified as potentially hazardous in the event of glacial lake outburst flood occurrence, and their combined storage volume was found to be more than 10⁶ m³, which calls for continuous monitoring of the glaciers in this region.
INTRODUCTION
The Hindu Kush Himalaya (HKH) region covering a glacierized area of about 40,000 km 2 (Bolch et al., 2012) is a huge source of freshwater to the major river systems in Asia. During the twentieth century, the temperature trend profiles of the northwest and western Himalayan regions of HKH have already shown significant warming (Bhutiyani et al., 2007; Dash et al., 2007). The annual mean temperature over the whole HKH region increased at a rate of 0.21 ± 0.08 °C/decade with the maximum warming of 0.26 ± 0.09 °C/decade localized over the western Himalayas (Gautam et al., 2010). In the recent decades, owing to increasing climatic warming, glaciers in many parts of the Himalayan region are thinning and retreating at high rates (Bolch et al., 2012). These changes could influence the development of unstable future glacial lakes at different locations of a glacier (Quincey et al., 2007; Frey et al., 2010).
Several studies covering various parts of the globe, including the HKH region, have attempted to detect and monitor the existing glacial lakes using remote sensing techniques (e.g., Huggel et al., 2002;Kääb et al., 2005;Quincey et al., 2007;Allen et al., 2009;Gardelle et al., 2011;Jain et al., 2012;Li and Sheng, 2012;Raj et al., 2012;Xin et al., 2012;Worni et al., 2013;Zhang et al., 2015;Nie et al., 2017) as well as through field investigations (e.g., Richardson and Reynolds, 2000;Haeberli et al., 2001). These studies have provided a critical database of existing glacial lakes. However, under the present climatic warming scenario, knowledge of potential sites of future lake formation is also essential to government agencies for preparing mitigation policies. The glacial lake outburst flood (GLOF) events originating from such proglacial lakes could be hazardous to the population and infrastructure situated in the valleys below (e.g., Vuichard and Zimmermann, 1987;Xu, 1988;Das et al., 2015;Allen et al., 2016a,b).
Among the currently available studies on identification of potential sites of future glacial lakes in any region, a detailed inventory of glacial lakes was first compiled by Reynolds (2000) for Bhutan, in the context of developing hydropower stations. Using a set of topographic maps as well as panchromatic and color SPOT images of the entire Bhutan, the study explored the formation of existing lakes on debris-covered glaciers and also provided insights into the location of future lakes. In this study, a 2° slope gradient was identified as the critical threshold for formation of large supraglacial lakes on the debris-covered glaciers under negative mass balance conditions in the Himalayas. Quincey et al. (2007) identified the sites of supraglacial lake formation in the Nepali and Tibetan glaciers having a history of catastrophic outburst floods. These sites were determined based on a combined analysis of glacier surface velocity obtained from ERS-1/2 satellite data and slope gradient extracted from a SPOT-5 high-resolution digital elevation model (DEM). The study highlighted that low flow velocity regions mainly constitute the potential sites for supraglacial lake formation. Further, the study found a strong correlation between glacier surface gradients below 2° and the presence of large supraglacial lakes, which agrees with the conclusions of Reynolds (2000).
To date, only limited studies (Frey et al., 2010; Allen et al., 2016a; Linsbauer et al., 2016; Zhang et al., 2019) are available for detecting sites of potential future lake formation based on models and methods using remote sensing inputs. Among them, Frey et al. (2010) presented a three-level strategy for identifying the overdeepened parts of a glacier bed and thereby the sites of potential future lake formation. The first two levels of strategies are for preliminary and qualitative identification. The third level is for quantitative estimation of glacier bed topography using modeled ice-thickness distribution. This strategy was demonstrated for the Swiss Alps using high-quality DEM and historical maps, and the overdeepenings were identified based on ice thickness obtained from the Shallow-Ice Approximation (SIA)-based GlabTop model proposed by Linsbauer et al. (2009). Linsbauer et al. (2016) investigated the formation of potential future lakes in the HKH region by modeling the bed topographies of about 28,000 glaciers (covering 40,775 km 2 ) and producing DEMs "without glaciers." These DEMs were produced using GlabTop2 model (Frey et al., 2014), wherein the labor-intensive manual digitization of flow lines was replaced with a fully automated process. The study identified about 16,000 overdeepenings that together account for about 5% of the current glacierized area and estimated the total volume of future lakes as 120 km 3 . Further, the study underlined the difficulties in accurately estimating the shape of overdeepenings due to limitations in model parameterizations and input data. However, the future appearance and location of these overdeepenings under the condition of continued glacier retreat were stated to be robust. Though this study provided information on the potential future lakes in the HKH region, the results were not validated. Allen et al. (2016a) studied the current and future potential for GLOF across the Indian Himalayan state of Himachal Pradesh, and analyzed the flood impact on downstream areas and the societal vulnerability to GLOF disasters. The existing glacial lakes in the area were identified using Landsat 8 imagery, while the future glacial lakes expected to form due to glacier retreat were modeled using GlabTop2. However, the coarser resolution ASTER GDEM (30 m) and non-optimal model parameterization adopted in the study caused high uncertainty in glacier ice-thickness estimates and the corresponding bed topographies.
More recently, Zhang et al. (2019) conducted a study in the Poiqu River basin of the central Himalayas to identify the sites where new lakes might emerge and existing lakes could expand with projected glacial recession, following Linsbauer et al. (2016). In this study, ice thickness was modeled using the original version of the GlabTop model developed by Linsbauer et al. (2012), with the high-resolution TanDEM-X DEM as input. However, no glacier-specific model parameterization was implemented in the GlabTop model.
Among the studies discussed earlier (Frey et al., 2010; Allen et al., 2016a; Linsbauer et al., 2016), none except Zhang et al. (2019) has used a high-resolution DEM in GlabTop-based ice-thickness modeling. Further, glacier-wise shape factor parameterization has never been implemented in the modeling. Such shortcomings in these studies could give rise to higher uncertainties in the modeled glacier ice-thickness distribution and bed topographies, as reported in Ramsankaran et al. (2018). Moreover, to date, the robustness of the methods adopted for identifying overdeepening sites of future lake development and expansion of existing lakes in the Himalayas has not been validated.
In view of the aforementioned, the present research aims to provide a comprehensive, basin-scale, high-resolution database on potential future glacial lake sites in 65 selected glaciers of Chandra basin in the western Himalayas using high-resolution glacier bed topography. Here, the bedrock topography of each study glacier was extracted from our earlier study (Pandit and Ramsankaran, 2020), where the GlabTop2_IITB ice-thickness model was implemented with a high-resolution TanDEM-X DEM and glacier-specific optimal model parameterization. The western Himalayan region was specifically chosen as it has been experiencing more warming as compared with other HKH regions in recent decades (Gautam et al., 2010), and hence,
STUDY AREA
This study was performed over 65 selected glaciers of Chandra basin located in the Lahaul-Spiti valley of Himachal Pradesh, India. The location map of Chandra basin is shown in Figure 1. In this research, glaciers less than 0.5 km 2 were not considered because smaller glaciers are generally associated with large uncertainties in area estimates due to difficulty in glacier outline delineation. Chandra basin has a total area of 2381 km 2 (Pandey et al., 2017) and ranges from 2400 to 6400 m a.s.l. The glaciers in this basin feed the Chandra River, a major tributary of the Chenab river system. The basin falls in the monsoon-arid transition zone and is alternately influenced by Indian Summer Monsoon during summer and mid-latitude westerlies during winter (Wagnon et al., 2007; Bookhagen and Burbank, 2010). High wet precipitation is recorded in the summer season (July-September) whereas winter season (November-February) experiences a significant amount of solid precipitation due to the influence of the westerlies (Sharma et al., 2013). This area is characterized by relatively cold temperatures (mean annual temperature of about 9°C) and heavy and dry snowfall (mean snowfall of about 5000 mm/year) with strong wind action (Sharma and Ganju, 2000; Negi et al., 2013).
DATASET DESCRIPTION
A brief description of the datasets used in this research such as the satellite remote sensing data and in situ data is provided in the following subsections.
Glacier Boundary
For this research, the boundaries of all 65 study glaciers (>0.5 km 2 ) were obtained from the Randolph Glacier Inventory (RGI; https://www.glims.org/RGI/).
Elevation From TanDEM-X and SRTM
For the present research, high-resolution TerraSAR-X Add-on for Digital Elevation Measurements (TanDEM-X) satellite data was used for DEM generation. This raw satellite dataset was obtained under the science proposal XTI_GLAC7043 project. Specifications of each scene are given in Table 1.
Using the SAR interferometry technique available in SARScape software, a 10 × 10 m resolution DEM (referred to as TDX DEM) was generated for all 65 study glaciers (Figure 1). The DEM generation process adopted here is the same as in Pandit et al. (2014), which involves interferogram generation, adaptive filtering and coherence generation, phase unwrapping, and phase-to-height conversion. The overall root mean square error of the TDX DEM for the Chhota Shigri Glacier was 7.41 m, obtained by comparison with differential global positioning system elevation measurements. For more details about the validation exercise, readers can refer to Ramsankaran et al. (2018).
Other Datasets
For validating and analyzing the results on future glacial lake outcomes based on past data, two datasets of the year 2000 pertaining to Gepang Gath and Samudra Tapu Glaciers were used for ice-thickness modeling and bottom topography extraction. They were (1) the 30 m C-band Shuttle Radar Topography Mission (SRTM) DEM of February 2000 (obtained from USGS Earth Explorer) and (2) the digitized glacier boundary (extracted using high-resolution Google Earth and Landsat-7 data of October 15, 2000; path 147 and row 37). Likewise, to estimate long-term average equilibrium line altitude (ELA) for these two glaciers, we adopted the approach given by Chandrasekharan et al. (2018). For this purpose, appropriate images, i.e., only cloud-free images, in the ablation season to the beginning of accumulation season (mid-June to mid-October) from the Landsat series, Sentinel, and Indian Remote Sensing Satellite (IRS) series archives between 1989 and 2018 were considered.
METHODOLOGY
The sites of potential future lake formation can be identified by detecting overdeepenings in a glacier bed. Figures 2A,B illustrate the formation of glacial lakes at a location due to glacier melt. Glaciers with a considerable amount of erosive power (due to steep slopes) form large depressions in the bed as shown in Figure 2A. In the future, when the glacier disappears (as shown in Figure 2B), such overdeepened parts of the glacier bed will be exposed and filled with meltwater (Figure 2B) as well as with some morainic material, thus forming a proglacial lake (Boulton, 1967; Clague and Evans, 1994). GLOF events may be triggered in these proglacial lakes by external factors such as ice avalanches, rockfall, heavy rainfall, and cloudburst.
In this study, for deriving the bed topography profiles of the 65 selected study glaciers, we adopted the ice-thickness distribution obtained by Pandit and Ramsankaran (2020), which was estimated using the GlabTop2_IITB version of the model (referred to as the GlabTop2 model in Ramsankaran et al., 2018). The GlabTop2_IITB model is fully automated and requires only a DEM, slope, and glacier outlines as inputs. It is an independent implementation of the original GlabTop2 model with minor modifications, as reported in Pandit and Ramsankaran (2020). The model is fundamentally based on the physical concept of the SIA, wherein a glacier-specific basal shear stress is estimated from the glacier elevation range following Haeberli and Hoelzle (1995) and then related to glacier slope and thickness. Ramsankaran et al. (2018) reported that the shape factor is the only sensitive parameter in the GlabTop2_IITB model and recommended adopting glacier-specific shape factors. For estimating the near-optimal shape factor of a particular glacier, a new self-calibration approach was introduced by Ramsankaran et al. (2018), which is independent of ground observations of ice thickness. To implement this approach, the model was run multiple times with shape factor values ranging between 0.6 and 0.9 (at an interval of 0.01), using the TDX DEM as input. Based on the 31 resulting ice-thickness simulations, the spatially distributed average ice thickness was calculated. Subsequently, the shape factors for different cross-sections located at various contours were estimated using Eq. 1, adopted from Nye (1965), where f is defined as a function of the glacier width (w) and the average ice thickness (h_c) at the intersection of the central flow line and the transverse cross-section under consideration.
Then, based on the shape factor derived at each cross-section, the glacier-wide average shape factor (taken as the near-optimal shape factor) was calculated by simple averaging. Further details of the implementation and validation of the GlabTop2_IITB model, along with the near-optimal shape factor estimation method carried out for the Chhota Shigri Glacier, can be obtained from our previous study.
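Since neither Eq. 1 nor the thickness relation is reproduced above, the sketch below illustrates the ingredients described in the text under explicit assumptions: the Haeberli and Hoelzle (1995) parameterization of basal shear stress from the glacier elevation range, the shallow-ice thickness relation h = τ/(f ρ g sin α), and a width-thickness approximation of Nye's shape factor, f = w/(2 h_c + w), used here only because it matches the stated dependence on w and h_c. The actual Eq. 1 and the GlabTop2_IITB implementation details may differ.

```python
import numpy as np

RHO_ICE = 900.0   # kg m^-3, ice density typically assumed in GlabTop-type models
G = 9.81          # m s^-2

def basal_shear_stress_pa(elev_range_km):
    """Glacier-average basal shear stress from the elevation range dH (Haeberli and
    Hoelzle, 1995): tau [bar] = 0.005 + 1.598*dH - 0.435*dH**2, capped at 1.5 bar."""
    dh = np.asarray(elev_range_km, dtype=float)
    tau_bar = np.where(dh <= 1.6, 0.005 + 1.598 * dh - 0.435 * dh**2, 1.5)
    return tau_bar * 1e5                               # bar -> Pa

def sia_thickness(slope_deg, tau_pa, f):
    """Shallow-ice-approximation thickness h = tau / (f * rho * g * sin(alpha))."""
    alpha = np.radians(np.maximum(slope_deg, 5.0))     # slope floor is an assumption
    return tau_pa / (f * RHO_ICE * G * np.sin(alpha))

def shape_factor(width_m, mean_thickness_m):
    """Cross-sectional shape factor from width w and mean thickness h_c;
    the form f = w / (2*h_c + w) is an assumption standing in for Eq. 1."""
    return width_m / (2.0 * mean_thickness_m + width_m)

# Hypothetical example: 1.2 km elevation range, 12 deg slope, 800 m wide cross-section
tau = basal_shear_stress_pa(1.2)
h = sia_thickness(12.0, tau, f=0.75)
print(h, shape_factor(800.0, h))
```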
Using this optimal-model parameterization approach, the GlabTop2_IITB model was implemented to estimate the ice-thickness distribution of the 65 study glaciers in Pandit and Ramsankaran (2020). Based on the estimated ice thickness and available DEMs, the glacier bed topographies of all study glaciers were derived, and quantitative information such as area, volume, maximum depth, and mean depth of the potential future lakes was determined. The depth distribution of the identified potential future glacial lakes (PFGLs) was extracted from the respective glacier bed topography assuming the overdeepenings to be filled with water.
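The extraction of overdeepenings and their water-filled depths from a bed-topography grid can be sketched as a depression-filling problem, as below. This uses morphological reconstruction from scikit-image and is only an illustrative stand-in for the GIS workflow actually used; the grid size and thresholds are assumptions.

```python
import numpy as np
from skimage.morphology import reconstruction

def overdeepening_depth(bed_dem):
    """Depth (m) of closed depressions in a glacier-bed DEM. Depressions are filled
    by morphological reconstruction (erosion) seeded from the grid border; the
    difference between the filled and raw bed is the potential lake depth, assuming
    each overdeepening fills with water up to its spill point."""
    seed = np.copy(bed_dem)
    seed[1:-1, 1:-1] = bed_dem.max()          # flood inward from the borders
    filled = reconstruction(seed, bed_dem, method='erosion')
    return filled - bed_dem                    # > 0 inside overdeepenings

# Hypothetical use with a 10 m grid (matching the TDX DEM resolution):
# depth = overdeepening_depth(bed)
# cell_area = 10.0 * 10.0                      # m^2 per cell
# lake_mask = depth > 0
# area_km2 = lake_mask.sum() * cell_area / 1e6
# volume_km3 = (depth[lake_mask] * cell_area).sum() / 1e9
```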
UNCERTAINTY ANALYSIS
In this study, the uncertainty in future lake volume was estimated using Eq. 2, where V is the volume of a glacier, ΔV is the uncertainty in the volume, A is the glacier area, ΔA is the uncertainty in the glacier area, h_f is the thickness of the glacier at a specific f, and Δh_f is the uncertainty in the estimated ice thickness/depth of the overdeepenings.
Uncertainty in the ice thickness is retrieved from Ramsankaran et al. (2018). Because the future lake depth was extracted by differencing the ice thickness and surface elevation, the same uncertainty of ±14% was assumed for the depth of the overdeepenings, neglecting the thickness of deposited material. When the lake area was extracted, a few pixels were often found disconnected from the main lake; these pixels were treated as uncertainty in lake identification, corresponding to almost 5% of the total area of the lakes. It is important to note that other processes, such as the deposition of material in the overdeepenings, can also induce uncertainty in the volume estimates of potential future lakes. However, as quantifying the amount of material deposited at the bottom of future lakes is difficult, this uncertainty was not considered in the analysis.
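A sketch of how such uncertainties could be propagated into a lake volume is given below. The quadrature combination of a fully correlated ±14% depth error and a 5% area (pixel-count) error is an assumption, since Eq. 2 itself is not reproduced in the text.

```python
import numpy as np

REL_DEPTH_UNC = 0.14   # +/-14% on modeled ice thickness / overdeepening depth
REL_AREA_UNC = 0.05    # ~5% of lake area attributed to disconnected pixels

def lake_volume_uncertainty(depths, cell_area):
    """Propagate depth and area uncertainties into the lake volume V = sum(depth)*cell_area.
    The quadrature sum below is an illustrative assumption, not the authors' exact Eq. 2."""
    volume = np.sum(depths) * cell_area
    dv_depth = REL_DEPTH_UNC * volume          # depth errors treated as fully correlated
    dv_area = REL_AREA_UNC * volume            # area (pixel-count) error
    return volume, np.sqrt(dv_depth**2 + dv_area**2)

# Hypothetical call on the depth field of one lake, on a 10 m grid:
# v, dv = lake_volume_uncertainty(depth[lake_mask], cell_area=100.0)
```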
RESULTS AND DISCUSSION
This section is divided into four subsections. The first subsection discusses the sites of PFGLs identified in the Gepang Gath Glacier. The second subsection deals with the qualitative validation exercise undertaken to determine whether the formation/expansion of lakes in the Gepang Gath Glacier at present could have been anticipated in the past. The third subsection reports the sites of potential future lakes in the Samudra Tapu Glacier. The final subsection provides a comprehensive database of PFGLs identified in all 65 study glaciers of Chandra basin. To facilitate easier identification, each glacial lake was uniquely labeled in the format "xx_lake_yy, " where xx and yy represent glacier ID and lake number, respectively. The location of all study glaciers is provided in Supplementary Table A1.
Potential Future Lake Sites in the Gepang Gath Glacier

Figure 3 shows the derived bed topography profiles along the tributaries and the main trunk (marked in Figure 3D) of the Gepang Gath Glacier. These results clearly show the overdeepenings on the glacier bed surface. In Figure 3C, a relatively large overdeepening is observed in the bed topography of the glacier up to 4000 m from the snout, between the surface elevations of 4100 and 4300 m a.s.l. This overdeepening may combine with the proglacial lake present adjacent to the glacier snout and give rise to a larger proglacial lake in the future if the current glacier retreat trend persists. Note that these overdeepenings are observed at different elevations in the left/right tributaries and the main trunk of the glacier. Distinct overdeepenings are observed at surface elevations of about 4400 and 4600 m a.s.l. for the right tributary (Figure 3A). On the other hand, for the left tributary, an overdeepening is observed at around 4400 m a.s.l. (Figure 3B). All the observed overdeepenings are below the long-term average ELA of the Gepang Gath Glacier, which lies at a surface elevation of around 4700 m a.s.l. It shall be noted that the glacier region above the ELA is smaller in area and steeper than other regions, thus preventing the formation of overdeepenings on the glacier bed. Figure 4 illustrates the spatial distribution of the sites identified as potential future glacier lakes (PFGLs). In total, seven PFGL sites situated at different elevations ranging from 4100 to 4700 m a.s.l. were identified in the Gepang Gath Glacier (Table 2).
The areas of these PFGLs range from 0.01 to 1.46 km 2 , and their storage volumes vary between 0.11 and 51 × 10 6 m 3 . Of these seven PFGLs, only GG_lake_1 would form at the main trunk, while the others would form at the glacier tributaries. In terms of both area and volume, GG_lake_1 would be the largest and GG_lake_5 the smallest. The identified PFGLs in the Gepang Gath Glacier would occupy a total area of 2.04 km 2 , which constitutes 18.61% of the existing glacier area. The combined lake storage volume was estimated as 60.96 ± 8.0 × 10 6 m 3 , which is 6% of the existing glacier volume (1.08 km 3 ) as obtained from the GlabTop2_IITB model simulations.
Currently, the Gepang Gath Glacier has one proglacial lake of around 0.8 km 2 lying between the surface elevation of 4050 and 4100 m a.s.l. (Figure 4). Between 1971 and 2004, this proglacial lake had expanded from 0.1 to 0.8 km 2 (Patel et al., 2017). Over the past 10 years, the glacier behind the proglacial lake has retreated by a distance of 300 m, and the lake may further expand in the future (Worni et al., 2013). As per our estimates, the lake would continue to grow in the face of continuous glacier retreat until it eventually attains the combined size of the current glacial lake and the PFGL GG_lake_1 to create a bigger proglacial lake (referred as Combined PFGL GG_lake_1) with an area of 2.06 km 2 . Similar conclusions regarding the expansion of existing proglacial lake of the Gepang Gath Glacier were also drawn by Allen et al. (2016a) in which the future overdeepenings were identified using the GlabTop2 model (Frey et al., 2014).
It is important to highlight that the present moraine dammed proglacial lake at the Gepang Gath Glacier (Randhawa et al., 2005) could expand in the future due to the calving of glacier ice (Thakuri et al., 2016). Such continuous calving activities at the massive glacier tongue could possibly trigger the dam breach process (Benn and Evans, 2010;Iturrizaga, 2011;Worni et al., 2013). Therefore, if collapsed, even the existing Gepang Gath proglacial lake has a high damage potential (Patel et al., 2017). Further, when the existing proglacial lake and PFGL GG_lake_1 combine, the resultant larger proglacial lake would present an even higher damage potential in terms of GLOF.
Validation Exercise on Anticipation of Current Glacial Lake Extent Using Historical Data
In view of the difficulty in validating predictions of future conditions, an attempt was made to determine whether the formation/expansion of the glacier lake in its present state could have been anticipated in the past using historical data. A similar exercise was conducted by Frey et al. (2010) in the Swiss Alps, where the three-level approach was applied to historical data to allow a comparison with the present situation. In this study, we considered the Gepang Gath Glacier, where a proglacial lake formed during the 1970s (Prakash and Nagarajan, 2018). The bed topography of the Gepang Gath Glacier was derived from the ice-thickness distribution obtained from GlabTop2_IITB model simulations based on the glacier boundary of the year 2000 and the archived historical DEM (i.e., SRTM 30 m) captured during February 2000. The extracted glacier bed topography was then used to identify the overdeepenings. Figures 5A-C illustrate the change in the extent of the Gepang Gath Glacier between the years 2000 and 2017. Because the exercise was performed to identify the extent of the present glacier lake using historical data, only the PFGL predicted near the snout location is shown in Figure 5A. Based on the analysis performed using the data of the year 2000, the modeled PFGL (blue region in Figure 5A) would have been the extension of the proglacial lake existing in the year 2000 (cross-hatched region in Figure 5A).
In 2017, a segment of the PFGL (the region between the red and black glacier boundaries in Figure 5A) identified based on the data for the year 2000 had emerged, and it will keep expanding due to glacier retreat and the calving process, as mentioned by Thakuri et al. (2016). It is clearly visible that within a span of 17 years, a large glaciated area has been vacated and turned into a proglacial lake, as predicted in the present study (Figures 5B,C). Overall, this exercise shows that the overdeepenings obtained from the glacier bed topography aid in predicting PFGL sites, and thus the approach can be reliably applied to the rest of the study glaciers. However, because of the unavailability of in situ lake bathymetric data, it is not possible to directly compare the modeled results with the lake bottom topography for validation. Therefore, the discussion mainly focuses on the extent to which the existing proglacial lake would expand in the face of continuous glacier retreat.
Potential Future Lake Sites in the Samudra Tapu Glacier

Figures 6A,B show the overdeepenings at different elevations in both the right and left trunks (Figure 6C) of the Samudra Tapu Glacier. For this glacier, most overdeepenings are observed below the long-term average ELA of 5200 m a.s.l., although a few are also observed close to or above the ELA. Based on the spatial distribution of all the observed overdeepenings, a total of 26 PFGLs were identified in this glacier (Table 3). The areas of these PFGLs range from 0.03 to 1.21 km 2 and their storage volumes vary between 0.18 and 40 × 10 6 m 3 . Of the 26 PFGLs, 12 would appear at the main trunk while the remaining 14 would form at the tributaries (Figure 7A). In terms of lake area and volume, ST_lake_1 will be the largest while ST_lake_10 will be the smallest. The total area of PFGLs in the Samudra Tapu Glacier was estimated as 7.91 km 2 , which represents 8% of the existing glacier area. The total storage volume of the PFGLs was calculated as 225 ± 30 × 10 6 m 3 , which is equivalent to 2.4% of the current glacier volume.
The proglacial lake currently existing at the snout is shown in Figure 7B. Between the years 2000 and 2017, no significant variation in the proglacial lake area was detected, unlike the case of Gepang Gath Glacier lake. Between 1971 and 2014, the area of this lake had increased from 0.2 to 1.2 km 2 (Patel et al., 2017), and its area in 2017 was calculated as 1.26 km 2 . In the future, this proglacial lake ( Figure 7B) would expand further in the face of continuous glacier retreat to attain an area of 2.47 km 2 (referred to as Combined PFGL ST_lake_1). As per the results of the present study, this combined lake is expected between the surface elevation ranges of 4160 and 4350 m a.s.l., and would have a large discharge potential in case of a lake outburst.
Potential Future Lake Sites in all the Study Glaciers of the Chandra Basin
As in the cases of the Gepang Gath and Samudra Tapu Glaciers, the extracted overdeepenings were analyzed to identify PFGL sites for the remaining 63 study glaciers in Chandra basin. Figure 8 shows the locations and depth variations of the identified PFGLs for all 65 study glaciers. Based on the modeled PFGL distribution, glacier-wise future lake information such as total lake area, total lake volume, and the number of future lakes is compiled in Figure 9 (see the Supplementary Table A1 for the location of glaciers). From Figure 9, it was found that 360 PFGLs would emerge in this region. These PFGLs were estimated to occupy an area of 49.56 km 2 , which corresponds to 8.4% of the total current area of all 65 study glaciers (591 km 2 ). The total storage volume of these PFGLs was assessed as 1.08 ± 0.15 km 3 .
Out of the 360 PFGLs, 203 lakes (56%) were found to have an area less than 0.1 km 2 ("small size"), 154 PFGLs an area between 0.1 and 1 km 2 ("medium size"), and only three an area of more than 1 km 2 ("large size"). The total volume of the PFGLs falling in the small, medium, and large size categories was estimated as 0.10, 0.86, and 0.12 km 3 , respectively, indicating that the medium-size lakes would together hold almost 80% of the total storage volume of all PFGLs. The analysis also showed that 65% of the lakes would form in the main trunk of the glaciers while the remaining would form at the glacier tributaries.
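The size classification used above is straightforward to reproduce; the following sketch bins a hypothetical PFGL inventory into the small/medium/large classes and accumulates counts and volumes per class (the inventory rows are placeholders taken from values quoted earlier in the text).

```python
def classify_pfgl(area_km2):
    """Size classes used in the text: small < 0.1 km2, medium 0.1-1 km2, large > 1 km2."""
    if area_km2 < 0.1:
        return "small"
    return "medium" if area_km2 <= 1.0 else "large"

# Placeholder inventory rows: (lake_id, area in km2, volume in km3)
inventory = [("GG_lake_1", 1.46, 0.051), ("GG_lake_5", 0.01, 0.00011)]
totals = {}
for lake_id, area, volume in inventory:
    cls = classify_pfgl(area)
    count, vol = totals.get(cls, (0, 0.0))
    totals[cls] = (count + 1, vol + volume)
print(totals)
```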
Bada Shigri, the largest of all 65 study glaciers, was observed to contain 85 PFGLs, the most for any glacier in this region. Samudra Tapu ranked second with 26 PFGLs. The total storage volume of the Bada Shigri and Samudra Tapu glacial lakes was calculated as 239 and 225 × 10 6 m 3 , respectively. Apart from these two glaciers, only four would have PFGLs with a total storage volume between 50 and 100 × 10 6 m 3 , while 46 would have PFGLs with a total storage volume below 50 × 10 6 m 3 . The remaining 13 glaciers did not show any sign of overdeepenings and thus indicated no future lake development; these glaciers are generally smaller, with areas ranging from 1 to 4 km 2 . Ashraf et al. (2012), who studied the GLOF hazards in the Hindukush, Karakoram, and Himalayan ranges of Pakistan, suggested that valley lakes larger than 0.1 km 2 are potentially dangerous. Accordingly, in our study region, 20 PFGLs were identified as potentially hazardous in the event of GLOF occurrence, and their combined storage volume was found to be more than 0.001 km 3 . The formation of these future lakes would also accelerate the ice calving process. There are many examples of serious GLOF disasters caused by smaller lakes too; for instance, the Kedarnath disaster in the year 2013 originated from a lake with a volume of less than 0.5 × 10 6 m 3 (Das et al., 2015; Allen et al., 2016b). A maximum depth of around 156 m was obtained for the future lake identified in the Samudra Tapu Glacier. Almost 70% of the identified PFGLs would have a maximum depth lower than 50 m, whereas the rest would have a maximum depth ranging from 50 to 156 m. The mean depths of the PFGLs in the 65 study glaciers would vary between 1.5 and 51 m.
To investigate the elevation-wise distribution, the identified PFGLs were grouped into different elevation zones having an interval of 500 m, based on the current surface elevation levels ( Table 4). Hypsometric analysis revealed that the elevation zone 5000-5500 m a.s.l. would consist of the highest number of PFGLs (208), which would totally occupy an area of 26.81 km 2 (i.e., 54.09% of combined area of all PFGLs). Also, the total storage volume of PFGLs in this elevation zone was estimated as 0.57 km 3 (i.e., 52.29% of the combined volume of all PFGLs). On the other hand, only one PFGL would be formed at the elevation zone 3500-4000 m a.s.l. Further, among the 65 study glaciers, 35 would have future proglacial lakes at their current snout locations, mostly in the elevation zones 4500-5000 and 5000-5500 m a.s.l. The ice-thickness estimates used for extracting glacier bed topographies in this study are associated with relatively low uncertainty compared with previous studies such as Frey et al. (2014) and Linsbauer et al. (2016). This lower uncertainty was achieved through optimal parameterization of Glabtop2_IITB model and use of high-resolution TDX DEM, as reported in Ramsankaran et al. (2018) and Pandit and Ramsankaran (2020). Hence, the spatial and dimensional characteristics/attributes of the PFGLs reported could be considered reliable.
FIGURE 9 | Glacier-wise number, total area, and volume of PFGLs in all 65 study glaciers. Information associated with each bar is arranged as [number of lakes, total lake area (km 2 ), total lake volume (km 3 )].

Table 5 provides the area and storage volume of the largest PFGL in each elevation zone. Such elevation-based knowledge of lakes would greatly benefit the policymakers and disaster management authorities in preparing mitigation policies well in advance. In the context of GLOF hazards associated with future lake development, the authorities can devise strategies to continuously monitor the locations of PFGLs, especially in Gepang Gath, Bada Shigri, G41, and G6 Glaciers (Table 5).
TABLE 5 | Elevation zone-wise largest potential future glacial lake information based on their area and storage volume.
CONCLUSION
In the present research, a detailed high-resolution database of 360 potential future glacial lakes (with a total area and volume of 49.56 km 2 and 1.08 km 3 , respectively) was developed for 65 glaciers in Chandra basin, located in the western Himalayas. The locations of the lakes were determined based on the overdeepenings identified in the glacier bed topographies derived using an optimally parameterized GlabTop2_IITB model. With the use of high-resolution bed topographic information, the study captured PFGLs with an area greater than 0.005 km 2 . The developed database includes the lakes' geographical distribution across elevation zones, mean depth, maximum depth, location, area, and volume. This information should greatly benefit policymakers, disaster management officials, and local administrative officials in preparing mitigation policies against future GLOF events.
DATA AVAILABILITY STATEMENT
The modelled ice thickness information for all the study glaciers can be obtained from http://doi.org/10.5281/zenodo.3694001. The identified potential sites of future glacial lakes and their characteristics can be retrieved from http://doi.org/10.5281/ zenodo.3727617 as well as from the Supplementary Material.
AUTHOR CONTRIBUTIONS
AP and RR designed the study. AP conducted the analysis and wrote the manuscript. RR contributed to discussions and revisions of the manuscript, providing important feedback, comments, and suggestions. Both authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENTS
We acknowledge the support provided by the Indian Institute of Technology Bombay, Center of Excellence in Climate Studies (IITB-CECS) project of the Department of Science and Technology. We are thankful to the German Aerospace Agency (DLR) for providing TanDEM-X CoSSC products under the TanDEM-X Science proposal XTI_GLAC7043. We are also thankful to the editor for providing valuable suggestions during the interactive review stage of the article and to the two referees for their constructive comments, which helped to improve the article. We are also grateful to the publishers of Frontiers journals for providing a waiver of the publication fee.
"Environmental Science",
"Geology",
"Geography"
] |
pICln Binds to a Mammalian Homolog of a Yeast Protein Involved in Regulation of Cell Morphology*
Since its cloning and tentative identification as a chloride channel, the function of the pICln protein has been debated. Although there is no consensus regarding the specific function of pICln, it was suggested to play a role, directly or indirectly, in the function of a swelling-induced chloride conductance. Previously, the protein was shown to exist in several discrete protein complexes. To determine the function of the protein, we have begun the systematic identification of all proteins to which it binds. Here we show that four proteins firmly bind to pICln and identify the 72-kDa pICln-binding protein by affinity purification and peptide microsequencing. The interaction between this protein and pICln was verified several ways, including the extraction of several pICln clones from a cDNA library using the 72-kDa protein as a bait in a yeast two-hybrid screen. The protein is homologous to the yeast Skb1 protein. Skb1 interacts with Shk1, a homolog of the p21Cdc42/Rac-activated protein kinases (PAKs). The known involvement of PAKs in cytoskeletal rearrangement suggests that pICln may be linked to a system regulating cell morphology.
Expression of the pICln cDNA in Xenopus laevis oocytes was correlated with the appearance of a nucleotide-sensitive chloride current (I = current, Cl = chloride, n = nucleotide-sensitive) (1-4). Although pICln was tentatively identified as an integral component of the chloride channel (1), several observations were inconsistent with the channel hypothesis for pICln. First, pICln lacks predicted hydrophobic membrane-spanning domains and structural homology to known channel proteins (1). Second, in mammalian cells and Xenopus oocytes, pICln was abundant and exhibited a predominantly cytoplasmic and nuclear localization, whereas a small fraction (<5%) was associated with the cytoskeleton (5). No pICln was detected in the plasma membrane.
The chloride conductance associated with expression of pICln was similar to an endogenous Xenopus oocyte chloride current elicited by hypotonic challenge (6). An anti-pICln antibody specifically ablated the swelling-induced chloride current in Xenopus oocytes (5), a finding supported by antisense experiments in mammalian cells (7). For the reasons stated above, we proposed that pICln was a cytosolic regulator of a swelling-induced chloride channel rather than a channel itself (5). In contrast, Paulmichl and co-workers (8) maintain that pICln is the swelling-induced chloride channel itself. Recently, data were presented suggesting that the chloride channel evoked by pICln expression has properties different from the swelling-induced chloride current, including a higher permeability to NO3−, stronger outward rectification, and voltage-dependent nucleotide block (3). The molecular identification of the swelling-induced chloride channel has proven difficult, and several proteins including P-glycoprotein, pICln, ClC-2, and ClC-3 have been proposed to constitute this channel (9, 10). Although it seems unlikely that either pICln or P-glycoprotein are themselves chloride channels, both ClC-2 and ClC-3 are well established members of a family of chloride channel proteins. In contrast, pICln exhibits no significant homology to any known mammalian protein and contains no domains that suggest a specific function.
Although work from several laboratories supports a link between pICln expression and activation of a chloride current, the nature of this link is not clear. Currently there are no data to suggest that pICln directly regulates a chloride channel. Indeed, pICln may act far upstream from any plasma membrane-associated event and participate in such diverse functions as transcriptional or translational regulation, cytoskeletal rearrangement, or any one of several signal transduction cascades. pICln was shown previously to exist in several discrete complexes with other cytosolic proteins (5). We reasoned that the identification of proteins interacting with pICln might reveal functional connections to signaling pathways or known cellular mechanisms. Here we report the identification of one such pICln-interacting protein, a 72-kDa protein that appears to be the human homolog of Skb1. Skb1 is a yeast protein that interacts with Shk1, a homolog of the p21Cdc42/Rac-activated protein kinases (PAKs). Although the functions of PAKs are only beginning to be understood, they appear to affect cell morphology through interactions with the cytoskeleton (11).
EXPERIMENTAL PROCEDURES
IBP72 Affinity Purification-Rat pICln coding sequence was subcloned into the pGEX-2T plasmid (Amersham Pharmacia Biotech). The GST-pICln fusion protein was expressed in BL-21 bacteria and purified over glutathione-Sepharose according to the manufacturer's protocols. GST-pICln was immobilized using ActiGel ALD (Sterogene) at 2 mg of protein/ml of gel. Bovine ventricular tissue was minced and homogenized by Polytron (setting 7) for 3 × 30 s in MB buffer (10 mM Na-HEPES (pH 7.5), 20 mM KCl, 1 mM EGTA, 3 mM MgCl2, 1 mM dithiothreitol, 0.5 mM phenylmethylsulfonyl fluoride, and 2 mg/ml each of aprotinin, leupeptin, and pepstatin). Following centrifugation at 100,000 × g, the supernatant (2.6 g of protein) was loaded onto a 2 × 25 cm DEAE-Sephacel (Amersham Pharmacia Biotech) column and washed with MB containing 100 mM NaCl. pICln-containing complexes (as detected by Western blotting) were eluted with MB + 400 mM NaCl. The eluate was supplemented with Triton X-100 (1% final concentration) and rotated overnight with 100 µl of GST-pICln resin. After washing beads with MB, 400 mM NaCl, 1% Triton X-100, bound proteins were solubilized in SDS loading buffer, separated by SDS-polyacrylamide gel electrophoresis, transferred to polyvinylidene fluoride film, and visualized with Coomassie staining. The 72-kDa protein band was excised, digested with trypsin and cyanogen bromide, and microsequenced (Mayo Foundation).
Constructs, Northern Blot, Yeast Two-hybrid Analysis, and Cell Transfection-IBP72 coding sequence was subcloned into pGEX-2T, and the GST-IBP72 fusion protein was produced and purified as described above. The full-length IBP72 clone was subcloned into pcDNA3.1(−) (Invitrogen) and translated in vitro using the TNT system (Promega). pICln deletions were generated by polymerase chain reaction (PCR) subcloning of truncated versions of human pICln coding sequence into pcDNA3.1(+) or (−); all constructs were "tagged" with an amino-terminal FLAG epitope (DYKDDDDK). 10 µg of each construct was used for calcium phosphate transfection of 50-70% confluent HEK293 cells plated in 10-cm culture dishes. 48 h after transfection, proteins were labeled in vivo for 5-12 h using 50 µCi/ml [35S]methionine (Amersham). IBP72 coding sequence was subcloned into the pBTM-116KN vector (18) using restriction sites introduced by PCR. The resultant construct was verified by DNA sequencing and used as the bait for two-hybrid screening of a human heart Matchmaker library (CLONTECH) in the yeast strain L40 (18). Full-length, FLAG-tagged pICln coding sequence was cloned into pGAD424 (CLONTECH). Yeast his3 expression was assayed by growth on dropout plates lacking histidine, tryptophan, and leucine and supplemented with 5 mM 3-aminotriazole. β-Galactosidase activity was measured using o-nitrophenyl-β-D-galactopyranoside (Sigma) as substrate. The multi-tissue human Northern blot (CLONTECH) was probed according to the manufacturer's specifications with a [32P]dCTP random-labeled fragment (Stratagene) consisting of the entire human IBP72 coding sequence.
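The two-hybrid read-out described above reduces to simple arithmetic on spectrophotometer readings. As a hedged illustration only, the sketch below implements the widely used Miller-unit convention for an ONPG assay and the average ± S.D. of replicates reported in Table I; the unit definition actually used in the study follows ref. (19) and may differ, and all numeric readings shown are hypothetical placeholders.

```python
# Minimal sketch, not the authors' protocol: Miller-style beta-galactosidase
# units from ONPG assay readings, plus the average +/- S.D. of replicates.
# The formula is the common Miller convention; ref. (19) may define units
# differently. All readings below are hypothetical placeholders.
def miller_units(od420, od550, od600, minutes, culture_ml):
    """1000 * (OD420 - 1.75*OD550) / (t[min] * V[ml] * OD600)."""
    return 1000.0 * (od420 - 1.75 * od550) / (minutes * culture_ml * od600)

def mean_sd(values):
    """Average and sample standard deviation, as reported in Table I."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return mean, sd

# Hypothetical triplicate readings for one bait/prey combination.
replicates = [miller_units(0.62, 0.02, 0.55, 30, 0.5),
              miller_units(0.58, 0.02, 0.53, 30, 0.5),
              miller_units(0.66, 0.03, 0.57, 30, 0.5)]
print("activity = %.1f +/- %.1f units" % mean_sd(replicates))
```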
Cell Lysis, Immunoprecipitation, and Immunoblotting-Total cell lysate for immunoprecipitation was obtained as the 100,000 × g supernatant after cell lysis in buffered solution containing 1% Triton X-100 and 350 mM NaCl. Soluble proteins were isolated by Dounce homogenization followed by pelleting of microsomal fractions at 40,000 rpm in an SW-55 rotor for 30 min at 4 °C. The supernatant was supplemented with Triton X-100 to a final concentration of 1% and with NaCl to 350 mM. For immunoprecipitation, samples were precleared with 40 µl of Protein A/G-Sepharose (Amersham Pharmacia Biotech) for 1.5 h at 4 °C. pICln was immunoprecipitated using the aFP antibody as described (5). GST-pICln (1 µg) was used for precipitation of in vitro translated IBP72. Precipitation of the deletion constructs and interacting proteins was achieved using Protein G-Sepharose and anti-FLAG M2 monoclonal antibody (Kodak). The rabbit anti-IBP72 antibody was generated against the GST-IBP72 fusion protein and affinity-purified.
RESULTS
pICln was immunoprecipitated from [35S]methionine-labeled Madin-Darby canine kidney (MDCK) total cell lysates using a polyclonal antibody generated to a GST-pICln fusion protein. Several proteins consistently co-immunoprecipitated with pICln (pICln-binding proteins (IBPs)) with electrophoretic mobilities corresponding to molecular masses of 72, 43, 29, and 17 kDa (Fig. 1, lane 1). Since the same set of associated proteins was co-immunoprecipitated from the water-soluble cell fraction, we concluded that the IBPs are not membrane-associated proteins. Their association with pICln was judged to be specific because the same set of proteins was co-immunoprecipitated with a different anti-pICln antibody (data not shown).
IBP72 was purified by affinity to pICln. Cytosolic extracts from bovine heart were used as the source of IBP72 since initial experiments indicated that it was relatively abundant in this tissue. IBP72 was enriched significantly in eluates from the GST-pICln resin (Fig. 1, lane 2). Immobilized GST did not bind this protein, indicating that the 72-kDa protein interacted with GST-pICln specifically (Fig. 1, lane 3). The purified 72-kDa protein was digested with trypsin and cyanogen bromide, and five different peptides were sequenced. The peptide sequences obtained from the 72-kDa protein were used to screen the expressed sequence tag (EST) database. Several overlapping clones were identified that predicted a single open reading frame (ORF) of 1911 base pairs and whose translation contained sequences identical with the 72-kDa protein-derived peptides. Subsequently, two EST clones containing an identical 2.4-kb insert were identified (GenBank accession numbers R13970 and AA099674) that spanned the ORF. Using 5′-rapid amplification of cDNA ends with a human fetal brain library, an additional 60 bases of 5′-untranslated sequence were identified.
The ORF predicts a protein of 637 amino acids with a molecular mass of 72.6 kDa (Fig. 2A). Consistent with this prediction, in vitro translation of the cDNA yielded a protein with an apparent molecular mass of 72 kDa (Fig. 2B). Only two residues are not conserved between the bovine and human proteins within the sequence specified by the five fragments. A 2.4-kb transcript for IBP72 was identified in a wide range of human tissues including skeletal muscle, brain, heart, placenta, kidney, pancreas, lung, and liver (Fig. 2C). Although the cloned 72-kDa protein has no significant homology with other cloned mammalian proteins and contains no consensus structural motifs, it does exhibit moderate homology to putative proteins encoded in the Caenorhabditis elegans and Saccharomyces cerevisiae genomes, as well as significant homology to the skb1 gene product (12) from Schizosaccharomyces pombe (52% homology; Fig. 2A). Recently, a human cDNA identical with our IBP72 was cloned by homology to Skb1 and submitted to GenBank (accession number AF015913).
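The correspondence between the 1911-bp ORF, the 637-residue protein and the ~72.6-kDa predicted mass is easy to sanity-check computationally. The sketch below is illustrative only: the ORF-finding rule and the average residue mass are assumed rough values, not the authors' analysis, and no real sequence is included.

```python
# Minimal sketch (not the authors' pipeline): locate the longest forward-strand
# ORF in a cDNA and estimate the encoded protein's mass. An ORF of 1911 nt
# encodes 1911/3 = 637 residues; with an assumed ~114 Da average residue mass
# this gives ~72.6 kDa, matching the reported prediction.
AVG_RESIDUE_MASS_DA = 114.0   # rough average; the exact mass needs the real sequence

def longest_orf_nt(cdna: str) -> int:
    """Length (nt, stop codon excluded) of the longest ATG..stop ORF in three frames."""
    stops = {"TAA", "TAG", "TGA"}
    best, seq = 0, cdna.upper()
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in stops and start is not None:
                best = max(best, i - start)
                start = None
    return best

def approx_mass_kda(orf_nt: int) -> float:
    """Rough protein mass in kDa from ORF length in nucleotides."""
    return orf_nt / 3 * AVG_RESIDUE_MASS_DA / 1000.0

print(longest_orf_nt("CCATGAAATTTTAG"))   # 9 nt toy example (ATG AAA TTT)
print(approx_mass_kda(1911))              # ~72.6 kDa under the assumed average mass
```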
The cloned 72-kDa protein appears to be IBP72 as indicated by several approaches. First, the in vitro translated protein exhibited specific binding to the GST-pICln fusion protein but not to GST alone (Fig. 3, left panel). Second, an affinity-purified polyclonal antibody raised against the recombinant 72-kDa protein recognized IBP72 co-immunoprecipitated with pICln (Fig. 3, right panel). Third, a LexA-IBP72 fusion protein specifically interacts with a GAL4 activation domain-ICln fusion protein in the yeast two-hybrid system (Table I). Moreover, when a human heart cDNA library was screened in the yeast two-hybrid system using the LexA-IBP72 fusion protein as bait, six independent ICln clones were obtained. These results argue strongly that our cloned protein is IBP72.
To identify the domain(s) of pICln critical for interaction with IBP72, we generated several epitope-tagged (FLAG) human pICln deletion constructs and examined their ability to bind native IBP72. The full-length FLAG-pICln protein and endogenous pICln interacted with the same set of proteins in human embryonic kidney (HEK293) cells (data not shown). All deletion constructs were expressed at levels equivalent to or higher than the full-length FLAG-pICln protein, as assessed by immunocytochemical analysis and immunoprecipitation of [35S]methionine-labeled proteins (data not shown). Based on this approach, we conclude that the extreme carboxyl terminus of human pICln, specifically the last 37 amino acids, is critical for interaction with IBP72 (Fig. 4). Consistent with this result, one of the pICln clones identified by yeast two-hybrid selection has the amino-terminal 103 residues deleted but retains full ability to interact with IBP72 (Table I).
TABLE I
Interaction of IBP72 with pICln in the yeast two-hybrid system. Yeast harboring plasmids expressing the indicated proteins were assayed for growth on selective plates containing 5 mM 3-aminotriazole (AT) and for β-galactosidase activity. β-Galactosidase activities are given in β-galactosidase units (19) and represent the average ± S.D. of three independent measurements. GAD is the Gal4 activation domain (amino acids 768-881). GAD-pICln-DN103 is a pICln clone that lacks the amino-terminal 103 amino acids.
DISCUSSION
In an effort to identify the functional role of pICln, we are characterizing IBPs. Four major IBPs consistently co-purify with pICln in several tissues. We cloned the human cDNA for IBP72 based on microsequence data obtained from affinity-purified bovine IBP72. The interaction between the cloned human IBP72 and pICln was confirmed by several lines of evidence, including the extraction of pICln from a cDNA library using the full-length coding sequence for IBP72 as the bait in a yeast two-hybrid screen. IBP72 is ubiquitously expressed and has no identified mammalian homologs or recognizable structural motifs that would suggest a specific function. Currently we are cloning the other IBPs and will use similar approaches to verify the specificity of their interaction with pICln.
Although IBP72 has no known human homologs, sequence similarity suggests that IBP72 represents a human homolog of the Skb1 protein from S. pombe (12). Skb1 was identified by a yeast two-hybrid screen using the S. pombe protein kinase Shk1 as bait. The Shk1 kinase is linked to Ras- and Cdc42-dependent signaling cascades regulating cell viability, morphology, and mitogen-activated protein kinase-mediated pheromone responses (13). S. pombe lacking Skb1 are less elongated than wild-type yeast, emphasizing the role of this protein in the regulation of cell morphology. Shk1 is a homolog of the mammalian PAK, of which there are three cloned isoforms (14). PAK kinases are activated by GTP-bound forms of the small GTP-binding proteins Rho, Rac, and Cdc42 and have been implicated in control of cytoskeletal rearrangement and cell morphology (11,14).
One consistent conclusion from previous studies of pICln function is that its overexpression induces, either directly or indirectly, the appearance of a chloride conductance (1,2,4). Given the biochemical characteristics of pICln, we favor the hypothesis that pICln is not a channel itself but rather part of a pathway either closely or remotely connected to a chloride current, possibly through cytoskeletal rearrangement. Indeed, actin co-immunoprecipitated with pICln, and a fraction of pICln associated with insoluble cytoskeletal elements (5). The protein identified in this report, IBP72, may provide a link between pICln and cytoskeletal rearrangement. Regulation of swelling-induced chloride channels is likely to involve cytoskeletal rearrangement (15,16). Also, recent evidence links p21Rho-dependent cytoskeletal reorganization to a swelling-induced chloride conductance (17). Whether pICln and IBP72 are linked to a swelling-induced chloride current (5-7), a volume-insensitive chloride conductance (2, 3), or both will be determined only by understanding all elements of the pathway. | 3,750.6 | 1998-05-01T00:00:00.000 | [
"Biology"
] |
Macrophages in Organ Transplantation
Current immunosuppressive therapy has led to excellent short-term survival rates in organ transplantation. However, long-term graft survival rates are suboptimal, and a vast number of allografts are gradually lost in the clinic. An increasing number of animal and clinical studies have demonstrated that monocytes and macrophages play a pivotal role in graft rejection, as these mononuclear phagocytic cells recognize alloantigens and trigger an inflammatory cascade that activates the adaptive immune response. Moreover, recent studies suggest that monocytes acquire a memory-like recall response that is associated with a potent immune response. This form of memory is called "trained immunity," and it is retained through epigenetic and metabolic changes in innate immune cells after exposure to particular ligands, which have a direct impact on allograft rejection. In this review article, we highlight the role of monocytes and macrophages in organ transplantation and summarize therapeutic approaches to promote tolerance through manipulation of monocytes and macrophages. These strategies may open new therapeutic opportunities to increase long-term transplant survival rates in the clinic.
INTRODUCTION
Organ transplantation is a life-saving strategy for thousands of patients with end-stage organ failure. Patients who find a compatible donor and receive a transplant are treated daily with multi-drug combinations designed to prevent rejection of the transplanted organ. Thanks to great progress in surgical techniques and immunosuppressive drugs, the percentage of short-term allograft rejection events has declined and 1-year allograft survival rates are above 90% (1). However, long-term graft survival rates remain suboptimal (2,3), arguing in favor of additional mechanisms of immune regulation associated with chronic allograft rejection that escape current immunosuppressive therapy.
To promote long-term organ transplant survival in the absence of chronic immunosuppressive therapy, transplant immunologists have historically focused on targeting the adaptive immune response. This is in response to early work on allograft rejection, which demonstrated that T cells are both necessary and sufficient for allograft rejection (4,5). More recent work has focused on developing novel tolerogenic protocols that target the adaptive immune response using methods that include depletion of effector T cells (6), induction of CD4+CD25+Foxp3+ regulatory T cells (7) and blockade of co-stimulatory signals (8). The latter was achieved using monoclonal antibodies (mAb) or immunoglobulins (Ig) against cell surface molecules (CD4 (9); CD4 + DST (10); CD3 (11); non-depleting CD3 (12); CD40L (13); CD40L + CD28 (14); LFA-1 + ICAM-1 (15); CD2 (16); CD2 + CD3 (17); LFA3-Ig (18); CD80 and CD86 (19); CD40 (20); and CTLA4-Ig (21)) (Figure 1A). While promising results have been obtained using these therapeutic approaches in experimental animal models, translation of these tolerance-promoting methodologies that target innate immune cells in the clinic remains largely elusive (Figure 1B). Considering that consistent induction of donor-specific unresponsiveness remains a difficult task in the clinic, there is a major unmet need for the development of additional immune regulatory programs to improve long-term allograft survival in clinical practice. Since innate immune cells participate in allograft recognition, developing therapeutic approaches that target myeloid cells in the clinic could open novel avenues to improve long-term transplantation outcomes.
It is widely accepted that allograft rejection is the result of a complex series of interactions between both the innate and the adaptive immune systems (22,23). Recent advances in our understanding of the mechanisms that determine the outcome of the immune response to transplanted organs have highlighted the importance of the innate immune response (24). This ancient part of the immune system precedes cellular and humoral immunity and consequentially regulates the function of the adaptive immune response. The innate immune response initiates inflammatory signals as a defense mechanism against pathogens and tissue injury. Non-self-inflammatory stimuli induced by exogenous infectious agents are considered pathogen-associated molecular patterns (PAMPs), while tissue injury is recognized by self-derived damage-associated molecular patterns (DAMPs). Both PAMPs and DAMPs are recognized through pattern recognition receptors (PRRs), which include Toll-like receptors (TLR), NOD-like receptors (NLR) and C-type lectin receptors. PRRs are expressed on the cell surface and in the cytoplasm of innate immune cells, including macrophages, and mediate intracellular signaling cascades leading to transcriptional expression of inflammatory mediators (25).
Macrophages belong to the mononuclear phagocyte system and have a dual role in allograft transplantation, either triggering an inflammatory response or inducing a tolerogenic environment (26). Local activation of macrophages through PRRs can lead to upregulation of major histocompatibility complex (MHC) and costimulatory molecules (signals 1 and 2), as well as the production of pro-inflammatory cytokines (signal 3), which result in T cell proliferation and differentiation (27,28). More recently, it was demonstrated that macrophages adopt a long-term proinflammatory phenotype following an initial PRR stimulation of the C-type lectin receptor dectin-1, which results in a non-specific memory of the innate immune cells mediated by epigenetic reprogramming (29). This novel macrophage functional state has been termed trained immunity and is associated with proinflammatory cytokine production (TNF-α and IL-6) after a second PRR stimulatory signal with TLR4 agonists (30). Understanding the immune biology of trained immunity has important implications for the design of novel therapeutic approaches. Preventing the accumulation of trained macrophages while promoting the development of regulatory macrophages represents an attractive, innovative approach to promote organ transplant acceptance. Herein, we highlight recent studies on the role of macrophages in organ transplantation and summarize the therapeutic potential of targeting macrophages for the induction of tolerance.
homeostasis (31,32). Monocytes originate from myeloid progenitor cells in the bone marrow and circulate in the blood for several days before entering the tissue and differentiating into macrophages (33,34). Monocyte-derived macrophages also have key roles in clearing pathogens and cell debris, antigen presentation and initiating adaptive immune responses (35). To do so, macrophages acquire specialized functions according to the stimuli present in the environment. In relation to their activation, Mills et al. proposed two phenotypes, classical (M1) versus alternative (M2), in analogy to T helper cells Th1 and Th2 (36,37). M1/M2 macrophages are functionally distinct, with M1 macrophages shifted toward nitric oxide (NO) and citrulline secretion, while M2 macrophages are shifted toward ornithine and polyamine production (36,37). Consequentially, M1-derived NO inhibits T cell proliferation and exhibits a potent microbicidal activity, while M2-derived ornithine promotes cell proliferation and repair through polyamine and collagen synthesis (38)(39)(40). Over the past few years, this nomenclature has been a matter of debate due to the difficulty of including within the M1 and M2 classification the multiple phenotypes adopted by macrophages. While in vitro activation of macrophages allowed us to better understand the developmental requirements of different macrophage subsets, in vivo studies are more complicated because the stimuli macrophages encounter are multiple, complex and occur simultaneously (31,41,42).
In contrast, M2-polarized macrophages, also known as alternatively activated macrophages, are important in tissue repair. The M2 phenotype contains different macrophage populations with separate functions, which can be polarized by several stimulatory factors. Based on the stimuli and transcriptional changes, Mantovani and Roszer divided the M2 phenotype into M2a, M2b, M2c and M2d subtypes (58,59). The mutual characteristics of these subtypes are high secretion of IL-10 and low IL-12 levels, in conjunction with the generation of arginase-1 (Arg-1). M2a macrophages are induced by IL-4 and IL-13, express high levels of mannose receptor (CD206) and secrete pro-fibrotic factors, such as TGF-β, to contribute towards tissue repair (60)(61)(62). M2b macrophages have phenotypical and functional similarities with regulatory macrophages. They are activated by TLR or IL-1R agonists and produce both pro- and anti-inflammatory cytokines, such as TNF-α, IL-1β, IL-6 and IL-10 (41,63). M2c macrophages, also known as inactivated macrophages, are induced by IL-10 and display anti-inflammatory functions. M2c secrete IL-10 and TGF-β (59,64) and are efficient at phagocytosis and elimination of apoptotic cells (65). M2d macrophages have phenotypical and functional similarities with tumor-associated macrophages (TAMs). They are induced by A2 adenosine receptor (A2R) and IL-6 (66-68) and secrete IL-10, TGF-β and vascular endothelial growth factor (VEGF) to favor angiogenesis and cancer metastasis (68)(69)(70). The need to update the M1/M2 classification has been evidenced in numerous studies addressing signaling pathways and genetic signatures associated with M1/M2 polarization (71)(72)(73)(74)(75). M1 and M2 share many genes implicated in cellular functions, such as phagocytosis, metabolism and cytokine production. IL-8, Tissue Factor and leukocyte extravasation signaling pathways are shared among M1 and alternatively activated M2 (76,77). On the other hand, recent works show specific signatures for M1 and M2 (78). For example, Jablonski et al. identified a new set of common and distinct M1 and M2 macrophage genes. They showed that CD38, Gpr18 and Fpr2 were M1-specific while c-Myc and Egr2 were M2-specific genes, proposing a new way to define both states of polarization based on their phenotypes: CD38+Egr2− (M1 macrophages) and CD38−Egr2+ (M2 macrophages) (71). In addition, Buscher et al. demonstrated a strong gene-environment interaction in activated macrophages using a hybrid mouse diversity panel (HMDP). They showed different genetic signatures associated with lipopolysaccharide (LPS) responsiveness among a wide spectrum of macrophage phenotypes from several different inbred strains (72). Recently, Orecchioni et al. compared the transcriptomes obtained by Jablonski (in vitro) and Buscher (in vivo) to define differential signatures present in M1/M2 macrophages (79) and concluded that Fcγ receptor-mediated phagocytosis and MAPK, JAK1 and JAK3 signaling are upregulated in M1 upon LPS activation. These pathways control several inflammatory genes that allow the macrophages to exhibit their pro-inflammatory properties (80,81). In contrast, the main pathways specifically expressed in M2 are adipogenesis, fatty acid synthesis and integrin signaling pathways, which are important for tissue infiltration, removal of necrotic tissue and initiation of tissue regeneration (82).
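For readers who want to see how the Jablonski-style definition translates into a concrete classification rule, the sketch below assigns cells to M1-like or M2-like states from CD38 and Egr2 signals. The cutoffs, example values and the small data structure are assumptions for illustration; actual gating would be anchored to appropriate experimental controls.

```python
# Minimal sketch of the marker-based definition described above
# (CD38+Egr2- ~ M1, CD38-Egr2+ ~ M2). Thresholds and values are assumed
# purely for illustration, not taken from the cited studies.
from dataclasses import dataclass

@dataclass
class Macrophage:
    cd38: float   # CD38 signal, arbitrary units
    egr2: float   # Egr2 signal, arbitrary units

def classify(cell: Macrophage, cd38_cutoff: float = 1.0, egr2_cutoff: float = 1.0) -> str:
    cd38_pos = cell.cd38 >= cd38_cutoff
    egr2_pos = cell.egr2 >= egr2_cutoff
    if cd38_pos and not egr2_pos:
        return "M1-like (CD38+Egr2-)"
    if egr2_pos and not cd38_pos:
        return "M2-like (CD38-Egr2+)"
    return "unclassified"   # double-positive/negative cells fall outside this scheme

cells = [Macrophage(2.4, 0.3), Macrophage(0.2, 1.8), Macrophage(1.5, 1.6)]
print([classify(c) for c in cells])
```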
While bone marrow monocytes are mobilized early after transplantation and recipient monocyte-derived macrophages represent the majority of macrophages in the transplanted organ (83), it is important to acknowledge the immune regulatory role of tissue-resident donor macrophages. Tissue-resident macrophages (TRMs) arise from fetal liver or yolk-sac progenitors and are phenotypically distinct from monocyte-derived macrophages in steady-state conditions (84). While TRMs are primarily characterized by the expression of CD11b, F4/80, CD64, CD68 and MerTK and low levels of MHC-II on the cell surface in mice, monocyte-derived macrophages are characterized by CD11b, CD209, CD64 and MerTK expression on the cell surface (85). TRMs are functionally considered to be immunosuppressive because of their fundamental roles in maintaining homeostasis, inhibiting T cell activation and promoting the resolution of inflammation (75,86). TRMs are divided into subpopulations according to their anatomical sites and functionality. For instance, Kupffer cells in the liver (87,88) and alveolar macrophages in the lung (89) exhibit critical roles in generating CD4 regulatory T cells (Treg) and promoting tolerance. In the context of organ transplantation, Terry Strom and colleagues identified a subset of donor TRMs that express high levels of the phosphatidylserine receptor TIM-4 and CD169. The study demonstrated that this population of macrophages migrates to the draining lymph nodes following oxidative stress during ischemia-reperfusion injury (IRI) associated with transplantation and induces antigen-stimulated Treg. Interestingly, these M2-like TIM-4+CD169+ donor TRMs were demonstrated to be immunoregulatory and to promote engraftment in a murine cardiac allograft model (90). Contrary to this view, it has been suggested that ischemia/reperfusion primes innate immune cells for an excessive response to a subsequent inflammatory stimulus, which promotes organ injury. In the lung, alveolar macrophages under shock/resuscitation events increase their TLR4 expression on the cell surface due to oxidative stress (91). As a result, alveolar macrophages are primed and exhibit an exaggerated LPS response following a secondary stimulation. The source of the endotoxin is not clear, but it has been suggested that LPS may leak from the gut under ischemia/reperfusion conditions (92). This has major implications in lung transplantation, as oxidative stress induced during IRI, coupled with an increase in endotoxin levels in the donor organ, is associated with increased neutrophil recruitment as well as physiological markers of allograft injury mediated by tissue-resident alveolar macrophages through TLR4/MyD88-dependent pathways (93). Consequentially, presence of endotoxin in the lung predisposes the donor organ to the fatal syndrome of primary graft dysfunction (PGD) and compromises the survival of the allograft following lung transplantation. Overall, the data suggest that while TRMs present in donor organs may favor immunoregulatory mechanisms that promote allograft engraftment (94), their suppressive activity may be reversed toward a pro-inflammatory functional state (95), compromising organ transplant survival.
Macrophages and Rejection
Macrophage accumulation has long been recognized as a feature of allograft rejection (96). The total number of graft-infiltrating macrophages correlates with worse clinical outcomes (97,98) and with acute allograft dysfunction in kidney transplant recipients (99). Early studies from Hancock and colleagues demonstrated that macrophages represent the majority of cells that infiltrate an allograft during severe rejection episodes (100).
Using immunohistochemical approaches, their study reported that macrophages represent 60% of graft-infiltrating cells in severe rejection, 52% in mild rejection and 38% in moderate rejection (100). Looking at the patterns of graft-infiltrating cells during the first days after transplantation, various human studies have shown that an initial accumulation of monocytic cells occurs in all grafts (rejecting and non-rejecting) (101) and that infiltration of kidney allografts by macrophages within the first week of transplantation is associated with worse clinical outcomes (102). Similarly, Schreiner et al. showed an initial accumulation of macrophages in the first 24-48 h after transplantation for both donor kidney allografts and isografts, with a marked increase in monocytes/macrophages being observed only in allografts 96 h after engraftment. As such, it is not surprising that depletion of macrophages has been used to attenuate graft injury and decrease inflammation in acute rejection models (103,104). To this end, Jose et al., by depleting macrophages with liposomal clodronate in a rat renal transplant model, showed the contribution of macrophages to tissue damage during acute rejection (105). In another study, Ma et al. demonstrated that the depletion of monocytes/macrophages with a c-fms kinase inhibitor resulted in less renal allograft dysfunction and structural damage compared to vehicle-treated rats (106). Data from our laboratory demonstrated that, early after transplantation, M1-like monocytic precursors leave the bone marrow and infiltrate heart allografts in transplanted mice (107). Importantly, while M1-like monocytes rapidly convert to M2-like regulatory macrophages in the allografts of transplant recipients under costimulatory blockade treatment with anti-CD40L mAb, untreated recipients maintain M1-like inflammatory macrophages in the rejecting allografts (108). Interestingly, depletion of recipient CD11b+ cells using CD11b-DTR mice as recipients prevented the induction of tolerance. This suggests that initial events that regulate macrophage polarization (M1 to M2), rather than depletion, may control the fate of the immune response, since depletion of macrophages may affect the protective role of wound healing and tissue remodeling macrophages that are required to restore homeostasis in the donor organ after the transplant surgical procedure.
Despite the significant progress in determining the roles of macrophages in acute graft rejection, the mechanisms by which macrophages mediate tissue injury are not completely understood. One of the suggested mechanisms by which macrophages mediate graft loss is through the production of nitric oxide, contributing to endothelial cell cytotoxicity and tubular injury (103). Acute rejection in heart transplant recipients was associated with severe fibrosis in 1-year biopsies, which was associated with higher numbers of CD68+CD163+ M2 macrophages compared to barely present CD68+CD80+ M1 macrophages in the graft (109). Similarly, infiltrating macrophages in renal allografts 1 year after transplantation exhibited an M2 phenotype with CD68+CD206+ dual staining (110). It has also been suggested that CD16+ monocytes might be responsible for the development of acute allograft rejection after liver transplantation, which may be associated with inhibition of Treg cells (111). Furthermore, whole-genome transcriptome analysis of biopsy samples identified an inflammatory macrophage polarization-specific gene signature, which is upregulated during acute rejection (112). In fact, the degree of macrophage infiltration correlates with increased incidence of allograft rejection (34). Consistent with the increased macrophage/monocyte infiltration, the level of macrophage colony-stimulating factor (M-CSF), a key cytokine in monocyte recruitment, is elevated in the graft during clinical rejection (113). Moreover, activated monocytes are detectable in the circulation before the clinical symptoms of acute rejection occur (114).
Gradual replacement with recipient-derived macrophages over time leads to chronic rejection through mechanisms that involve cell death, fibrosis, smooth muscle proliferation and cytokine-mediated inflammation (115). Although inflammation is supposed to be short-lived and self-limited, acute inflammation can sometimes shift toward a long-lived and self-perpetuating chronic inflammatory response (116). Chronic inflammation develops within months to years after organ transplantation and is the major cause of long-term graft loss (115). The main feature of chronic rejection is obliterative vasculopathy, often accompanied by parenchymal fibrosis, which results in ischemia, cell death and progressive graft failure (115,117). Chronic rejection is characterized by infiltrating T cells and macrophages, although other cellular compartments, including natural killer cells, dendritic cells, B cells and plasma cells, also play a role in chronic rejection (116). However, the high number of infiltrating macrophages in the allograft, as well as their potential to produce cytokines and growth factors, suggests a crucial role for macrophages as end-effector cells in a final common pathway toward cardiac allograft vasculopathy (CAV), independent of T-cell or B-cell alloreactivity (118).
Alternatively activated M2-type macrophages are the major macrophage population localized in areas of interstitial fibrosis in chronic kidney allograft injury, and their accumulation correlates with the severity of fibrosis and graft rejection (110,119). M2 polarization is considered to be anti-inflammatory, immunoregulatory and important for tissue repair and regeneration. However, during chronic rejection, the pro-fibrotic function of M2-polarized macrophages promotes interstitial fibrosis and contributes to graft failure (120). Graft-infiltrating macrophages during chronic rejection are a heterogeneous population expressing markers that are associated with M1 inflammation but also with an M2 immunoregulatory phenotype. It is possible, though, that immunoregulatory M2 cells are derived from M1 cells in the graft when the pro-inflammatory microenvironment subsides over time. The predominance of a certain macrophage polarization state in the graft might determine the clinical success of the transplantation. In human kidney transplant recipients, a higher M2 ratio is associated with chronic glomerular injury and poorer graft function (121). Despite the apparent predominant role of M2 macrophages in chronic graft rejection, M1 macrophages might critically contribute with the production of eicosanoids, proteases, ROS and NO (122). To prevent chronic rejection, Liu et al. investigated the effect of macrophage depletion for a defined period of time in a rat allogeneic heart transplant model (123). Their results suggested that macrophage depletion after heart transplantation could alleviate chronic rejection through M2 polarization of regenerated macrophages, as well as the alteration of expression levels of IFN-γ, TNF-α, MCP-1 and IL-10 (123). Related approaches that deplete macrophages or block monocyte recruitment by targeting CCR- and CXCR-mediated chemotaxis also reduce vasculopathy (118,124,125).
The granulocyte-macrophage colony-stimulating factor (GM-CSF) and the macrophage colony-stimulating factor (M-CSF) are among the known factors that regulate the differentiation, proliferation, and function of tissue macrophages and determine the outcome of the immune response (126). While GM-CSF induces a state in which macrophages are primed for M1, M-CSF induces M2 macrophage polarization (125,127). In a recent study, our group elucidated the molecular mechanisms behind CSF-1-mediated macrophage polarization. Our results showed that graft-infiltrating neutrophils in tolerized recipient allografts secreted higher levels of M-CSF compared to neutrophils from untreated rejecting mice, suggesting a potential role of M-CSF-producing neutrophils in mediating regulatory M2 macrophage accumulation in the transplanted allograft (128).
Manipulation of M1/M2 polarization represents another therapeutic approach to prevent allograft rejection. Xian Li and colleagues demonstrated that M1 and M2 macrophage polarization are dependent on tumor-necrosis factor receptor-associated factor 6 (TRAF6) and mammalian target of rapamycin (mTOR), respectively (129). While recipient mice deficient for TRAF6 in macrophages lack M1 macrophage accumulation yet still develop severe transplant vasculopathy, deletion of mTOR prevents accumulation of M2 macrophages and results in long-term allograft survival without histological indications of chronic rejection, emphasizing the role of M2-polarized macrophages in chronic allograft rejection (129). The Xian Li laboratory further investigated differences between M1 and M2 macrophages and identified the adenosine triphosphate (ATP)-gated ion channel P2x7r as a marker of M2 cells (130). Interestingly, blockade of P2x7r using oxidized ATP prevented M2 polarization in vitro and graft infiltration in vivo, leading to long-term heart allograft survival. This study demonstrated that pharmaceutical targeting of M2 graft-infiltrating macrophages during chronic rejection is a promising strategy to prolong graft survival. Consistent with this view, specific deletion of RhoA or inhibition of ROCK kinases with a combination of Y27632, Fasudil and Azaindole inhibited vessel occlusion and tissue fibrosis, decreased M2 macrophage infiltration and abrogated chronic rejection of cardiac allografts (131,132).
Besides their M1/M2 pro-inflammatory and immunoregulatory functions, it is also possible that macrophages contribute to graft rejection by additional mechanisms. Macrophages in biopsy specimens from patients with active chronic renal allograft rejection co-expressed the macrophage marker CD68 as well as the myofibroblast marker α-smooth muscle actin (α-SMA), suggesting that macrophages undergo a macrophage-to-myofibroblast transition leading to interstitial fibrosis and reduced graft function (133). Similarly, cells co-expressing macrophage and α-SMA markers were found in allografts in mice. These cells derived from recipient bone marrow cells, thus infiltrating the graft, and also co-expressed the M2 marker CD206. Further mechanistic studies identified a crucial role for Smad3 in the macrophage-to-myofibroblast transition (133).
One key feature of circulating monocytes is their ability to migrate to the inflamed tissue and to initiate the immune response against non-self antigens. Fadi Lakkis and colleagues reported that F4/80−Ly6C+ neutrophils, F4/80intLy6C+ monocytes and F4/80hiLy6C− macrophages rapidly infiltrate sites of inflammation and elicit an allospecific immune response. Remarkably, in contrast to the allogeneic non-self recognition by T cells, which recognize MHC molecules, macrophages were shown to recognize non-MHC molecules (134). Using B6-OVA (H-2b) and B6F1-OVA (H-2b/d) donor heart grafts transplanted into B6 Rag−/−γc−/− (H-2b) recipients, this group further demonstrated that only monocytes and DC from B6 Rag−/−γc−/− recipient mice receiving B6F1-OVA (but not B6-OVA) grafts were able to promote acute cellular rejection upon transfer of OVA antigen-specific CD4+ OT-II cells. The Lakkis laboratory went on to demonstrate that monocytes and macrophages detect the polymorphic molecule signal regulatory protein α (SIRPα) on donor cells to initiate the innate alloresponse (135). SIRPα is a regulatory immunoglobulin superfamily receptor and a key member of the "do-not-eat-me" signaling pathway that allows cells to avoid immune responses by phagocytes. SIRPα is expressed by myeloid cells (136) and myeloid-derived suppressor cells (MDSC) that accumulate after organ transplantation and mediate allograft tolerance (137). Mechanistically, engagement of SIRPα with its ubiquitous ligand CD47 delivers inhibitory signals and suppresses the phagocytic function and inflammatory signaling of macrophages (138)(139)(140). In the context of organ transplantation, the Lakkis laboratory demonstrated that blocking SIRPα or CD47 with monoclonal antibodies induced graft dysfunction and rejection. Blocking of the SIRPα-CD47 interaction results in MDSC differentiation into myeloid cells overexpressing MHC class II and the CD86 costimulatory molecule and increased secretion of macrophage-recruiting chemokines, leading to loss of tolerance (141). However, a donor allograft with a SIRPα molecule that is mismatched with CD47 leads to monocytic cell activation and initiation of the immune response to the transplanted organ (135). More recently, the Lakkis laboratory also demonstrated that polymorphisms in the SIRPα gene were required to induce monocyte memory against non-self MHC molecules. In this study, it was demonstrated that deleting paired immunoglobulin-like receptor-A (PIR-A) in the recipient or blocking PIR-A binding to donor MHC-I with a PIR-A3/Fc fusion inhibits alloantigen-specific memory of myeloid cells and promotes indefinite allograft survival in murine kidney and heart transplant models (142). Overall, these studies provide compelling evidence that monocytes initiate the immune response, demonstrate the critical role of SIRPα polymorphic differences in the activation of graft-reactive macrophages, and indicate that the immunological memory of innate myeloid cells can potentially be targeted to promote the induction of transplantation tolerance.
Macrophages and Tolerance
The participation of graft-infiltrating macrophages in the rapid, stereotypical inflammatory reactions that cause secondary tissue damage during ischemia-reperfusion injury (143) and acute rejection episodes (144) has long been recognized. However, we are also beginning to understand the vital role of suppressor macrophages in preventing rejection and re-establishing tissue homeostasis after transplantation (145). Given their influence over transplant outcome, manipulating the balance between graft-protective and graft-destructive macrophage activities represents an attractive therapeutic strategy (146). Various approaches to controlling macrophage responses have been proposed, including adoptive cell therapy with regulatory macrophages (Mregs). In previous work, it was shown that treatment with ex vivo-generated CD11b+Ly6C−/lowLy6G−CD169+ Mregs could prolong fully allogeneic heart graft survival in non-immunosuppressed mice (147). Mechanistically, Mregs can directly suppress T cell proliferation and survival through an iNOS-dependent pathway and the secretion of anti-inflammatory factors (148). More recently, Riquelme et al. demonstrated that Mregs induce TIGIT+FoxP3+ Tregs that produce IL-10 and non-specifically mediate bystander suppression of allo-stimulated CD4+ and CD8+ T cells (149). An equivalent population of human CD11b+CD115+DC-SIGN+ Mregs arises from peripheral blood CD14+CD16− monocytes that are cultured with M-CSF for 6 days prior to stimulation with IFN-γ (150). During this period, a gradual down-regulation of CD14 is observed, which may recapitulate the physiological transition of human M1-like CD14+CD16− inflammatory monocytes into M2-like CD14−/lowCD16+ resident macrophages. Interestingly, the presence of human Mregs correlates with an increase in TIGIT+FoxP3+ Treg in kidney transplant recipients (149), which is consistent with the preclinical experiments described above. In the clinical setting, Mregs are currently being investigated in humans in the ONEmreg12 trial, a phase-I/II study to minimize maintenance immunosuppression in kidney transplant recipients (151). This and previous clinical studies suggest Mregs could be used as a cell-based tolerance-promoting therapy, and for this purpose a good manufacturing practice-compliant production process for manufacturing an Mreg-containing cell product, known as "Mreg_UKR," has been established (152).
Suppressive macrophages can also be generated in recipient mice treated with costimulatory blockade. Our laboratory demonstrated that anti-CD40L mAb favors accumulation of CD11b+CD115+DC-SIGN+ macrophages in the allograft, which promotes the expansion of Treg while inhibiting CD8+ T cell accumulation (108). Mechanistically, DC-SIGN macrophages produce regulatory IL-10, and their in vivo accumulation is controlled by M-CSF, which is consistent with the Mreg development requirements, phenotype, and function described by the James Hutchinson laboratory above. Besides costimulatory blockade, nanoparticles have also been used to deliver immune regulatory agents to monocytes and macrophages in vivo (153). For example, delivery of mycophenolic acid (MPA) by means of PLGA nanoparticles (NP) results in a significant prolongation of allograft survival compared to conventional MPA treatment in a murine model of skin transplantation. Mechanistically, Daniel Goldstein and colleagues demonstrated that uptake of NP-MPA by myeloid cells leads to upregulation of programmed death ligand-1 (PD-L1), which decreases their potential to prime alloreactive T cells and is associated with prolonged allograft survival (154). More recently, our laboratory described a promising strategy to induce long-term allograft survival through in vivo targeting of macrophages with nanobiologics. Our laboratory used an effective in vivo platform to deliver an mTOR inhibitor (mTORi) and an NF-κB inhibitor (TRAF6i) via high-density lipoprotein (HDL) nanobiologics in a murine vascularized heart transplant model. The HDL-based nanobiologics preferentially targeted myeloid cells and promoted M2 regulatory macrophage polarization, which resulted in prevention of alloreactive CD8+ T cell-mediated immunity and expansion of Treg (155). As a result, we believe that nanobiologics-based delivery of immunotherapeutic agents has great potential in organ transplantation, as it improves pharmacokinetics, minimizes off-target effects, maximizes dosage at the site of action, and can be used for controlled release in a spatiotemporal manner (156). Taken together, it has become evident that the in vivo manipulation of macrophages through the use of nanobiologics represents a promising strategy for long-term allograft survival.
Epigenetic Regulation of Macrophages and Innate Immune Memory
Macrophages are highly plastic cells that adopt M1 and M2 phenotypes by integrating their preexisting history with surrounding environmental signals into a distinct transcriptional program. This transcriptional program must also keep their phenotype distinct from that of other myeloid cells, and it is controlled by various epigenetic processes, including DNA methylation, histone modification and the expression of non-coding RNAs. These modifications of the epigenetic landscape lead to either compaction or opening of the chromatin and altered binding of DNA-binding proteins, and are associated with gene activation or repression. This is the basis of trained immunity, a new concept in the field, which postulates that innate immune cells can retain a memory of certain primary stimuli via epigenetic mechanisms, thus potentially priming them to initiate a stronger response upon a secondary stimulus.
The term "epigenetics" was first pioneered by C.H. Waddington, seeking to explain how phenotypes could be explained not solely by genetic inheritance (157). He later then proposed the concept of the "epigenetic landscape," which posited that as cells differentiate, they become restricted in their possible fates (158). This concept of the epigenetic landscape was further elaborated on by Thomas Jenuwein and David Allis with their proposal of a "nucleosome code," an extension of the "histone code" (159,160). In their "nucleosome code" hypothesis, they propose that certain covalent modifications to the tails of histones in a region of DNA ultimately result in regional compaction or opening of chromatin. How closed or opened the chromatin in a particular region is then ultimately governs the ability of DNAbinding proteins and ultimately RNA Polymerase from binding to certain genes and subsequently transcribing. The histone modifications that encourage opening of the chromatin include H3K4me3, H3K9ac, and H3K27ac, weaken the grip tail of histone 3 (H3) to the DNA allowing other DNA-binding proteins to bind, while repressive histone modifications including H3K9me3, H3K27me3, and H3K36me3 enhance the grip of H3 to the DNA promote the opposite effect. How protected the DNA is by chromatin opening or compaction, as a result of these histone modifications regionally, ultimately mediates the accessibility of RNA Polymerase to specific sites, thus governing gene activation or gene repression.
The link between an external stimulus to macrophages and modification of the epigenetic landscape, establishing the importance of the epigenome in macrophages, was first demonstrated in 1999, when LPS stimulation was shown to induce IL-12 p40 production through the remodeling of nucleosomes positioned at its promoter (161). This process was later shown to be TLR-dependent via acetylation of residues on histone 3 and histone 4 typically associated with open chromatin. On a genome-wide level, TLR activation has been shown to induce a program in which the "brakes" on inflammatory gene expression are withdrawn by removing repressive histone modifications. Specifically, it was shown that the H3K27me3 demethylase JMJD3 is induced by LPS stimulation in macrophages and thus promotes an inflammatory gene program (162). Conversely, histone modifications pertaining to gene activation, modifications that lessen the grip of nucleosomes on the DNA, are added at specific loci upon LPS stimulation by various epigenetic writers, including the histone methyltransferase myeloid/lymphoid leukemia (MLL) (163). The fact that macrophages' epigenetic architecture is easily changeable upon external stimulation should not be surprising, given that large changes in histone methylation and acetylation patterns occur in the transition from monocytes to macrophages alone (29). In summary, these early studies made it clear that significant epigenetic changes were happening in macrophages.
Prior to stimulation with an exogenous substance, the epigenetic landscape of monocytes and macrophages must be properly established to develop their distinct phenotype. This is done by the lineage-determining transcription factors (LDTFs) PU.1 and the C/EBP family of transcription factors, which bind to macrophage-specific genes and enhancers and are critical for proper monocyte and macrophage development (164). These transcription factors are thought to prime these sites, including those of inflammatory genes, as suggested by the fact that these loci are marked by the presence of PU.1, H3K4me1, and open chromatin. However, to keep the brakes on the expression of inflammatory genes, these same loci are decorated with repressive histone marks that promote chromatin compaction, including H3K9me3, H3K27me3, and H4K20me3, and are bound by co-repressors (165)(166)(167)(168). Only upon exogenous stimulation are these brakes released by appropriate epigenetic erasers acting on the enhancers and promoters of inflammatory genes, while activating histone marks are concurrently added by appropriate epigenetic writers.
Trained immunity is a relatively new and compelling concept in immunology whose foundation is primarily epigenetic. It posits that innate immune cells can retain a memory after a primary stimulus and, after a return to a resting phase, enact a heightened response upon a secondary stimulus (169). The concept was first proposed in 2011 as a means to explain the phenomenon in vertebrates whereby vaccinations or infections, including BCG vaccination and C. albicans infection, confer protection against unrelated stimuli in a manner independent of the adaptive immune system (170). Soon after, the mechanisms underlying these memory phenomena were determined to be based on epigenetic and metabolic reprogramming, with the two being intertwined (29,(171)(172)(173). Specifically, significant H3K4me3 deposition upon either BCG vaccination or β-glucan stimulation was found at the promoters of inflammatory genes, including TNF-α and IL-6, and of glycolysis genes, including hexokinase and phosphofructokinase, thus establishing a memory in macrophages. This process was shown to be mTOR-dependent (172,173), and preventing epigenetic changes through the use of mTOR inhibitors inhibited the shift in metabolism toward glycolysis and the acquisition of H3K4me3 at key inflammatory gene promoters.
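A common way to quantify the trained state described above is to compare promoter H3K4me3 signal between trained and naive macrophages at inflammatory and glycolytic gene promoters. The sketch below shows that arithmetic only; the gene symbols are shorthand for the genes named in the text, and the coverage numbers are placeholders, not data from the cited studies.

```python
# Minimal sketch: log2 fold-change of normalized promoter H3K4me3 coverage,
# trained vs. naive macrophages, at the gene classes named in the text.
# Gene symbols and all numbers are illustrative placeholders.
import math

def log2_fc(trained: float, naive: float, pseudo: float = 1.0) -> float:
    """log2((trained + pseudo) / (naive + pseudo)); pseudo-count avoids division by zero."""
    return math.log2((trained + pseudo) / (naive + pseudo))

promoter_coverage = {           # (trained, naive) normalized ChIP signal
    "TNF":  (120.0, 35.0),      # TNF-alpha
    "IL6":  (90.0, 20.0),
    "HK2":  (60.0, 18.0),       # a hexokinase
    "PFKP": (55.0, 22.0),       # a phosphofructokinase
}

for gene, (trained, naive) in promoter_coverage.items():
    print(f"{gene}: log2 fold-change = {log2_fc(trained, naive):.2f}")
```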
With regard to organ transplantation, Fadi Lakkis and colleagues described that monocytes are able to mount recall responses to skin grafts, exhibiting memory features normally attributed to adaptive immune cells. Using BALB/c Rag−/− mice as recipients of BALB/c (H-2d), allogeneic B6 (H-2b) and "third-party" C3H (H-2k) donor skin grafts rechallenged with B6 splenocytes 1 week after engraftment, the study demonstrated that monocytes were able to mount an inflammatory response 1 week after transplantation independently of the adaptive immune system (134). Interestingly, BALB/c recipients mounted an allo-dependent response to allogeneic B6, but also to "third-party" C3H (134). Although the third-party response was statistically lower than the allo-dependent response, the data suggest that monocytes are able to respond to non-specific recall stimuli, a feature of trained immunity. Challenging the view of non-specific responses mediated by macrophages, studies from Xian Li and colleagues reported that reconstituting Rag−/−γc−/− hosts with syngeneic B6 CD4+ T cells and donor BALB/c cells results in in vivo killing of donor BALB/c cells transferred 2 weeks after reconstitution but does not result in the rejection of "third-party" C3H cells (174). This argues in favor of further investigating epigenetic mechanisms of macrophage recall processes and the potential implication of SIRPα in these processes, as described above. Remarkably, this study demonstrated that macrophage-mediated rejection upon recall can be prevented with CD40/CD40L costimulatory blockade during the first stimulus. This suggests that anti-CD40L mAb treatment may prevent the accumulation of memory-like macrophages in the donor allograft early after transplantation.
Inhibition of trained macrophages in the allograft can be achieved by targeting the mTOR pathway in myeloid cells in vivo (155). We recently demonstrated that vimentin promotes macrophage training via dectin-1 signaling, which results in increased deposition of H3K4me3 at the promoters of TNF-α and IL-6 upon a secondary stimulation with HMGB1, another protein highly expressed in the donor allograft. The same trend in epigenetic changes occurs in vivo in an experimental mouse model of heart transplantation. Interestingly, inhibition of trained immunity with mTORi-HDL nanobiologics promoted long-term allograft survival via Treg expansion and inhibition of cytotoxic T cells.
In addition to targeting trained immunity in organ transplantation via the administration of mTORi-HDL nanoparticles, there is the potential use of small molecules that inhibit epigenetic-related proteins, including HDAC inhibitors (HDACi) and BET inhibitors (BETi). HDACi are thought to primarily inhibit histone deacetylation, thus promoting gene expression at specific loci, while BETi inhibit the binding of BET proteins to acetylated regions of the genome, which normally promote gene expression at specific loci (175). However, reports specifically implicating their use in the context of transplantation have been few. With regard to BET inhibitors, a synthetic compound, I-BET, was developed and shown to repress the expression of LPS-inducible genes in bone marrow-derived macrophages (BMDM) ex vivo (176). The importance of BET proteins in aiding the expression of inflammatory genes in macrophages was established through the use of brd2 lo mice and silencing of BET proteins through siRNA studies (177). With regard to the use of an HDACi to prevent allograft rejection, an inhibitor of HDAC6, KA1010, was shown to reduce skin allograft rejection through mechanisms that involved a reduction in CD4 T cells with an increase in the Treg population (178). The effect of HDACi on macrophages, on the other hand, is not clear, and in vitro experiments on BMDM treated with trichostatin A (TSA), a class I and II HDACi, displayed a phenotype favoring progenitor-like myeloid cells rather than differentiated macrophages. These macrophages displayed a mixed M1/M2 phenotype according to cytokine and chemokine secretion analysis, suggesting that treatment with HDACi alone may not be an advisable mode of therapeutic treatment (179). On the contrary, a study by Thangavel and colleagues demonstrated that combinatorial treatment of TSA with 5-aza-2′-deoxycytidine (Aza), a DNA methyltransferase (DNMT) inhibitor, was able to promote an M2 phenotype in macrophages and to reduce inflammation in an acute lung injury model (180). Overall, while drugs targeting epigenetic modifiers including HDACs, BET proteins and DNMTs do hold promise as therapeutic approaches that promote long-term allograft survival in organ transplantation, it appears that successful use of these drugs to prevent graft rejection will require their use in combination with other drugs.
CONCLUDING REMARKS AND FUTURE PERSPECTIVES
Organ transplantation is a life-saving strategy for terminal and irreversible organ failure. While solid organ transplantation has achieved excellent short-term graft survival rates, long-term survival of organ transplants remains suboptimal. The pathophysiology of graft rejection is multifactorial, and growing evidence suggests that macrophages are key mediators of acute and chronic graft loss through the secretion of inflammatory mediators that activate the adaptive alloimmune response. Historically, accumulation of macrophages in the donor organ has been associated with transplant rejection (181,182), as allogeneic antigen-primed macrophages mediate allograft rejection (183). However, not all macrophages are associated with graft loss. Different subpopulations of macrophages regulate the allograft immune response through protective mechanisms based on their phenotype and function. As a result, the identification of the in vivo signaling pathways that govern macrophage polarization and modulate their function may provide new therapeutic targets that promote allograft survival.
Therapeutic agents that regulate macrophage polarization and promote the accumulation of regulatory macrophages are potential candidates to promote long-term allograft survival in transplant recipients. In addition, identification of previously unrecognized pathways associated with chronic allograft rejection may offer new therapeutic avenues for intervention. Classically, the innate immune response has been defined as a non-specific rapid response, followed by a later onset of antigen-specific adaptive immune responses. However, accumulating findings have challenged the view that innate immune cells do not possess memory, leading to the concept of innate immune memory and trained immunity. This concept postulates that stimulated innate immune cells are primed to recognize specific ligands and secrete specific cytokines more rapidly upon a second stimulus. This type of memory is retained through epigenetic and metabolic changes in innate immune cells exposed to particular ligands. As a result, therapeutic targeting of trained immunity represents a novel treatment paradigm to prevent allograft rejection. Thus, a comprehensive understanding of the immunobiology of different macrophage subsets is crucial to develop novel strategies that promote long-term allograft survival in transplant recipients and to translate macrophage-targeted therapeutic strategies to the clinic.
AUTHOR CONTRIBUTIONS
All authors contributed to the article and approved the submitted version.
FUNDING
The authors' work is supported by National Institutes of Health grants R01 AI139623AI (JO), and NIH-T32CA078207 (FO). | 9,282 | 2020-11-30T00:00:00.000 | [
"Medicine",
"Biology"
] |
Efficient Discretization of Movement Kernels for Spatiotemporal Capture–Recapture
Spatially explicit capture–recapture (SECR) models treat detection probability as a function of the distance between each animal and its notional activity centre. Open-population variants of these models (open SECR) are increasingly used to estimate the vital rates (survival and recruitment) of spatial populations subject to turnover between sampling times. If activity centres also move between sampling times then modelling the movement can reduce bias in estimates of vital rates. The usual movement model in open SECR is a random walk with step length governed by a probability kernel. Space is discretized in open SECR for computational convenience, and in some implementations this includes truncation of the probability kernel. Computations for the movement submodel are nevertheless very time-consuming owing to the repeated convolution steps and the need to manage boundary effects. A novel ‘sparse’ discretized kernel is proposed that greatly reduces fitting time. The sparse kernel was tested by simulation and applied to two datasets. Differences between models fitted using the sparse and full kernels were minor and unlikely to matter in practice. The sparse kernel extends the practical limits of the movement modelling in open SECR to greater dispersal distances and greater spatial resolution. Supplementary materials accompanying this paper appear online.
INTRODUCTION
Open population spatially explicit capture-recapture modelling (open SECR) is used to estimate the vital rates (survival and recruitment) of populations subject to turnover between sampling times. In SECR models, the probability of detecting an individual is a function of the distance from its activity centre to a detector (Borchers and Efford 2008;Royle and Young 2008).
Movement of activity centres between sampling times has been treated as a random walk with step length governed by a probability kernel (Ergon and Gardner 2014; Schaub and Royle 2014). The kernel is assumed to be radially symmetrical, with circular contours of probability, so the probability density is a function g(r) of radial distance. Parameters of the kernel may be estimated from the truncated sample of movements observed as recaptures on the study area. Open SECR models that include movement have been found to fit better than static models and to yield higher estimates of survival (Schaub and Royle 2014; Glennie et al. 2019; Efford and Schofield 2020). The latter has been attributed to the separation of emigration and mortality (Ergon and Gardner 2014; Schaub and Royle 2014).
Space is commonly discretized in open SECR for computational convenience. Activity centres are located at points on a finite square mesh, conceptually the centroids of grid cells. The movement kernel is then a discrete distribution, the probability of moving from a point of origin to each point on the mesh. Probabilities may be approximated by evaluating the centred continuous two-dimensional probability density at each point on the kernel and dividing by their sum. The maximum likelihood implementation of Efford and Schofield (2020) truncates the movement kernel to reduce the number of computations. The radius of truncation is chosen so that the probability of movement approaches zero for points on the edge and increasing the radius has negligible effect on parameter estimates. Computations for the movement submodel are nevertheless very time-consuming owing to the repeated convolution steps and the need to manage boundary effects.
I suggest that a thinned array of kernel points may be sufficient to capture the essence of dispersal in open SECR. A design using only radial 'spokes' greatly reduces fitting time and extends the practical limits of the method to greater dispersal distances and greater spatial resolution. Careful weighting of cell probabilities is needed to avoid artefacts. I apply the sparse kernel to two datasets and simulate other scenarios. Differences between the sparse and full kernels were minor and unlikely to matter in practice.
MODEL FOR MOVEMENT IN OPEN SECR
The state model in open SECR comprises the activity centres of individuals in a spatially distributed population; the population may change over time as animals are born or die, and centres may shift. Population processes (recruitment, mortality, and movement between sampling times) are observed imperfectly and must be estimated by modelling the detections of marked individuals at known locations (detectors). The set of observations of individual i is denoted ω_i, where ω_i > 0 indicates an individual detected at least once. Detection is assumed to be a function of distance between each activity centre x_i and a detector. Activity centres are not observed directly, and our approach is to marginalize over x_i. If each x_i is static, the probability of observing ω_i for an animal hypothetically present from time b to time d takes the form of Equation (1); Efford and Schofield (2020) may be consulted for detail on Pr(ω_i) and f(x_i). In order to allow for movement in (1), the animal-specific distribution of location at sampling time j − 1 is projected forwards to time j by convolving it with the continuous 2-dimensional kernel κ: f_j(x) = ∫ f_{j−1}(x′) κ(x − x′) dx′, omitting the subscript i on x for clarity. A model with movement entails multiple integration over the unknown location of the animal at each sampling occasion. For computational reasons (Efford and Schofield 2020), we replace integration by summation over points on a square mesh S with spacing Δ; for a population of uniform initial density this yields a discrete analogue of the expressions above. The discretized kernel k(x, y) is defined at an array of points with spacing Δ, centred on the origin and with limits −w and +w on each axis. The limits are chosen so that further increase has negligible effect on the estimates (Efford and Schofield 2020). This note concerns the effect of applying zero weight to some values of k(), while upweighting other values.
CONSTRUCTION OF FULL DISCRETIZED KERNEL
The movement probability for each cell of the full discretized kernel might be obtained by integrating the continuous kernel over the origin and destination cells, but it is sufficient in most respects to use the function of radial distance evaluated at the cell centre and scaled by cell area: k(x, y) ≈ g(r) Δ², where r = √(x² + y²). This breaks down at the origin for some movement kernels (Efford and Schofield 2022); then it is suggested to approximate the integral of g(r) over the origin cell by F(r₀), where F is the cumulative distribution function corresponding to g(r) and r₀ = Δ/√π is the radius of a circle with the same area as a mesh cell.
The approximate probabilities are normalized across the kernel. Table 1 lists the continuous kernel functions used in this paper (see also Cousens et al. 2008 Table 5.2; Efford and Schofield 2022). The parameter α controls the scale of movement; β of the bivariate t distribution (BVT) is a shape parameter (half the degrees of freedom). BVT approaches bivariate normal (BVN) for large β. The bivariate t and bivariate Laplace (BVE) distributions in Table 1 are not the only bivariate generalizations of these distributions (Kotz et al. 2001;Kotz and Nadarajah, 2004), but they are the ones used widely in ecology (Cousens et al. 2008;Nathan et al. 2012).
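As an illustration (not part of the original 'openCR' implementation), the following minimal Python sketch builds and normalizes a full discretized kernel for the bivariate normal case. The BVN density g(r) = exp(−r²/(2α²))/(2πα²), the origin-cell treatment via F(r₀), and all function and argument names are assumptions made here for concreteness.

```python
import numpy as np

def full_kernel_bvn(alpha, spacing, radius_cells):
    """Full discretized movement kernel for a bivariate normal g(r).

    alpha        : BVN scale parameter
    spacing      : mesh spacing (Delta)
    radius_cells : truncation radius w, in cells
    Returns a (2w+1) x (2w+1) array of cell probabilities that sums to 1.
    """
    w = radius_cells
    ax = np.arange(-w, w + 1) * spacing                  # cell-centre coordinates
    x, y = np.meshgrid(ax, ax)
    r = np.sqrt(x**2 + y**2)
    g = np.exp(-r**2 / (2 * alpha**2)) / (2 * np.pi * alpha**2)
    k = g * spacing**2                                   # k(x, y) ~ g(r) * Delta^2
    # origin cell handled via F(r0), the radial CDF at r0 = Delta / sqrt(pi)
    r0 = spacing / np.sqrt(np.pi)
    k[w, w] = 1 - np.exp(-r0**2 / (2 * alpha**2))
    return k / k.sum()                                   # normalize across the kernel

kernel = full_kernel_bvn(alpha=30.0, spacing=10.0, radius_cells=15)
print(kernel.shape, kernel.sum())                        # (31, 31) 1.0
```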
CONSTRUCTION OF SPARSE KERNEL
The proposed 'sparse' kernel k_S(x, y) comprises points on eight 'spokes' at angular increments of θ = π/4 radians (the cardinal and intercardinal directions) (Fig. 1). Other values of k_S(x, y) are set to zero. Each point in the sparse kernel is weighted by the approximate area of the annular sector that it represents in the full kernel (Fig. 1). The width Δ_a of an annular sector is greater along the intercardinal axes by a factor of √2, and accordingly Δ_a takes the value Δ or √2 Δ depending on the axis type a. Discarding a term of order Δ_a² in the area of each annular sector, the weighted sparse kernel values are proportional to g(r) multiplied by the approximate area of the corresponding annular sector. The distribution along each radius is related to f(r) = 2πr g(r), the univariate probability density of distance moved for the bivariate kernel g(r) (e.g. Cousens et al. 2008; Efford and Schofield 2022). In consequence, the maximum weight lies part way along each radius even if the full kernel has a maximum at the origin (Fig. 2).
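The corresponding sparse construction might look like the sketch below (again a Python illustration rather than the paper's code). Because the exact weighting expression was not reproduced above, the weights here follow the verbal description: each spoke point receives a weight proportional to g(r) times the approximate annular-sector area, with sector width Δ on cardinal spokes and √2 Δ on intercardinal spokes.

```python
import numpy as np

def sparse_kernel_bvn(alpha, spacing, radius_cells):
    """Sparse 'spoke' kernel: non-zero only on the 8 cardinal/intercardinal spokes."""
    w = radius_cells
    size = 2 * w + 1
    k = np.zeros((size, size))
    # origin cell, handled as for the full kernel
    r0 = spacing / np.sqrt(np.pi)
    k[w, w] = 1 - np.exp(-r0**2 / (2 * alpha**2))
    directions = [(1, 0), (0, 1), (-1, 0), (0, -1),      # cardinal spokes
                  (1, 1), (1, -1), (-1, 1), (-1, -1)]    # intercardinal spokes
    for dx, dy in directions:
        step = spacing * np.hypot(dx, dy)                # Delta_a: Delta or sqrt(2)*Delta
        for m in range(1, w + 1):
            r = m * step
            g = np.exp(-r**2 / (2 * alpha**2)) / (2 * np.pi * alpha**2)
            k[w + m * dy, w + m * dx] = g * r * step     # ~ annular-sector weighting
    return k / k.sum()
```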
PROPERTIES OF SPARSE KERNEL
The number of points in the sparse kernel increases linearly with the truncation radius, rather than with its square, so quite large-diameter kernels become computationally feasible ( Table 2).
The kernel describes a single movement step. When movement is compounded over multiple steps, location relative to the starting point becomes increasingly uncertain, blurring initial structure. Each step is a convolution of the kernel with the current distribution, starting at a point. Repeated convolution of the sparse kernel with itself quite soon leads to a distribution that approaches the full kernel (Fig. 3).
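To see the blurring effect numerically, one can convolve a kernel with itself repeatedly and compare the sparse and full results, e.g. by total variation distance. The sketch below assumes SciPy is available and reuses the two kernel functions sketched above; it is an illustration, not part of the original analysis.

```python
import numpy as np
from scipy.signal import convolve2d

def n_step_kernel(k, n_steps):
    """Distribution of location after n_steps movements from a point at the origin."""
    out = k.copy()
    for _ in range(n_steps - 1):
        out = convolve2d(out, k, mode='full')    # each step is a convolution
        out /= out.sum()                         # guard against round-off drift
    return out

k_full = full_kernel_bvn(alpha=30.0, spacing=10.0, radius_cells=15)
k_sparse = sparse_kernel_bvn(alpha=30.0, spacing=10.0, radius_cells=15)
for n in (1, 2, 3, 4):
    d = 0.5 * np.abs(n_step_kernel(k_full, n) - n_step_kernel(k_sparse, n)).sum()
    print(f"step {n}: total variation distance {d:.4f}")  # shrinks as n grows
```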
SIMULATIONS
Simulations were conducted to compare the performance of full and sparse kernels. We focus on the Pradel-Link-Barker (PLB) formulation of the open population capture-recapture model that conditions on the number caught to give estimates of per capita survival φ and recruitment f, but does not directly estimate population size or density (Efford and Schofield 2020). As the goal was to assess the movement models, the simulated population was not subject to turnover (φ = 1.0, f = 0.0) and turnover parameters were fixed in the fitted model. Movement scenarios were bivariate normal (BVN) and bivariate Laplace (BVE) with median dispersal distance 30 m or 60 m. A notional 8 × 8 trapping grid with 30-m spacing was operated for 5 primary sessions, each comprising 5 secondary sessions. Spatial detection was governed by a half-normal hazard function with baseline detection λ₀ = 0.1 and spatial scale σ = 30 m. Activity centres (N = 200) were initially distributed uniformly within an arena that extended the width of the trapping grid (210 m) beyond the grid in each direction, and the same arena, discretized as 10-m cells, was used as the habitat mask for model fitting. Models were fitted by maximizing the likelihood in the R (R Core Team 2021) package 'openCR' 2.1.0 (Efford 2021b) using both full and sparse kernels, each with a radius of 15 or 30 cells (150 m or 300 m) (Supplementary Material, Appendix A). The experiment therefore comprised 16 scenarios, each simulated 100 times. The performance of sparse kernels closely matched that of the full discretized kernels across all scenarios (Fig. 4). Median CPU time for model fitting with the full kernel was 4.5 to 12.7 times that for the sparse kernel. Sparse kernels showed a faint tendency for negative bias in the estimate of distance moved, and slightly greater sampling variance than the full kernel. The only significant aberration was the failure of the BVE model with both sparse and full kernels of inadequate radius: the same model performed well when the kernel radius was increased (Fig. 4). Coverage of 95% confidence intervals for the movement parameter slightly exceeded the nominal level in scenarios with both full and sparse kernels (coverage 96-98%) except for the two failed BVE scenarios (coverage 81%, 83%).
DATA
We compared the full and sparse kernels using contrasting publicly available robustdesign datasets on a small forest bird and an arboreal marsupial (Efford 2021a).
The ovenbird (Seiurus aurocapilla) is a migratory ground-nesting warbler. The data are from a multi-species banding study over the 2005-2009 breeding seasons on the Patuxent Research Refuge, Maryland, USA. Ovenbirds were mistnetted and banded each year for 9 or 10 days at 44 points spaced 30 m apart on a rectangular loop (Dawson and Efford 2009;Efford 2021a). About 20 ovenbirds were caught each year (details in Supplementary Material, Appendix B). Parameters were constant across years in the fitted model.
The brushtail possum (Trichosurus vulpecula) is invasive in New Zealand forests; adult possums occupy a stable home range year round. We use the 1996 and 1997 data from a long-term trapping study in the Orongorongo Valley near Wellington, New Zealand (Efford and Cowan 2004). Possums were trapped on an array of 167 cage traps at 30-m spacing and individually ear marked for 5 nights in February, June and September of each year. The brushtail possum dataset was an order of magnitude larger than the ovenbird dataset (details in Supplementary Material, Appendix B). Population turnover was strongly seasonal, so separate levels of survival and recruitment were fitted in each 'season' (February-June, June-September, September-February); other parameters were constant.
MODELS
Activity centres were either static between primary sessions or followed a random walk with step length governed by one of three distributions: BVN, BVE or BVT. We also considered zero-inflated versions of BVN and BVE (suffix 'zi') as described in Efford (2021b). Discretized kernels were truncated at a radius of 30 cells; for ovenbirds that exceeded the length of the detector array and for brushtail possums it was more than half its greatest dimension. Models were fitted by maximizing the likelihood in R package 'openCR' version 2.1.0 (Supplementary Material, Appendix B).
Spatial PLB models were compared with respect to differences in Akaike's Information Criterion, AIC. The numerically computed rank of the Hessian was sometimes less than the number of parameters for two-parameter movement models, possibly because parameters were estimated at or near a boundary of the parameter space (Viallefont et al. 1999).
RESULTS
Sparse kernels fit consistently faster than full kernels, on average by a factor of 9 for the ovenbird data and 14 for the brushtail possum data ( Table 3). The maximized log likelihood was nearly identical for sparse and full kernels fitted to the ovenbird dataset and, as a result, so were the relative AIC values within each kernel type (Table 3a). The maximized log likelihood for the sparse kernel movement models fitted to the brushtail possum dataset was consistently lower than the matching full-kernel log likelihood (Table 3b). However, the relative AIC values within each kernel type were similar, and using the sparse kernel consistently would lead to nearly the same AIC model weights.
Modelling movement increased estimates of survival for both datasets and there were no systematic differences between the full and sparse kernels (parameter estimates are tabulated in the Supplementary Material). Rank deficiency was apparent in three ovenbird models and one brushtail possum model. The BVT shape parameter was not estimated well in either case, but BVT was nevertheless the AIC-best model for brushtail possums. The zero-inflated BVN and BVE models with each kernel type produced identical estimates for ovenbirds because the fitted kernels were essentially flat away from the origin.
DISCUSSION
The efficiency of the sparse discretization makes feasible the fitting by maximum likelihood of open SECR movement models that use large-radius truncated kernels, measured in the number of cells. This enables a greater absolute span, potentially including the whole study area, or smaller cells, for greater spatial resolution. A large span is desirable when the movement kernel has a long tail, as demonstrated in the simulations with the bivariate Laplace distribution. Faster model fitting enables the bivariate normal kernel to be more easily compared with realistic, longer-tailed options, such as the bivariate Laplace and bivariate t distributions used in the case studies.
It is perhaps surprising that the kernel can be reduced to so few points. In open SECR we are concerned with estimating population-level demographic parameters, not tracing the locations and movements of individuals. While movement can be an important factor, the spatial resolution of data from passive detectors is usually poor. The movement model typically has only one or two parameters, and even the large brushtail possum dataset could barely support estimation of a second parameter (Table 3; Supplementary Material, Appendix B). The location of an animal's activity centre is uncertain even in the primary sessions that it is detected, and the probability distribution after a single step is a convolution that blurs the sharp lines of the sparse kernel. Further, the locations of marked animals missing for one or more sessions are imputed by convolution as in Fig. 3 (Efford and Schofield 2020). The empirical results are therefore not in conflict with intuition. Explicit truncation of the movement kernel is not required for open SECR in some modes. These include Markov chain Monte Carlo estimation in which the locations of individuals are updated from a continuous movement kernel (e.g. Ergon and Gardner 2014), and the cell-by-cell bivariate normal maximum likelihood method used by Glennie et al. (2019). Maximum likelihood using an explicitly truncated and discretized kernel has some inherent advantages: the kernel may take a shape other than bivariate normal, and model fit may be compared in a straightforward way using the log likelihood and AIC.
The sparse discretization has potential limitations that must be noted. The slight negative bias of estimated movement suggested by the simulation results is unlikely to have practical importance. We did not test the sparse kernel in highly structured habitats. The arbitrary exclusion of some possible destinations from the vicinity of each original location may interact with habitat structure to result in biased parameter estimates. We do not expect this to be a problem in practice because of the large uncertainty in the actual location of each activity centre, and the smoothing effect of convolution over time (Fig. 3). The differences in maximized log likelihood for the brushtail possum dataset appear to reflect the irregularity of the detector array on one edge: the log likelihoods of 'full' and 'sparse' models fitted to data simulated on a rectangular grid with the estimated parameters were within one unit (unpubl. results). Fitting a model with symmetrical (rectangular) habitat mask did not remove the effect.
Some applications of open SECR have used movement kernels specified in terms of independent marginal distributions on the x-and y-axes, particularly independent Laplace and t distributions. For non-normal marginal distributions the resulting bivariate probability contours are non-circular (Efford and Schofield 2022). These are not strictly compatible with the sparse kernel specified here, which assumes circularity.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions. | 4,179 | 2022-07-22T00:00:00.000 | [
"Computer Science"
] |
A Hybrid Strategy of Differential Evolution and Modified Particle Swarm Optimization for Numerical Solution of a Parallel Manipulator
This paper presents a hybrid strategy combining a differential evolution (DE) algorithm and a modified particle swarm optimization (MPSO), denominated DEMPSO, to solve the nonlinear model of the forward kinematics. The proposed DEMPSO takes the best advantage of the convergence rate of MPSO and the global optimization of DE. A comparison study between DEMPSO and other optimization algorithms such as the DE algorithm, the PSO algorithm, and the MPSO algorithm is performed to obtain the numerical solution of the forward kinematics of a 3-RPS parallel manipulator. The forward kinematic model of the 3-RPS parallel manipulator has been developed, and it is essentially a nonlinear algebraic equation which depends on the structure of the mechanism. A constraint equation based on the assembly relationship is utilized to express the position and orientation of the manipulator. Five configurations with different positions and orientations are used as an example to illustrate the effectiveness of the proposed DEMPSO for solving the kinematic problem of parallel manipulators. The comparison study of DEMPSO and the other optimization algorithms also shows that DEMPSO provides better performance regarding the convergence rate and global searching properties.
Introduction
Differential evolution (DE) is a heuristic and straightforward strategy with the prominent features of global optimization and only a few control parameters [1,2]. Particle swarm optimization (PSO) is a group theory based algorithm that was inspired by fish schools, bird flocks, and others. The disadvantage of PSO is that the individuals are easily influenced by the best particle and best position, so it may only reach a local optimum [3,4]. The control parameters of PSO can be modified according to the specific optimization problems and applications [5][6][7]. This paper presents a new method integrating differential evolution (DE) and modified particle swarm optimization (PSO). In particular, this strategy aims to combine the advantages of DE and the modified PSO and then apply the hybrid algorithm to the numerical solution of a parallel manipulator.
Parallel robots have been a hot research topic for many years due to their superior performance such as high response speed, high stiffness, high payload/weight ratio, low inertia, and good dynamic performance [8][9][10]. This paper focuses on a spatial parallel manipulator with 3 DOFs which was developed by Lee and Shah [11]. The 3-RPS parallel manipulator has three legs and each branch is a serial kinematic chain [12]. To obtain the position and orientation of the moving platform, it is necessary to solve the forward kinematics of the parallel manipulator based on the lengths of the branches [13]. It should be noted that the forward kinematics of parallel manipulators is more complicated than that of serial ones, whereas the inverse kinematics is simpler [14][15][16].
Analysis of kinematics can be divided into two approaches: analytical methods and numerical methods [17,18]. In numerical methods, the forward kinematic solution is found by solving a nonlinear global optimization problem [19]. Many numerical calculation methods for solving nonlinear equations have been utilized in kinematic problems. Newton's iteration method, together with its improvements, is a common method that is very efficient as regards convergence speed. However, Newton's iteration method is an arduous procedure which is sensitive to initial values and involves a large number of calculation steps [8]. Compared to PSO, the differential evolution (DE) algorithm has received much more attention due to its capability of global optimization [20,21].
The nonlinearity and multiple-solution properties of the forward kinematics of a parallel manipulator make the analytical method more difficult and sophisticated than the numerical one. The numerical solution of the forward kinematics can be obtained using optimization methods such as DE and PSO or other hybrid optimization methods [22,23]. In this paper, some modern intelligent optimization methods will be utilized and compared with each other. A time-saving hybrid strategy combining differential evolution and modified particle swarm optimization is developed for the numerical solution of the forward kinematics of a 3-DOF parallel manipulator.
A Hybrid Strategy of Differential Evolution and Particle Swarm Optimization
The differential evolution (DE) optimization algorithm is a simulation of the biological evolution process. It is capable of handling nondifferentiable, nonlinear, and multimodal objective functions. To start an optimization process, an initial population must be randomly generated within a predefined bound, and then a new population in the next generation is generated through mutation, crossover, and selection operations. Particle swarm optimization (PSO) is a computational algorithm that optimizes a problem by iteratively improving a candidate solution with regard to a given measure requirement. The movements of the particles mimic the movement of organisms in a bird flock or fish school and are guided by their own best known position in the search space as well as the entire swarm's best known position.
Modified particle swarm optimization (MPSO) has the user-defined constant parameters ω, c1, and c2, which have a great impact on the optimization performance. The parameter ω is utilized to adjust the velocity, the parameter c1 is utilized to adjust pbest(), which is the best position achieved so far by every individual, and the parameter c2 is utilized to adjust gbest(), which is the best value obtained so far by any particle in the population. In order to improve the performance of PSO, time-varying acceleration coefficients and a time-varying inertia weight can be utilized. The inertia weight can provide a balance between the local search and the global search during the optimization process: ω = ω1 − (ω1 − ω2) · Iter / Iter_max, where ω1 and ω2 are the initial and the final inertia weight, respectively, Iter is the current iteration number, and Iter_max is the maximum number of iterations. By increasing the number of iterations, the weights of pbest() and gbest() can have different convergence rates: c1 = c_max − (c_max − c_min) · Iter / Iter_max and c2 = c_min + (c_max − c_min) · Iter / Iter_max, where c_max and c_min are the maximum and minimum acceleration coefficients of c1 and c2, respectively. The hybrid strategy of DE and MPSO (DEMPSO) is the new strategy proposed in this paper. A key merit of the DE algorithm is its efficient global optimization. Furthermore, the diversity of the entire population can always be maintained during the whole evolution process, which can prevent the individuals from falling into a local optimum. PSO, on the other hand, has the advantage of fast convergence speed. The individual with the best history (pbest) and the best individual over the entire iteration (gbest) are saved to obtain the lowest fitness values.
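For concreteness, a small Python sketch of these time-varying control parameters is given below; the linear interpolation is an assumption consistent with the endpoints stated above, not a transcription of the original code.

```python
def mpso_coefficients(it, it_max, w1=0.9, w2=0.6, c_max=2.5, c_min=0.5):
    """Time-varying MPSO control parameters (linear schedules assumed).

    The inertia weight moves from w1 to w2, c1 decreases from c_max to c_min,
    and c2 increases from c_min to c_max as the iteration count grows.
    """
    frac = it / it_max
    w = w1 - (w1 - w2) * frac            # inertia weight
    c1 = c_max - (c_max - c_min) * frac  # weight of the personal best (pbest)
    c2 = c_min + (c_max - c_min) * frac  # weight of the global best (gbest)
    return w, c1, c2
```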
Combining the advantages of DE and PSO, a new hybrid DEMPSO strategy is proposed which aims to achieve both fast convergence speed and efficient global optimization. Since the population of PSO easily falls into a local optimum, the proposed DEMPSO method uses the DE algorithm to reduce the search space first, and then the obtained population is taken over by the MPSO as an initial population to achieve a fast convergence rate to the final global optimum. The hybrid algorithm obtains the global minimum of the fitness function, which is built for the numerical solution of the forward kinematics of a parallel manipulator. The procedure of the DEMPSO algorithm is illustrated as follows.
(1) Population Initialization. Each individual x_i, in a population of size NP, is randomly generated to form an initial population in a D-dimensional space. All the individuals should be generated within the bounds of the solution space. The initial individuals are generated randomly in the range of the search space, and the associated velocities of all particles in the population are generated randomly in the D-dimensional space. Therefore, the initial individuals can be expressed as x_{i,j}(0) = x_min + rand(0, 1) · (x_max − x_min), with the initial velocities drawn in the same way from their own range, where 1 ≤ i ≤ NP, 1 ≤ j ≤ D, [x_min, x_max] is the range of the search space, and rand(0, 1) is a random number chosen between 0 and 1. (2) Iteration Loop of DE. Let x_i(t) denote an individual at iteration t. By randomly choosing three individuals from the previous population, the mutant individual v_i(t + 1) can be generated as v_i(t + 1) = x_{r1}(t) + F · (x_{r2}(t) − x_{r3}(t)), where F is a differential weight between 0 and 1. It is a zoom factor to control the amplification of the mutation operation. r1, r2, and r3 are random integers selected from 1 to NP, and r1, r2, r3, and i are not the same as each other.
The crossover operation aims to construct a new trial population u_i(t + 1), chosen from the current individuals and the mutant individuals in order to increase the diversity of the generated individuals: u_{i,j}(t + 1) = v_{i,j}(t + 1) if rand(0, 1) ≤ Cr or j = j_rand, and u_{i,j}(t + 1) = x_{i,j}(t) otherwise, where rand(0, 1) is a random number chosen from 0 to 1, j_rand is an integer chosen randomly from 1 to D, and Cr is a crossover parameter that is randomly chosen from 0 to 1.
In the selection operation, the crossover vector u_i(t + 1) is compared to the target vector x_i(t) by evaluating the fitness function based on a greedy criterion, and the vector with the smaller fitness value is selected as the next-generation vector: x_i(t + 1) = u_i(t + 1) if f(u_i(t + 1)) ≤ f(x_i(t)), and x_i(t + 1) = x_i(t) otherwise. Update the global best particle with the minimum fitness value (gbest) and the personal best particles (pbest). Let the best fitness value be F1 and compare it with the stopping tolerance value ε1. If F1 is less than ε1, the iteration of DE has finished. The whole population and its positions then continue to the next loop of MPSO.
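A minimal Python sketch of one DE/rand/1/bin generation (mutation, crossover, and selection as described above) is shown below; it is an illustration under the stated parameter meanings rather than the authors' implementation.

```python
import numpy as np

def de_generation(pop, fitness, fun, F=0.85, Cr=1.0, bounds=(-1.0, 1.0)):
    """One DE/rand/1/bin generation over a (NP, D) population array."""
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        # mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3, i all distinct
        r1, r2, r3 = np.random.choice([j for j in range(NP) if j != i], 3, replace=False)
        v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), *bounds)
        # binomial crossover with one guaranteed component j_rand
        j_rand = np.random.randint(D)
        mask = (np.random.rand(D) <= Cr) | (np.arange(D) == j_rand)
        u = np.where(mask, v, pop[i])
        # greedy selection
        fu = fun(u)
        if fu < fitness[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit
```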
Let the best fitness value of MPSO be F2 and compare it with the stopping tolerance value ε2. If F2 is less than ε2, the iteration of MPSO has finished.
The initial population is generated by the DE algorithm, and the stopping criterion of DE is set as the fitness value F1 becoming less than a user-defined stopping tolerance ε1, which depends on the specific kinematics of the parallel manipulator. When the fitness value F1 is less than ε1, the DE population is taken over by the MPSO algorithm. The new velocity and new location of the population are updated in every generation until the fitness value F2 becomes less than the bound ε2.
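Putting the two stages together, a skeleton of the DEMPSO control flow might look as follows. This sketch reuses de_generation and mpso_coefficients from the fragments above; the velocity update is the standard PSO rule with the time-varying coefficients, and all function and parameter names are assumptions rather than the authors' code.

```python
import numpy as np

def dempso(fun, D, NP=30, bounds=(-1.0, 1.0), eps1=70.0, eps2=1e-8, it_max=5000):
    """Hybrid DEMPSO skeleton: DE narrows the search, then MPSO refines it."""
    lo, hi = bounds
    pop = lo + (hi - lo) * np.random.rand(NP, D)
    fit = np.array([fun(x) for x in pop])

    # stage 1: DE until the best fitness drops below the break value eps1
    for _ in range(it_max):
        if fit.min() <= eps1:
            break
        pop, fit = de_generation(pop, fit, fun, bounds=bounds)

    # stage 2: MPSO takes over the DE population
    vel = np.zeros((NP, D))
    pbest, pbest_fit = pop.copy(), fit.copy()
    gbest = pop[fit.argmin()].copy()
    for it in range(it_max):
        if pbest_fit.min() <= eps2:
            break
        w, c1, c2 = mpso_coefficients(it, it_max)
        r1, r2 = np.random.rand(NP, D), np.random.rand(NP, D)
        vel = w * vel + c1 * r1 * (pbest - pop) + c2 * r2 * (gbest - pop)
        pop = np.clip(pop + vel, lo, hi)
        fit = np.array([fun(x) for x in pop])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pop[better], fit[better]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest, float(pbest_fit.min())
```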
In order to depict clearly the population moving process, the position and difference vector distribution of the population for the DE-based algorithm and MPSO-based algorithm are shown in Figures 1-4.
The initial and final distributions of the DE-based population are shown in Figures 1(a) and 1(b).
Forward Kinematics of a Parallel Manipulator
A parallel manipulator with 3 DOFs was proposed by Fang and Huang [12], which has been widely used in airplane simulators, walking machines, and others. The 3-RPS parallel robot is composed of a base platform, three legs, and a moving platform, as shown in Figure 5. Each leg is a serial chain consisting of a revolute (R) joint, a prismatic (P) joint, and a spherical (S) joint. The manipulator has three degrees of freedom: two rotations about the x-axis and the y-axis and one translation along the z-axis. The prismatic joints are driven by linear actuators so that the moving platform can achieve the required position and orientation.
The global coordinate frame is attached at the center of the base platform. Its x-axis points from the center towards the first revolute joint, and its y-axis is parallel to the line joining the second and third revolute joints. The revolute joints are evenly distributed around the base platform as an equilateral triangle. The moving coordinate frame is attached at the center of the moving platform; its x-axis points from the center towards the first spherical joint, and its y-axis is parallel to the line joining the second and third spherical joints. The spherical joints are distributed in the same way as the revolute joints. For simplicity and without loss of generality, all coordinate systems abide by the right-hand rule. The geometric parameters of the manipulator are the radius of the base platform and the radius of the moving platform.
Let R and p denote the rotation matrix and the position vector from the global coordinate system to the moving coordinates, respectively. Then the typical kinematic chain can be written as a vector-loop equation: the position vector of each spherical joint in the global frame is obtained by rotating its position vector in the moving frame by R and adding the translation p. Here R is the Euler rotation transformation matrix and p is the position transformation vector.
where s(·) and c(·) denote sin(·) and cos(·), respectively.
The position vectors of the revolute joints and the spherical joints with respect to the global coordinate system and the moving coordinates, respectively, can be expressed from the platform geometry. A constraint equation, based on the assembly relationship in which the revolute joint axis is perpendicular to the prismatic joint axis, can be written for each leg, where the axis of each revolute joint takes the coordinate values shown in Table 1.
Table 1: Coordinate values of each rotation axis.
The length of each link, that is, the inverse kinematics of the manipulator, can be calculated by closing each kinematic chain. Given a set of lengths of the prismatic joints, the forward kinematics problem is to obtain the position and orientation of the moving platform. The numerical forward kinematic solution of the 3-RPS parallel manipulator is a nonlinear algebraic equation. There are three rotations and three translations in the transformation matrix, but only two rotations and one translation are left in the forward kinematics since there are three extra constraint equations. Based on the specific structure of the parallel manipulator, the constraint equations can be obtained; substituting (14) into (13), the inverse kinematics of (13) can be simplified to Equation (15). For the parallel manipulator, the analytical inverse kinematic solution of the above equation is straightforward. The nominal leg lengths can be obtained easily based on the assumed real pose, denoted here by the two rotation angles and the height (α, β, z). However, given the leg lengths, the analytical solution for the simulated pose (α, β, z) becomes very complicated and may have multiple solutions. The given leg lengths are obtained from the real pose, so the forward kinematics based on the given lengths is to let the simulated pose approach the real pose as closely as possible; the simulated pose converges to the real pose as the algorithm converges to the global minimum. The numerical solution of this problem can be obtained by minimizing the difference between the given lengths and the predicted lengths calculated from (15) to find an optimized set of (α, β, z). The fitness function based on the least-squares method can be written as F(α, β, z) = Σ_{i=1}^{3} (l_i − l̂_i(α, β, z))², where l_i is a known leg length which can be acquired from the linear actuator and l̂_i(α, β, z) is the predicted leg length calculated during the optimization process.
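As a sketch of how the fitness function of Equation (16) can be wired to an optimizer, the Python fragment below assumes a hypothetical inverse-kinematics routine leg_lengths(pose) implementing Equation (15); the pose notation (α, β, z) follows the reconstruction above.

```python
import numpy as np

def fitness(pose, l_given, leg_lengths):
    """Least-squares fitness of Equation (16): squared mismatch between the measured
    leg lengths and the lengths predicted for a candidate pose (alpha, beta, z)."""
    l_pred = np.asarray(leg_lengths(pose))
    return float(np.sum((np.asarray(l_given) - l_pred) ** 2))

# usage with the DEMPSO skeleton above (leg_lengths is assumed to exist):
# best_pose, best_fit = dempso(lambda x: fitness(x, l_given, leg_lengths), D=3)
```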
Case Simulation and Results
For the manipulator studied in this work, the task of the simulation is to obtain the end-effector pose when the lengths of the legs are already known. In the simulation, the geometric parameters of the manipulator are a base-platform radius of 274 mm and a moving-platform radius of 158 mm. The workspace of the moving platform of the manipulator is given in Table 2.
For a specific pose selected in the workspace, for instance α = 5°, β = 12°, and z = 517 mm, the corresponding leg lengths are calculated through inverse kinematics as l1 = 499.0178 mm, l2 = 557.7314 mm, and l3 = 534.3032 mm. On the contrary, if these three leg lengths have been obtained from the linear actuators, then the numerical optimization task is to search for an optimal combination of α, β, and z that minimizes the fitness function. In our first simulation, the DE-, PSO-, and MPSO-based optimization algorithms will be employed to solve this numerical problem. The aim is to investigate the performance of each algorithm and find a possible solution with the characteristics of a fast convergence rate and global optimization. The simulations were implemented using Matlab R2012b on a computer with an Intel Core i7-4510U CPU @ 2.00 GHz and 7.71 GB RAM. During simulation, the classical DE algorithm DE/rand/bin with the control parameters F = 0.85 and Cr = 1 is chosen; the PSO control parameters are chosen as c1 = 2, c2 = 2, and ω = 0.9. For the control parameters of MPSO, c1 was set to gradually decrease from 2.5 to 0.5 and c2 was set to gradually increase from 0.5 to 2.5. The inertia weight factor ω was set in two ways: (1) the inverse way (here called MPSO1, where ω was set to gradually decrease from 0.9 to 0.6) and (2) the direct way (here called MPSO2, where ω was set to gradually increase from 0.6 to 0.9).
Using the above parameters, and given the same population size of 30 and the same stopping criterion of 1e−08, the simulation results of the DE-, PSO-, and MPSO-based algorithms with different iterations and computation times are listed in Table 3. From the simulation results, it can be seen that the computation time of the DE-based algorithm is less than that of the other algorithms, but it takes more iterations to converge; on the other hand, the MPSO1 algorithm has the fastest convergence rate, with only 560 iterations, but its computation time is slightly greater than that of the DE algorithm.
Figure 6 illustrates the logarithm of the fitness function values of DE, PSO, and MPSO with respect to the generations. It can be seen that the convergence rate of the DE-based algorithm declines gradually. The fitness value of PSO and MPSO1 drops very fast at the beginning, but the decline becomes gentler after a number of iterations. The fitness value of MPSO2 does not change before 2000 generations but converges suddenly after that period; this is a very interesting characteristic that can be exploited. In our proposed hybrid method, that is, DEMPSO, the DE part is employed to bypass the steady-state part of MPSO2, and the MPSO2 part is used to obtain a fast convergence rate.
For the proposed DEMPSO algorithm, besides the control parameter selection, one important issue is to decide the break fitness value ε1 for the DE algorithm, since this value will influence the total computation time after MPSO2 takes over the optimization process. The comparison results of forward kinematics with DEMPSO, DEPSO, DE, PSO, and MPSO are listed in Table 5. The logarithm of the fitness function values with respect to iterations is plotted in Figure 7. It can be clearly seen that DEMPSO has a great advantage not only as regards convergence speed but also as regards the number of iterations for solving the forward kinematic problem of the parallel manipulator. From Figure 7, it is also shown that the proposed DEMPSO has successfully bypassed the steady state of MPSO2 and inherited its steepest-descent properties.
To validate the effectiveness of the proposed DEMPSO algorithm, in our second simulation we randomly select five poses in the workspace and calculate the relevant leg lengths through inverse kinematics. DEMPSO was employed to search for the optimum pose of the 3-RPS moving platform for the given leg lengths. Table 6 shows the final results for the five different pose situations of the 3-RPS parallel manipulator. From the table, it can be seen that the searched pose approaches the real pose with a very small error (≈1e−5) when the termination fitness value ε2 is set to 1e−8. The computation time is almost the same for the different pose situations.
Conclusions
In this paper, a hybrid strategy combining DE and MPSO, termed DEMPSO, is developed to obtain the numerical solution of the forward kinematics of a parallel manipulator. The proposed hybrid method benefits from the efficient global optimization of DE and the fast convergence rate of MPSO; meanwhile, it avoids the possible local optimization of MPSO. Using this method, the search bounds can first be narrowed by the DE-based algorithm; afterwards, the MPSO, with its fast rate of convergence, can reach the global optimum in this narrowed search space.
Figure 1: Position of the population of DE. (a) At the beginning of the simulation; (b) on convergence.
Figure 2: Difference vector distribution of DE. (a) At the beginning of the simulation; (b) on convergence.
Figure 6: Logarithm of fitness function values with DE, PSO, and MPSO.
Table 2: Workspace of the moving platform.
Table 3: Simulation results of forward kinematics with DE, PSO, and MPSO.
Table 4 presents the DEMPSO simulation results of the forward kinematics with different break fitness values of DE. For different break fitness values ε1 of DE and the same stopping fitness value ε2 of MPSO2, the function ran 30 times to obtain the average total iterations of DE and MPSO2, the computation time of DE at break point ε1, and the total computation time of DEMPSO at terminal point ε2. From the table, it can be seen that the optimization spends the least amount of time when the fitness value ε1 is equal to 70. Selecting this fitness value ε1 as the break point of DEMPSO and DEPSO, the comparison results of forward kinematics are listed in Table 5.
Table 4: Simulation results of forward kinematics with DEMPSO for different fitness values of DE.
Table 5: Comparison results of forward kinematics with DEMPSO, DEPSO, DE, and PSO.
Table 6: Simulation results for different situations with DEMPSO. | 4,841.2 | 2018-02-22T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Image Thresholding Segmentation on Quantum State Space
Aiming to implement image segmentation precisely and efficiently, we exploit new ways to encode images and achieve the optimal thresholding on quantum state space. Firstly, the state vector and density matrix are adopted for the representation of pixel intensities and their probability distribution, respectively. Then, the method based on global quantum entropy maximization (GQEM) is proposed, which has an equivalent object function to Otsu’s, but gives a more explicit physical interpretation of image thresholding in the language of quantum mechanics. To reduce the time consumption for searching for optimal thresholds, the method of quantum lossy-encoding-based entropy maximization (QLEEM) is presented, in which the eigenvalues of density matrices can give direct clues for thresholding, and then, the process of optimal searching can be avoided. Meanwhile, the QLEEM algorithm achieves two additional effects: (1) the upper bound of the thresholding level can be implicitly determined according to the eigenvalues; and (2) the proposed approaches ensure that the local information in images is retained as much as possible, and simultaneously, the inter-class separability is maximized in the segmented images. Both of them contribute to the structural characteristics of images, which the human visual system is highly adapted to extract. Experimental results show that the proposed methods are able to achieve a competitive quality of thresholding and the fastest computation speed compared with the state-of-the-art methods.
Introduction
Image segmentation is the task of dividing the image into different regions, each one of which ideally belongs to the same object or content. As a key step from image processing to computer vision, image segmentation is the target expression and has an important effect on the feature measurement, high-level image analysis and understanding [1,2]. Examples of image segmentation applications include medical imaging [3,4], document image analysis [5], object recognition [6,7] and quality inspection of materials [8,9]. In the last two decades, a wide variety of segmentation techniques have been developed, which conventionally fall into the following two categories [2]: layer-based and block-based segmentation methods [10,11]. Among all these techniques, the thresholding methods offer numerous advantages such as smaller storage space, fast processing and ease in manipulation.
In general, thresholding methods can be classified into parametric and nonparametric approaches [12]. Parametric approaches assume that the intensity distributions of images obey the Gaussian mixture (GM) model, which means the number and parameters of Gaussians in the mixture (the model selection) must be determined [13]. Although these problems have been traditionally solved by considering the expectation maximization (EM) algorithm [14] or gradient-based methods [15,16], the methods are time consuming. Nonparametric approaches find the thresholds that separate regions of an image in an optimal manner based on discriminating criteria such as the between-class variance [17], cluster distance [18], entropy [19][20][21][22], etc. Nonparametric methods have shown the advantage of dispensing with the modeling thresholding. However, they still suffer from the problem of high time consumption, although many techniques based on intelligent optimization algorithms (IOAs) [23][24][25] have been used to speed up the thresholding procedure.
Quantum computation and quantum information processing techniques have shown an immense potential and a revolutionary impact on the field of computer science, due to their remarkable resources: quantum parallelism, quantum interference and entanglement of quantum states. Information representing and processing in the framework of quantum theory is powerful for solving complex problems that are difficult or currently even impossible for conventional methods. The most significant works include Shor's quantum integer factoring algorithm, which can find the secret key encryption of the RSA algorithm in polynomial time [26], and Grover's quantum search algorithm for databases, which could achieve quadratic speedup [27]. In the recent years, quantum approaches have been introduced into the image processing field. Various quantum image representation models have been proposed, such as qubit lattice [28] and flexible representation of quantum images (FRQI) [29]. Meanwhile, several applications of quantum image processing have been researched including quantum image segmentation [30], quantum edge detection [31], quantum image recognition [32], quantum image watermarking [33] and quantum image reconstruction [34]. Though the research in quantum image processing still confronts fundamental aspects such as image representation on a quantum computer and the definition of basic processing operations, we still could be inspired to completely exploit new methods for some classical problems from a quantum information theoretical viewpoint.
In this paper, we address the thresholding problem on quantum state space. The proposed methods relate to the details of image representation by utilizing the density matrix, optimal threshold selection based on the criteria of the maximum von Neumann entropy, a novel image encoding scheme and the corresponding segmentation approaches, which can totally avoid the process of optimal solution searching. Specifically, the contributions of this paper mainly include the following aspects: (1) We present an image thresholding method based on the criteria of global quantum entropy maximization (GQEM), which has an equivalent object function to Otsu's, but gives more explicit physical interpretation of image thresholding in the language of quantum mechanics. (2) The quantum lossy-encoding based entropy maximization (QLEEM) approach is proposed to deal with the time consumption problem of thresholding. The QLEEM algorithm directly takes the eigenvalues of density matrices of lossy-encoded images as segmenting clues and then avoids the time-consuming process of searching for optimal thresholds. It can achieve the highest execution speed compared with the state-of-the-art methods. (3) Due to the physical meaning of the lossy-encoding scheme and the unique procedure of optimal thresholding, a brand-new approach to determine the upper bound of the thresholding level automatically is offered in the proposed QLEEM algorithm. For most of the existing methods, this parameter is conventionally predetermined according to empirical knowledge. (4) The QLEEM method provides the maximum inter-class separability with lower loss of intra-class information; thus, segmented images could keep more structural information. This feature is highly consistent with the way the human visual system (HVS) works.
The paper is organized as follows: Section 2 gives a brief description of the image thresholding and introduces some state-of-the-art thresholding methods including Otsu's between-class variance method [17], Kapur's entropy-criterion method [19], the quantum version of Kapur's method [35], and Tsallis entropy-based method [22]. Section 3 introduces the details of the proposed methods. Section 4 provides the experimental results and discussions about our method's performance. The conclusions of this study are drawn in the last part of this paper.
Related Works
Thresholding is a process in which a group of thresholds is selected under some criteria, and then pixels of an image are divided into a series of sets or classes according to the rule: a pixel of intensity l is assigned to C_1 if l < th_1, to C_i if th_{i−1} ≤ l < th_i (1 < i < M), and to C_M if l ≥ th_{M−1}, where l ∈ [0, L − 1] represents the intensity level of image pixels, {th_i | i = 1, 2, · · · , M − 1} is the set of thresholds, and {C_i | i = 1, 2, · · · , M} are classes labeling different groups of pixels.
Otsu's between-class variance method [17] selects the optimal thresholds by maximizing the object function f(th_1, ..., th_{M−1}) = Σ_{i=1}^{M−1} Σ_{j=i+1}^{M} ω_i ω_j (μ_i − μ_j)². Here, i and j index the intensity classes, and ω_i and μ_i are the probability of occurrence and the mean of a class, respectively. Such values are obtained as ω_i = Σ_{j∈C_i} p_j and μ_i = Σ_{j∈C_i} j q_j, where p_j denotes the probability distribution of pixels and q_j = p_j/ω_i. As we know, Otsu's method can achieve the best segmenting results if no contextual or semantic information is considered, but it suffers from the drawback of time-consuming searching for optimal thresholds. Kapur presented another discriminant criterion based on maximum entropy [19]: f(th_1, ..., th_{M−1}) = Σ_{i=1}^{M} H(C_i), where H(C_i) is the Shannon entropy corresponding to a specific class, defined as H(C_i) = −Σ_{j∈C_i} q_j ln q_j. Similarly, the quantum version of Kapur's method [35] determines the optimal thresholds by maximizing the von Neumann entropy Σ_{i=1}^{M} S(ρ_i), where ρ_i is the density matrix representation of the i-th class and S(ρ_i) is its von Neumann entropy. Recently, the Tsallis entropy-based bi-level thresholding method was proposed [22], in which the optimal threshold is given by th* = argmax_t [S_T^A(t) + S_T^B(t) + (1 − q) S_T^A(t) S_T^B(t)]. Here, S_T^A(t) and S_T^B(t) represent the Tsallis entropy for the object A and the background B, respectively, and the entropic index q can be calculated through q-redundancy maximization.
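For reference, the two classical criteria can be evaluated from a normalized histogram with a few lines of Python; the sketch below is an illustration of the definitions above (the pairwise form of the between-class variance and Kapur's entropy sum), not code from any of the cited works.

```python
import numpy as np

def class_stats(p, thresholds, L=256):
    """Class probabilities and means (omega_i, mu_i) for a threshold set, as in Equation (3)."""
    edges = [0] + list(thresholds) + [L]
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        mu = (np.arange(lo, hi) * p[lo:hi]).sum() / w if w > 0 else 0.0
        stats.append((w, mu))
    return stats

def otsu_objective(p, thresholds, L=256):
    """Pairwise between-class variance over all pairs of classes."""
    stats = class_stats(p, thresholds, L)
    total = 0.0
    for a in range(len(stats)):
        for b in range(a + 1, len(stats)):
            (wa, ma), (wb, mb) = stats[a], stats[b]
            total += wa * wb * (ma - mb) ** 2
    return total

def kapur_objective(p, thresholds, L=256):
    """Sum of Shannon entropies of the normalized within-class distributions."""
    edges = [0] + list(thresholds) + [L]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            q = p[lo:hi] / w
            q = q[q > 0]
            total += float(-(q * np.log(q)).sum())
    return total
```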
The effectiveness of these entropy-based methods has been proven. However, similar to Otsu's method, they also have the drawback of high computational complexity, which will affect the efficiency of the whole vision task.
Proposed Methods
In this section, we will start with a new method, which utilizes the criteria of global quantum entropy maximization to achieve optimal thresholding, and then propose a novel encoding scheme. Based on this scheme, the improved method for thresholding is derived, which can determine optimal thresholds with linear time complexity.
Thresholding Based on Global Quantum Entropy Maximization
For an image, we can represent its histogram with the following entangled state of a composite quantum system: |I⟩ = Σ_{i=0}^{L−1} √(p_i) |θ_i⟩ ⊗ |i⟩, where we encode the i-th intensity level to the vector |θ_i⟩ = cos θ_i |0⟩ + sin θ_i |1⟩, which belongs to the state space of the first one-qubit subsystem (labeled as "A"), by establishing a bijective relationship between them, namely θ_i = (π/2) · i/(L − 1), and |i⟩ is the computational basis state of the second subsystem (labeled as "B"), which denotes the indices of pixel intensities. Though |I⟩ is a pure state, each subsystem A or B is in a mixed state. Therefore, we describe these quantum systems in the language of the density matrix. Rewriting |I⟩ as ρ_AB, the reduced density matrix for the subsystem A can be defined by ρ = Tr_B(ρ_AB) = Σ_{i=0}^{L−1} p_i |θ_i⟩⟨θ_i|. The density matrix ρ contains the information about the distance between any two intensities, as well as their probability distribution. This property will be very useful for thresholding.
If pixels of an image are divided into M classes by using M − 1 thresholds, we represent the histogram of the segmented image with |I'⟩ = Σ_{i=1}^{M} √(ω_i) |θ'_i⟩ ⊗ |i⟩, where θ'_i = (π/2) · μ_i/(L − 1), and ω_i and μ_i are defined in Equation (3). Then, the density matrix of the subsystem A becomes ρ' = Σ_{i=1}^{M} ω_i |θ'_i⟩⟨θ'_i|, and the von Neumann entropy of ρ', S(ρ') = −λ_1 log λ_1 − λ_2 log λ_2, can quantify how much information is retained in the segmented image, where λ_1 and λ_2 are the eigenvalues of ρ'. As a result, we maximize it to determine the optimal thresholds: TH* = argmax S(ρ'). According to Equations (14) and (15), the following equation is established through simple algebraic computations: λ_1 λ_2 = Σ_{i=1}^{M−1} Σ_{j=i+1}^{M} ω_i ω_j sin²(θ'_i − θ'_j), where λ_1 + λ_2 = 1, as the restriction must hold.
It is worthwhile to note that Equation (17) can also be used to evaluate thresholding: when Equation (17) takes its maximum value, λ_1 and λ_2 will be most similar to each other, and then S(ρ') also reaches its best value. Meanwhile, Equation (17) indicates that the distance between intensities, sin²(θ'_i − θ'_j), as well as the probability distribution (ω_i, ω_j), affect the thresholding results.
Different from Kapur's entropy-based method and its quantum version, our method has a more explicit physical meaning for thresholding in terms of the following features: (1) Encoding pixel intensities on the state space of a one-qubit system can be considered as a process in which independent intensities are squeezed into a two-dimensional space. The similarity between different state vectors, as well as its probability distribution, can be described with the density matrix. Both factors contribute to thresholding. (2) According to the fundamental principles of information theory, the image segmenting process will cause a decrease of the information contained in images. Shannon entropy cannot directly be used to measure the information losses because it quantifies the amount of information on spaces with different dimensionality for original and segmented images. On the contrary, our method encodes the histograms of original and segmented images on the same quantum state space, which indicates that their entropies are comparable. As a result, trivial solutions for segmentation, for example thresholds equally dividing intensities into clusters with the same probability, can never appear, since the entropy of the original image acts as the upper bound of our object function for all possible solutions. (3) From Equation (17), we find that the object function of our method is very similar to Otsu's, described in Equation (2). The following experimental results will prove that they both achieve the best thresholding.
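A compact Python sketch of the GQEM criterion (Equations (13)-(16)) is given below; it reuses class_stats from the previous sketch and is an illustration of the construction rather than the authors' code. The logarithm base only rescales the entropy and does not affect the argmax.

```python
import numpy as np

def gqem_entropy(p, thresholds, L=256):
    """Von Neumann entropy of the segmented-image density matrix rho'."""
    rho = np.zeros((2, 2))
    for w, mu in class_stats(p, thresholds, L):          # (omega_i, mu_i) per class
        theta = 0.5 * np.pi * mu / (L - 1)               # encode the class mean
        v = np.array([np.cos(theta), np.sin(theta)])
        rho += w * np.outer(v, v)                        # rho' = sum_i omega_i |theta_i><theta_i|
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

# the optimal thresholds maximize this entropy, e.g. by exhaustive or heuristic search:
# best = max(itertools.combinations(range(1, 256), M - 1), key=lambda th: gqem_entropy(p, th))
```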
Quantum Lossy-Encoding-Based Entropy Maximization Method
As we have seen in Section 3.1, the proposed thresholding method derived from the viewpoint of quantum principles can achieve the best segmenting results similar to Otsu's. However, it still suffers from the efficiency problem of searching for optimal thresholds. In this subsection, we present another way for image thresholding on the quantum state space.
Quantum Lossy Encoding of Images
Different from the precedent method, we map the pixel intensities to quantum state vectors according to the following rules: (1) Multiple qubits should be required for encoding intensity levels in accordance with the prospective number of thresholds. In other words, the state vectors are supposed to belong to an M-dimensional space if we want the M-level segmentation. (2) The angle parameter of state vectors ranges from zero to M · π instead of π/2. Namely, θ i = Mπi/L.
Rule (1) provides the foundation for dividing pixel intensities into M classes that are linearly independent of each other. Rules (2) and (3) indicate that all state vectors representing pixel intensities are equally divided into M classes, and the corresponding density matrix: only measures the information related to the local or intra-class uncertainty contributed by adjoining intensity levels, but removes the global or inter-class information provided by intensities far apart from each other. According to the above rules, an alternative encoding scheme is given in the recursive form of: where the superscript M is temporarily borrowed to label the dimensionality of the state vectors and i ∈ [0, L − 1] denotes the pixel intensities. As an example, the traces of encoded state vectors in the 2D and 3D cases are shown in Figure 1. Differing from ordinary encoding practices, the proposed scheme records the local information of an image but removes the global information. More precisely, the following can be verified in the 2D case: we divide the intensity levels equally into two classes and quantify the amount of information with the product of the eigenvalues of ρ: We note that the first term on the right of Equation (20) measures the local information (intra-class uncertainty) contributed by intensities in the same class, and the second term counts the global information (inter-class uncertainty) provided by intensities in different classes. Meanwhile, it is easy to verify that the values of the two terms increase and decrease, respectively, when θ covers [0, 2π] instead of [0, π/2].
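For the 2D (bi-level) case the lossy encoding is easy to reproduce: with θ_i = 2πi/L the encoded vectors wrap around the full circle, so intensities half a period apart map to opposite directions and become indistinguishable at the level of the density matrix, which is exactly the removal of inter-class information described above. The sketch below builds this matrix and reports its eigenvalue product, used in the text as the measure of retained local information. The toy histogram and function name are illustrative assumptions, and the general M-dimensional recursive construction is not reproduced here.

```python
import numpy as np

def lossy_encoded_density_matrix(hist, M=2):
    """Density matrix of the lossy (full-circle) encoding, shown here for M = 2."""
    hist = np.asarray(hist, dtype=float)
    L = len(hist)
    w = hist / hist.sum()
    theta = M * np.pi * np.arange(L) / L                 # theta_i = M * pi * i / L
    vecs = np.stack([np.cos(theta), np.sin(theta)])      # 2 x L encoded vectors
    return (w * vecs) @ vecs.T

rng = np.random.default_rng(0)
hist = rng.integers(1, 50, size=256)                     # toy 8-bit histogram
rho = lossy_encoded_density_matrix(hist)
lam, eigvecs = np.linalg.eigh(rho)
print(lam, lam.prod())   # eigenvalues and their product (the retained intra-class information)
```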
Finally, the optimal thresholds TH = {th_1, th_2, · · · , th_{M−1}} can be determined according to the following relationships: where λ_0, λ_1, · · · , λ_{M−1} is the sequence taken from the eigenvalue set of ρ, and the corresponding sequence |θ_0⟩, |θ_1⟩, · · · , |θ_{M−1}⟩ belongs to the circular permutation of all eigenvectors, which satisfy the following rules: According to the methods mentioned above, the framework of the QLEEM algorithm is given in Algorithm 1.
Algorithm 1
The framework of the QLEEM algorithm
Input: the original image I, the thresholding level M
Output: the optimal thresholds
Init: compute the histogram of the input image;
Step 1: obtain the density matrix ρ by using the lossy-encoding scheme;
Step 2: calculate the eigenvalues and eigenvectors of ρ;
Step 3: enumerate all possible M circular sequences of the eigenvalues of ρ, and then get M groups of thresholds;
Step 4: loop over the M groups of thresholds, and select the optimal one, i.e., the group for which the entropy denoted in Equation (15) takes the maximum value.
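Step 4 reuses the segmented-image entropy of Section 3.1, now for M classes. A sketch of that scoring step is given below, under the same assumptions as the earlier bi-level example (class angles θ_k = (π/2)·µ_k/(L−1) with µ_k the class mean intensity). How the candidate threshold groups of Step 3 are derived from the circular eigenvalue sequences is governed by the relationships referenced above and is not reproduced in this sketch.

```python
import numpy as np

def segmented_entropy(hist, thresholds):
    """Von Neumann entropy of the segmented-image density matrix for the classes
    delimited by `thresholds` (sorted intensity cut points)."""
    hist = np.asarray(hist, dtype=float)
    L = len(hist)
    p = hist / hist.sum()
    levels = np.arange(L)
    bounds = [0, *sorted(thresholds), L]
    rho = np.zeros((2, 2))
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                                # class probability
        if w == 0:
            continue
        mu = (levels[lo:hi] * p[lo:hi]).sum() / w         # class mean intensity
        th = 0.5 * np.pi * mu / (L - 1)
        v = np.array([np.cos(th), np.sin(th)])
        rho += w * np.outer(v, v)
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log2(lam))

def select_best_group(hist, candidate_groups):
    """Step 4: keep the threshold group with maximal segmented entropy."""
    return max(candidate_groups, key=lambda ths: segmented_entropy(hist, ths))
```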
Datasets and Settings
To evaluate the performance of the proposed methods, a set of standard test images was obtained from the Berkeley segmentation dataset [36]. All of the test images are 8-bit in depth, with a size of 481 × 321 pixels. The algorithms used for comparison are Otsu's between-class variance method [17], Kapur's entropy criterion method [19], the quantum version of Kapur's [35] and our GQEM and QLEEM methods. These algorithms are implemented with MathWorks MATLAB 2014a on a Thinkpad notebook with an Intel Core-i5 2.2-GHz processor, 16 GB RAM and Ubuntu 14.04.
Threshold levels, quality of segmented images and time complexity are the most important indicators for evaluating the performance of image thresholding algorithms. Here, we evaluate the quality of segmented images by using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In addition, four measures: the Dice similarity coefficient (DICE) [37], the probabilistic rand index (PRI) [38], the global consistency error (GCE) [36] and the variation of information (VI) [39], are used to assess segmentations against ground truth data. Time complexity is measured by the execution time required in these methods. In particular, except for the proposed QLEEM, all the other exhaustive-search-based methods used in our experiments are sped up with the harmony search multithresholding algorithm (HSMA) [25].
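PSNR, used below as one of the quality indicators, compares the original image with its thresholded reconstruction. A minimal sketch is given here for 8-bit images, assuming that each pixel of the segmented image is replaced by the mean intensity of its class; this reconstruction convention is a common choice for multilevel-thresholding evaluation and is an assumption here, not a detail stated in the text.

```python
import numpy as np

def thresholded_reconstruction(img, thresholds):
    """Map each pixel to the mean intensity of its class (evaluation convention assumed here)."""
    bounds = [0, *sorted(thresholds), 256]
    out = np.zeros_like(img, dtype=float)
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img < hi)
        if mask.any():
            out[mask] = img[mask].mean()
    return out

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between the original and reconstructed images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```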
Experimental Results and Comparisons
We applied these algorithms to all 300 pictures contained in the standard test dataset to assess their performance. For ease of presentation, only five images, shown in Figure 2, are used to illustrate the bi-level segmentation results. In Figure 3, the thresholding quality of the outcomes is analyzed for the complete set, where the PSNR and SSIM scores are calculated under different thresholding levels and averaged over the whole dataset. Meanwhile, we recorded the CPU time consumed by these algorithms, and the average values for all the test images under different thresholding levels are depicted in Figure 4. As an example, the experimental results in terms of thresholding level, thresholds and CPU time are tabulated in Table 1 for a randomly selected image. From Figure 2, we find that the segmentations obtained by using GQEM, QLEEM and Otsu are visually indistinguishable, which means these three methods have similar performance. This conclusion can be further confirmed in Figure 3: the GQEM method obtains almost the same PSNR score as Otsu's, apart from very small numerical error; meanwhile, both GQEM and QLEEM outperform the others in terms of SSIM. The experimental results can be explained by the criterion of maximizing quantum entropy and the lossy-encoding scheme proposed in our methods, because they emphasize the weight of between-class variance and retain the local information, respectively. This feature is highly consistent with the SSIM metric, which assesses the perceived quality of images based on structural similarity indicators such as contrast and local inter-dependencies of pixels.
Examining Figure 4 and Table 1, we can see that the proposed QLEEM algorithm achieves the fastest execution speed (at least 100 times faster than Otsu in the case of bi-level thresholding, and up to 350 times faster when the number of thresholds increases to five). In addition, the time consumption of QLEEM is insensitive to increases in the threshold level, since the complexity of our algorithm depends mainly on the number of intensity levels rather than on the number of thresholds.
On the other hand, the upper bounds of the thresholding level recommended by the proposed QLEEM algorithm were tested. We found that the maximum possible number of thresholds was lower than 10 for about 40 images in the test set; our algorithm terminates when more thresholds are requested for such images. Figure 5 lists two groups of images and the corresponding histograms, for which the proposed algorithm gave one and two thresholds, respectively. Based on visual inspection, it is reasonable to believe that the suggested numbers of thresholds are appropriate, as there are no more than three distinct peaks in their histograms. Finally, we evaluate segmentations against the ground truth data. The first experiment is performed on a synthetic image corrupted by Gaussian noise (zero mean, variance 0.03), which is used to test the efficiency and robustness of the proposed methods. Figure 6 shows the noisy image and the segmentation results obtained by different algorithms. In addition, the performance indexes DICE, PRI, GCE and VI are used to assess the robustness of these algorithms; the corresponding scores are listed in Table 2. The visual comparison in Figure 6 shows that the proposed GQEM and QLEEM algorithms produce clearer and more accurate segmentation results. Table 2 confirms this conclusion: our GQEM clearly outperformed the others on the DICE, PRI, GCE and VI values. The robustness of the proposed GQEM for noisy images can be explained by comparing the objective functions of GQEM and Otsu. Considering the last terms in Equations (2) and (17), both of them measure the distance between pixel intensities, but our GQEM method scales the range [0, L − 1] of this quantity down to [0, 1]. This helps suppress the high contrast caused by noise, so the GQEM algorithm partly plays the role of a low-pass filter in segmentation tasks.
In the second experiment, we performed thresholding segmentation on the BSDS300 dataset and compared the results with the ground truth segmentations in terms of the DICE, PRI, GCE and VI indexes. The average scores obtained by the different algorithms are presented in Table 3. From Table 3, we can see that all the listed algorithms obtained lower scores than methods that have been trained on the manually labeled dataset. In general, thresholding segmentation is a form of unsupervised segmentation, which cannot use any a priori knowledge derived from the ground truth of a training set of images. Furthermore, the proposed GQEM and QLEEM, along with the other methods used for comparison, are all histogram-based algorithms: they achieve segmentation by utilizing only the probability distribution of colors, rather than spatial and texture information.
Conclusions
In this paper, we address the image thresholding problem on quantum state space. The proposed GQEM and QLEEM methods follow a different way of representing images and determine the optimal thresholds in the language of quantum mechanics. In summary, the contributions of this paper mainly include the following aspects: (1) To our knowledge, this is the first application of a global quantum entropy criterion to the thresholding problem. The von Neumann entropy is more powerful for image segmentation than the Shannon entropy, because it measures the distance between pixel intensities as well as their probability distribution. (2) Compared with other state-of-the-art approaches, our QLEEM algorithm tends to retain more structural information after segmentation. It is highly consistent with the way in which the human visual system (HVS) works. (3) The proposed QLEEM algorithm has the lowest execution time known to us, even compared with methods that are sped up with intelligent optimization techniques. | 5,296.2 | 2018-09-23T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Structural basis for redox sensitivity in Corynebacterium glutamicum diaminopimelate epimerase: an enzyme involved in l-lysine biosynthesis
Diaminopimelate epimerase (DapF) is one of the crucial enzymes involved in l-lysine biosynthesis, where it converts l,l-diaminopimelate (l,l-DAP) into d,l-DAP. DapF is also considered an attractive target for the development of antibacterial drugs. Here, we report the crystal structure of DapF from Corynebacterium glutamicum (CgDapF). Structures of CgDapF obtained under both oxidized and reduced conditions reveal that the function of CgDapF is regulated by redox-switch modulation via reversible disulfide bond formation between two catalytic cysteine residues. Under oxidized conditions, the two catalytic cysteine residues form a disulfide bond; under reduced conditions, these same cysteine residues exist in the reduced form. Disulfide bond formation also induces a subsequent structural change in the dynamic catalytic loop at the active site, which results in an open/closed conformational change of the active site. We also determined the crystal structure of CgDapF in complex with its product d,l-DAP, and elucidated how the enzyme recognizes l,l-DAP as its substrate. Moreover, the structure in complex with the d,l-DAP product reveals that CgDapF undergoes a large open/closed domain movement upon substrate binding, resulting in a completely buried active site with the substrate bound.
DapF (EC 5.1.1.7), a pyridoxal 5′-phosphate-independent amino acid racemase, catalyzes the stereochemical inversion between l,l-DAP and d,l-DAP, the final step of the succinylase pathway. DapF plays an important role in the biosynthesis of l-lysine because DAP decarboxylase only recognizes d,l-DAP. Previous structural and functional studies of DapFs from several organisms, such as Haemophilus influenzae [16][17][18][19][20][21], Mycobacterium tuberculosis 22, Escherichia coli 23, and Arabidopsis thaliana 24, have reported that DapFs consist of two structurally similar α/β domains and that each domain provides one of the two key active site cysteine residues. In addition, DapFs utilize the two cysteine residues for catalysis: one cysteine residue acts as a base and abstracts a proton from l,l-DAP, whereas the other acts as an acid and re-protonates the molecule to form d,l-DAP. Despite the importance of C. glutamicum in the production of l-lysine, detailed structural and biochemical studies of CgDapF had not been reported prior to this study.
In this study, we determined the crystal structures of CgDapF in both the oxidized and the reduced form. We elucidated that CgDapF is regulated by redox-switch modulation via reversible disulfide bond formation between two catalytic cysteine residues, depending on the environmental redox state. We also report that CgDapF undergoes a large open/closed domain movement upon substrate binding.
Results and Discussion
Overall structure of CgDapF. To reveal the molecular mechanism of CgDapF, we determined its crystal structure (2.0 Å resolution) via the single-wavelength anomalous dispersion method, using a selenium-substituted crystal (Table 1). The structure of CgDapF adopts an overall fold similar to those of diaminopimelate epimerases from Mycobacterium tuberculosis (MtDapF; PDB code 3FVE) 22, Arabidopsis thaliana (AtDapF; PDB code 3EJX) 24, and Bacillus anthracis (BaDapF; PDB code 2OTN) (Fig. 1a). Each CgDapF monomer consists of two distinct domains: an N-terminal domain (NTD; Met1-Asp131 & Gly268-Ile277) and a C-terminal domain (CTD; Met132-Thr267). Each domain contains a set of five-stranded and three-stranded antiparallel β-sheets and two α-helices (Fig. 1b). One α-helix of each domain (α2 in the NTD and α4 in the CTD) is sandwiched between the five-stranded and three-stranded β-sheets, whereas the other helix lies on the surface of the protein. The NTD and the CTD are structurally homologous to each other, with a root-mean-square deviation (RMSD) of 3.4 Å. However, there is one striking structural difference between the two domains; namely, the position of the α-helix
that lies on the surface (Fig. 1c). Two catalytic cysteine residues are positioned at the active site, which is located in a cleft between the two domains (Fig. 1b). CgDapF functions as a dimer, and the asymmetric unit contains a CgDapF dimer (Fig. 1d), which is consistent with our size-exclusion chromatography results under both the oxidized and the reduced conditions (data not shown). The dimerization interface is mainly constituted by contacts between β16 from both monomers, and this contact between the two β-strands connects the two β-sheets of the NTDs of both monomers (Fig. 1d). Contacts between the connecting loops (α1-β3) from both monomers also mediate dimerization of the protein (Fig. 1d). The PISA 25 server was used to calculate the buried interface area (854.9 Å²) and the percentage of participating residues (8.3%).
Reversible disulfide bond formation at the active site of CgDapF. Although both monomers adopt
an almost identical overall shape with respect to each other (RMSD = 0.31 Å), an interesting structural difference was observed between the active sites of each monomer. In one monomer, two catalytic cysteine residues (Cys83 and Cys221) were oxidized, forming a disulfide bond with each other; in contrast, these residues were reduced in the other monomer (Fig. 2a,b). The formation of this disulfide bond also caused a large conformational change in the loop that contains Cys83 (Ala75-Gly84) (Fig. 2c). In the reduced form, the active site exhibits an open conformation that is freely accessible to the substrate (Fig. 2d). In contrast, in the oxidized form of CgDapF, the loop was positioned 5.8 Å closer to the substrate binding site, compared to the reduced form (Fig. 2c), resulting in closure of the substrate binding site (Fig. 2e). Based on these observations, we speculate that the activity of CgDapF is regulated by the environmental redox potential via the formation of a reversible disulfide bond and by the subsequent open/closed conformational change at the active site. Because the loop conformation seems to influence the activity of CgDapF directly, we will henceforth refer to this loop as a 'dynamic catalytic loop' (DC-loop). The conformational change of the DC-loop seems to result from the high flexibility of the loop. The DC-loop shows much higher B-factors in both the oxidized and the reduced forms, as compared with the rest of the protein, indicating that the DC-loop region is highly flexible (Fig. 3a,b). The stabilization modes of the loop are also quite different from each other. In the reduced form of CgDapF, the loop is mainly stabilized by interactions with the NTD. The main chain portion of Ala80 forms direct hydrogen bonds with the main chain portion of Tyr72 and the side chain of Arg110, and the main chain oxygen atoms of Cys83 and Gly84 form hydrogen bonds with the main chain nitrogen atom of Val87 and the side chain of Arg88 (Fig. 3c). In contrast, in the oxidized form of CgDapF, the DC-loop interacts with the CTD instead. The disulfide bond between Cys83 and Cys221 is the main contributor to the stability of the loop, and the side chain of Thr223 forms hydrogen bonds with the main chain portions of Met82 and Cys83 (Fig. 3d). In fact, no reducing agent was added either to the reservoir or to the protein solution used for crystallization, indicating that the protein was crystallized under somewhat oxidized conditions. Consequently, we sought to answer why each monomer adopted a different conformation and why only one monomer contained a disulfide bond under identical redox conditions. When we applied the I222 symmetry operations, it became clear that the contacts of the two monomers with neighboring molecules completely differ from each other. In particular, the DC-loop of the oxidized monomer makes a close contact with one neighboring molecule, whereas that of the reduced monomer makes no contact with any molecule (Fig. 3e). These observations suggested the possibility that disulfide bond formation at the active site may have been an artifact caused by different crystal contacts, rather than constituting an authentic structural feature of the protein. We then determined a crystal structure of CgDapF in the presence of 1 mM DTT, to investigate whether the reversible disulfide bond formation and the associated conformational change we had observed indeed constitute authentic structural features of the protein (Table 1).
The structure determined in the presence of 1 mM DTT belonged to space group I222, the same group as that of the oxidized structure, and thus identical crystal contacts were formed as when the protein had been crystallized without a reducing agent. Interestingly, under the reduced condition the structures of both monomers adopted the same conformation as the 'reduced monomer' in the earlier structure determined in the absence of a reducing agent: no disulfide bond was observed in either monomer and the DC-loops showed open conformations (Fig. 3f). These results demonstrate that, in the earlier structure determined in the absence of a reducing agent, the pattern of crystal contacts was not responsible for the formation of the disulfide bond and the closed conformation; rather, these features derived from the somewhat oxidized conditions during crystallization. Taken together, we propose that CgDapF regulates its function by redox-switch modulation via reversible disulfide bond formation and the subsequent open/closed conformational change.
The redox sensitivity of CgDapF. In general, redox-mediated modification of cellular proteins confers a response to changes in the environmental redox potential [26][27][28]. Our crystal structures of CgDapF in both the oxidized and reduced forms suggest that CgDapF is regulated by redox-switch modulation via reversible disulfide bond formation. To investigate the redox sensitivity of CgDapF, we examined the susceptibility of CgDapF to hydrogen peroxide (H2O2). When CgDapF was treated with various concentrations of H2O2, the enzyme showed a concentration-dependent loss of activity (Fig. 4a), indicating that the enzyme is indeed sensitive to an oxidative environment. To verify whether disulfide bond formation was indeed reversible, we performed an activity recovery test on CgDapF using an Ellman assay, which measures the amount of free thiol groups in a protein. When CgDapF was pre-incubated with various concentrations of H2O2, the concentration of free thiol groups in the enzyme decreased incrementally (Fig. 4b,c). Then, when the environment was switched to the reduced condition by removing H2O2 and adding 10 mM DTT, the oxidized protein indeed recovered and free thiol groups were again detected (Fig. 4b,c). Under highly oxidizing conditions, cysteine thiol groups can form sulfinic or sulfonic acids, which cannot be reduced back to thiol groups by DTT. However, the recovery of thiol groups in our Ellman assay indicates that CgDapF formed disulfide bonds when oxidized by H2O2 and that the thiol groups of the enzyme were recovered through breakage of the disulfide bonds by DTT. Our proposal of a reversible disulfide bond formation that accompanies the structural change was further investigated by circular dichroism (CD) spectroscopy. CD measurements were recorded over the wavelength range 190-260 nm on CgDapF protein that had been treated with either 1 mM H2O2 or 1 mM DTT (Fig. 4d). The CD results demonstrate that there are notable differences in the ellipticity values recorded from CgDapF depending on whether it had been treated with DTT or H2O2, indicating that CgDapF undergoes substantial redox-dependent structural changes. The secondary structure elements calculated from the CD data were quite similar to those observed in the crystal structures, and there was no significant difference between the oxidized and the reduced conditions. Taking our structural and biochemical observations on CgDapF together, we propose that the activity of CgDapF is regulated by redox-switch modulation via reversible disulfide bond formation in response to environmental redox changes. We further propose that the open/closed conformational change caused by the reversible disulfide bond formation also contributes to the redox-sensitive regulation of the protein.
Comparison of CgDapF with other DapFs. The structures of DapF from several organisms have been determined under both oxidized and reduced conditions. To investigate the differences between DapFs, we compared the oxidized form of CgDapF with MtDapF and DapF from Haemophilus influenzae (HiDapF; PDB codes 1BWZ and 2Q9H) 16,21, and the reduced form of CgDapF with BaDapF and DapF from Escherichia coli (EcDapF; PDB code 4IJZ) 23. The overall structure is largely conserved among DapFs, and their quaternary structure is dimeric. For both the oxidized and reduced forms of CgDapF, the PISA 25 server computes a dimeric oligomeric state, and the other DapFs give the same result. The dimeric interface of DapFs is also conserved: the two β-strands from both monomers contribute most of the dimerization contacts. The buried interface areas of MtDapF, HiDapF, BaDapF and EcDapF calculated by the PISA 25 server are 687.7 Å², 1084.6 Å², 854.6 Å², and 771.8 Å², respectively. Similarities in the overall structure extend to the active site residues; in DapF structures, the residues involved in substrate binding are almost identical. Although Thr223 in CgDapF, which stabilizes the carboxyl group of d,l-DAP on the side bearing the d-amino moiety, is replaced by Ser219 in HiDapF and EcDapF, the roles of these residues appear to be similar.
When we superimposed the NTD of the oxidized form of CgDapF with those of MtDapF and HiDapF, the NTD structures matched each other closely, whereas the CTDs of the three oxidized DapFs show somewhat different conformations (Fig. 5a). HiDapF contains two additional helices; one helix is inserted at the entrance of the CTD and the other is inserted into the connecting loop between β10 and β11 (Fig. 5a). The catalytic cysteine residues of CgDapF and HiDapF form a disulfide bond, whereas those of MtDapF exist in a mixture of the oxidized and reduced conformations (Fig. 5b). As mentioned above, the disulfide bond between the two catalytic cysteine residues results in the conformational change of the DC-loop. Although the disulfide-bond-mediated conformational change is also observed in HiDapF, the movement of the DC-loop is less pronounced than in CgDapF; the loop containing Cys73 (Val70-Gly76) in HiDapF moves by only 3.8 Å (Fig. 5c). The DC-loop of MtDapF (Asn78-Gly90) is also located at a similar position to that of HiDapF (Fig. 5b). The oxidized conformations also exhibit the closed conformation at the substrate binding site in both MtDapF and HiDapF (Fig. 5d,e). When we compared the reduced form of CgDapF with BaDapF and EcDapF, the conformations of both the NTD and the CTD are almost identical (Fig. 5e). Compared to the reduced form of CgDapF, BaDapF and EcDapF also contain two additional helices at positions similar to those in HiDapF (Fig. 5f). The catalytic cysteine residues are all reduced (Fig. 5g) and the DC-loops show the open conformation (Fig. 5h,i). Based on these structural comparisons, we suggest that the activities of DapF proteins might be regulated by redox-switch modulation via reversible disulfide bond formation in response to environmental redox changes. We also measured the kinetic parameters of CgDapF; the Km and kcat values were 1.86 mM and 58 s⁻¹, respectively. Compared with the kinetic parameters of other DapFs 29,30, the enzymatic activity of CgDapF seems to be somewhat lower.
Substrate binding mode of CgDapF.
To investigate the substrate binding mode of CgDapF, we determined the crystal structure of the protein in complex with the d,l-DAP product (Table 1). In fact, although we added a mixture of the three DAP stereoisomers (l,l-DAP, d,d-DAP, d,l-DAP) to the crystallization solution, we only observed density consistent with d,l-DAP in the electron density map (Fig. 6a). We suspect that the l,l-DAP substrate was converted to the d,l-DAP product during the crystallization procedure because we used the wild-type protein. The substrate binding site is located in a deep cleft formed between the two domains. It can be divided into two sub-sites, a catalytic sub-site and a recognition sub-site, where the d- and l-amino moieties of d,l-DAP are positioned, respectively. The two catalytic cysteine residues are located at the catalytic sub-site, and the amino group of the d-amino moiety is hydrogen bonded to the carboxyl group of Glu212 (Fig. 6b). The carboxyl group of d,l-DAP on the side bearing the d-amino moiety is stabilized through hydrogen bonds with the main chain nitrogen atoms of Gly84, Asn85, Gly222, and Thr223, and the side chains of Asn85 and Thr223 (Fig. 6b). In the recognition sub-site, the carboxyl group of d,l-DAP on the side bearing the l-amino moiety is stabilized by direct interactions with the main chain of Arg213 and the side chains of Asn74, Asn159, Asn194, and Arg213 (Fig. 6b). The l-amino group of d,l-DAP forms hydrogen bonds with the main chain oxygen atom of Arg213 and the side chain of Glu212 (Fig. 6b). In our structure, the stabilization mode of the l-amino moiety of d,l-DAP provides a clear explanation of how the enzyme utilizes l,l-DAP as a substrate.
Based on our CgDapF structure in complex with the d,l-DAP product, we performed site-directed mutagenesis experiments to verify residues inferred to be involved in catalysis and substrate binding. We compared the enzyme activities of the resulting mutants to the activity of the wild-type protein. To test their importance as catalytic residues, Cys83 and Cys221 were mutated to alanine; the C83A and C221A mutants almost completely lost their activity (Fig. 6c). This suggests that CgDapF uses both of these residues for catalysis, and the enzymatic reaction mechanism is similar to that of other DapF proteins. In addition, we mutated residues structurally inferred to be involved in the stabilization of the substrate to alanine, generating N15A, N74A, N85A, N159A, N194A, E212A, R213A, and T223A. As anticipated, the enzymatic activities of all these mutants were almost completely abolished, as compared with that of the wild-type enzyme (Fig. 6c).
Domain movement in CgDapF.
When we superimposed the apo-structure of CgDapF with that of the complex with d,l-DAP, it became evident that substrate binding had induced the CTD to move towards the NTD (Fig. 7a). In particular, three loops (Loop I: Met157-Asn159; Loop II: Met177-Val193; Loop III: Arg213-Gly214) move by 3.5 Å, 5.8 Å, and 6.2 Å, respectively (Fig. 7a). To stabilize the reorganized CTD, residues from the NTD interact with the CTD through hydrogen bonds: Met157 from Loop I, and Arg213 and Gly214 from Loop III, interact with Arg114, Glu71 and Asp76 of the NTD, respectively (Fig. 7b). In addition, the interactions between the reorganized CTD and the residues located at the entrance of the active site cleft result in a completely buried substrate binding site (Fig. 7c,d). These structural observations clearly show that CgDapF undergoes an open/closed conformational change upon substrate entry and product release.
In this report, we have revealed that the function of CgDapF is regulated by redox-switch modulation via reversible disulfide bond formation in response to the environmental redox conditions. This reversible disulfide bond formation also induces a structural change in the DC-loop of the active site. Under oxidized conditions, the position of the DC-loop occludes the active site, which inactivates the enzyme. However, under reduced conditions, the DC-loop undergoes a conformational change that opens the active site, restoring enzymatic activity. Because redox-switch modulation is one of the key regulatory mechanisms used to control the function of enzymes in response to changes in the environmental redox state, we suspect that cellular redox changes might significantly influence l-lysine biosynthesis. Further investigations of the relationship between the cellular redox state and l-lysine biosynthesis in C. glutamicum appear to be needed.
Methods
Production of CgDapF. The CgDapF gene was amplified by polymerase chain reaction (PCR) using genomic DNA from C. glutamicum strain ATCC 13032 as a template. The PCR product was then subcloned into pET30a (Life Science Research), and the resulting expression vector pET30a:CgdapF was transformed into the E. coli strain B834(DE3), which was grown in 1 L of LB medium containing kanamycin at 37 °C. After induction by the addition of 1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG), the culture was maintained for a further 20 h at 18 °C and then harvested by centrifugation at 4,000 × g for 20 min at 4 °C. The cell pellet was resuspended in buffer A (40 mM Tris-HCl, pH 8.0) and then disrupted by ultrasonication. The cell debris was removed by centrifugation at 13,500 × g for 30 min and the lysate was applied to an Ni-NTA agarose column (Qiagen). After washing with buffer A containing 30 mM imidazole, the bound proteins were eluted with 300 mM imidazole in buffer A. Finally, trace amounts of contaminants were removed by size-exclusion chromatography using a Superdex 200 prep-grade column (320 mL, GE Healthcare) equilibrated with buffer A. All purification steps were performed at 4 °C, and SDS-polyacrylamide gel electrophoresis of the purified protein showed a single polypeptide of 29.9 kDa, corresponding to the estimated molecular weight of the CgDapF monomer. The purified protein was concentrated to 40 mg/mL in 40 mM Tris-HCl, pH 8.0. Site-directed mutagenesis experiments were performed using the QuikChange site-directed mutagenesis kit (Stratagene). The production and purification of the CgDapF mutants were carried out by the same procedure employed for the wild-type protein. Data collection and structure determination of CgDapF. The crystals were transferred to a cryoprotectant solution composed of the corresponding reservoir conditions described above plus 30% (v/v) glycerol, fished out with a loop larger than the crystals, and flash-frozen by immersion in liquid nitrogen. All data were collected at beamline 7A of the Pohang Accelerator Laboratory (PAL, Pohang, Korea), using a Quantum 270 CCD detector (ADSC, USA). The oxidized-form and reduced-form CgDapF crystals diffracted to resolutions of 2.3 and 2.0 Å, respectively. The crystals of CgDapF in complex with d,l-DAP diffracted to 2.6 Å resolution. All data were indexed, integrated, and scaled together using the HKL-2000 software package 31. The oxidized form of CgDapF belonged to the space group P32 with the unit cell parameters a = b = 143.584 Å, c = 122.371 Å, α = β = 90.0°, and γ = 120.0°. In order to solve the phase problem, we also prepared crystals from a SeMet derivative. SeMet-substituted oxidized-form crystals were obtained using the same crystallization condition as used for the native protein crystal. The SeMet-derivative crystals diffracted to a resolution of 2.0 Å and belonged to the space group I222 with the unit cell parameters a = 101.74 Å, b = 119.08 Å, c = 155.59 Å, and α = β = γ = 90°. Assuming two molecules of the oxidized form of CgDapF (29.2 kDa) per asymmetric unit, the crystal volume per unit of protein mass was approximately 3.94 Å³ Da⁻¹, which corresponded to a solvent content of 68.80% 32. The SeMet-substituted reduced-form CgDapF crystals belonged to the space group I222 with unit cell parameters a = 101.76 Å, b = 118.94 Å, c = 153.33 Å, α = β = γ = 90.0°.
With two molecules of CgDapF per asymmetric unit, the crystal volume per unit of protein mass was 3.88 Å³ Da⁻¹, which corresponded to a solvent content of 68.31% 32. The crystals of CgDapF in complex with d,l-DAP belonged to space group P4₃32, with unit cell parameters a = b = c = 155.7 Å and α = β = γ = 90.0°. With one molecule of CgDapF in complex with d,l-DAP in the asymmetric unit, the crystal volume per unit of protein mass was 2.63 Å³ Da⁻¹, indicating that the solvent content was approximately 53.26% 32. The crystal structures of SeMet-substituted CgDapF in the oxidized and reduced forms were solved by single-wavelength anomalous dispersion (SAD) using data collected at a wavelength of 0.97934 Å. The f′ and f″ values were determined experimentally using SOLVE/RESOLVE 33. The initial model was built automatically using ARP/wARP 34, and the final model was built using the program WinCoot 35. The structure of CgDapF in complex with d,l-DAP was determined by molecular replacement with the CCP4 version of MOLREP 36, using the refined reduced-form CgDapF structure as the search model. Model building was performed manually using the program WinCoot 35, and refinement was performed with CCP4 REFMAC5 37. The data statistics are summarized in Table 1. The refined models of the oxidized and reduced forms of CgDapF and of CgDapF in complex with d,l-DAP were deposited in the Protein Data Bank under PDB codes 5H2G, 5H2Y, and 5M47, respectively.
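The solvent-content figures quoted above follow the standard Matthews relation between the crystal volume per unit of protein mass (V_M) and the fraction of the cell occupied by solvent. A small sketch of that calculation is given below; the constant 1.23 Å³ Da⁻¹ in the solvent-fraction formula is the conventional protein partial specific volume (Matthews, 1968), and the printed values reproduce the reported ~68.8% and ~53.3% when the paper's V_M values are supplied. The function names are illustrative.

```python
def matthews_coefficient(a, b, c, z_asym, n_mol, mw_da):
    """V_M in A^3/Da for a cell with 90-degree angles:
    cell volume / (asymmetric units per cell * molecules per ASU * molecular weight)."""
    v_cell = a * b * c
    return v_cell / (z_asym * n_mol * mw_da)

def solvent_content(vm):
    """Fraction of the unit cell occupied by solvent (Matthews relation)."""
    return 1.0 - 1.23 / vm

# Reported V_M values for the oxidized-form and the d,l-DAP complex crystals:
print(solvent_content(3.94))   # ~0.688, i.e. ~68.8% solvent
print(solvent_content(2.63))   # ~0.532, i.e. ~53.3% solvent
```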
Crystallization of
DapF activity assay. The activity of CgDapF was measured using a coupled enzyme assay with DAP decarboxylase from C. glutamicum (CgLysA). d,l-DAP is generated by the CgDapF reaction from l,l-DAP, and CgLysA then immediately converts d,l-DAP into l-lysine. The production of l-lysine was detected using the lysine oxidase/peroxidase method: lysine oxidase converts the resulting l-lysine into 6-amino-2-oxohexanoate, NH3, and H2O2, and the H2O2 is then reduced by peroxidase using 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid) (ABTS). Immediately after the addition of the CgDapF enzyme and CgLysA to the reaction mixture, an equal volume of 2× lysine oxidase/peroxidase solution (0.1 unit ml⁻¹ lysine oxidase, 1 unit ml⁻¹ peroxidase, and 3.6 mM ABTS in 0.1 M potassium phosphate buffer, pH 8.0) was added. The amount of oxidized ABTS was detected by measuring the absorbance at 412 nm. Activity assays were performed at room temperature, and the reaction mixture contained 0.1 M potassium phosphate, pH 8.0, 42.19 μM CgLysA, various concentrations of l,l-DAP, and 3.45 μM purified CgDapF enzyme.
To examine the susceptibility of CgDapF to hydrogen peroxide (H2O2), the CgDapF protein was treated with various concentrations of H2O2 for 1 h, and the enzyme-buffer mixture was then added to the assay. To switch the redox condition to the reduced state, the protein solutions treated with various concentrations of H2O2 were thoroughly dialyzed against buffer without H2O2, and 10 mM DTT was added to the protein samples.
UV/Vis spectroscopy. To quantify free thiol groups, the protein samples were incubated with 5,5′-dithiobis(2-nitrobenzoic acid) (DTNB) and UV/Vis absorbance spectroscopy was performed. In the presence of thiol groups, the colorless DTNB is converted to yellow 2-nitro-5-mercaptobenzoic acid (TNB), which shows an absorption maximum at 409 nm. The purified CgDapF protein was pre-incubated with various concentrations of H2O2 for 1 h and then added to the reaction mixture. To switch the environment to the reduced condition, 10 mM DTT was added to the H2O2-treated proteins and incubated for 30 min. The reaction mixture contained 0.1 M potassium phosphate, pH 8.0, 10 μM DTNB, and 6.85 μM purified CgDapF enzyme. All incubations and reactions were performed at room temperature. Circular Dichroism spectroscopy. CD measurements were carried out with a Jasco J-815 CD spectropolarimeter. Far-UV CD spectra were collected between 190 and 260 nm with a scan speed of 20 nm min⁻¹. Spectra of the CgDapF enzymes (33.44 μM in 10 mM potassium phosphate buffer, pH 8.0), reduced with 1 mM DTT or oxidized with 1 mM H2O2, were recorded at 20 °C in a quartz cuvette (0.2-mm pathlength) with a bandwidth of 1 nm. The data were expressed as the mean residue ellipticity [θ] in deg cm² dmol⁻¹. | 6,445.6 | 2017-02-08T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Joint Lemmatization and Morphological Tagging with Lemming
We present LEMMING, a modular log-linear model that jointly models lemmatization and tagging and supports the integration of arbitrary global features. It is trainable on corpora annotated with gold standard tags and lemmata and does not rely on morphological dictionaries or analyzers. LEMMING sets the new state of the art in token-based statistical lemmatization on six languages; e.g., for Czech lemmatization, we reduce the error by 60%, from 4.05 to 1.58. We also give empirical evidence that jointly modeling morphological tags and lemmata is mutually beneficial.
Introduction
Lemmatization is important for many NLP tasks, including parsing (Björkelund et al., 2010; Seddah et al., 2010) and machine translation (Fraser et al., 2012). Lemmata are required whenever we want to map words to lexical resources and establish the relation between inflected forms, particularly critical for morphologically rich languages to address the sparsity of unlemmatized forms. This strongly motivates work on language-independent token-based lemmatization, but until now there has been little work (Chrupała et al., 2008).
Many regular transformations can be described by simple replacement rules, but lemmatization of unknown words requires more than this. For instance, the Spanish paradigms for verbs ending in ir and er share the same 3rd person plural ending en; this makes it hard to decide which paradigm a form belongs to. 1 Solving these kinds of problems requires global features on the lemma. Global features of this kind were not supported by previous work (Dreyer et al., 2008; Chrupała, 2006; Toutanova and Cherry, 2009; Cotterell et al., 2014).
There is a strong mutual dependency between (i) lemmatization of a form in context and (ii) disambiguating its part-of-speech (POS) and morphological attributes. (Footnote 1: Compare admiten "they admit" → admitir "to admit", but deben "they must" → deber "to must".) Attributes often disambiguate the lemma of a form, which explains why many NLP systems (Manning et al., 2014; Padró and Stanilovsky, 2012) apply a pipeline approach of tagging followed by lemmatization. Conversely, knowing the lemma of a form is often beneficial for tagging, for instance in the presence of syncretism; e.g., since German plural noun phrases do not mark gender, it is important to know the lemma (singular form) to correctly tag gender on the noun.
We make the following contributions. (i) We present the first joint log-linear model of morphological analysis and lemmatization that operates at the token level and is also able to lemmatize unknown forms; and release it as open-source (http://cistern.cis.lmu.de/lemming). It is trainable on corpora annotated with gold standard tags and lemmata. Unlike other work (e.g., Smith et al. (2005)) it does not rely on morphological dictionaries or analyzers. (ii) We describe a log-linear model for lemmatization that can easily be incorporated into other models and supports arbitrary global features on the lemma. (iii) We set the new state of the art in token-based statistical lemmatization on six languages (English, German, Czech, Hungarian, Latin and Spanish). (iv) We experimentally show that jointly modeling morphological tags and lemmata is mutually beneficial and yields significant improvements in joint (tag+lemma) accuracy for four out of six languages; e.g., Czech lemma errors are reduced by >37% and tag+lemma errors by >6%.
2 Log-Linear Lemmatization
Chrupała (2006) formalizes lemmatization as a classification task through the deterministic pre-extraction of edit operations transforming forms into lemmata. Our lemmatization model is in this vein, but allows the addition of external lexical information, e.g., whether the candidate lemma is in a dictionary. Formally, lemmatization is a string-to-string transduction task. Given an alphabet Σ, it maps an inflected form w ∈ Σ* to its lemma l ∈ Σ* given its morphological attributes m. We model this process by a log-linear model, p(l | w, m) ∝ h_w(l) · exp(θ · f(l, w, m)), where f represents hand-crafted feature functions, θ is a weight vector, and h_w : Σ* → {0, 1} determines the support of the distribution, i.e., the set of candidates with non-zero probability.
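A minimal sketch of how such a candidate distribution can be scored is given below. The feature extractor, weights, and function names are placeholders standing in for the hand-crafted features and learned parameters described in the paper; this is not the released LEMMING code.

```python
import math

def lemma_distribution(form, tag, candidates, features, weights):
    """p(l | w, m) proportional to exp(theta . f(l, w, m)),
    restricted to the candidate set h_w (candidates with non-zero probability)."""
    scores = {}
    for lemma in candidates:
        feats = features(form, lemma, tag)          # dict: feature name -> value
        scores[lemma] = math.exp(sum(weights.get(name, 0.0) * value
                                     for name, value in feats.items()))
    z = sum(scores.values())                        # normalization constant
    return {lemma: s / z for lemma, s in scores.items()}

# Toy usage with a single indicator feature (purely illustrative):
features = lambda w, l, m: {f"suffix={l[-2:]}|{m}": 1.0}
weights = {"suffix=rk|VBD": 0.7, "suffix=ed|VBD": -0.3}
print(lemma_distribution("worked", "VBD", ["work", "worked"], features, weights))
```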
Candidate selection
A proper choice of the support function h(•) is crucial to the success of the model: too permissive a function and the computational cost will build up, too restrictive and the correct lemma may receive no probability mass. Following Chrupała (2008), we define h(•) through a deterministic pre-extraction of edit trees. To extract an edit tree e for a form-lemma pair ⟨w, l⟩, we first find the longest common substring (LCS) (Gusfield, 1997) between them and then recursively model the prefix and suffix pairs of the LCS. When no LCS can be found, the string pair is represented as a substitution operation transforming the first string to the second. The resulting edit tree does not encode the LCSs but only the length of their prefixes and suffixes and the substitution nodes (cf. Figure 3); e.g., the same tree transforms worked into work and touched into touch.
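The edit-tree extraction just described is easy to reproduce. The sketch below finds the longest common substring by dynamic programming and recurses on the prefix and suffix pairs, storing only prefix/suffix lengths (for LCS nodes) and literal substitutions, so the resulting tree generalizes across word pairs exactly as in the worked/touched example. The tuple encoding and function names are illustrative choices, not the paper's data structures.

```python
def lcs(x, y):
    """Longest common substring of x and y via dynamic programming.
    Returns (start in x, start in y, length); length is 0 if there is none."""
    best = (0, 0, 0)
    table = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
                if table[i][j] > best[2]:
                    best = (i - table[i][j], j - table[i][j], table[i][j])
    return best

def build_edit_tree(form, lemma):
    """Recursively build an edit tree for a <form, lemma> pair.
    LCS nodes store only prefix/suffix lengths, never the LCS itself."""
    if form == "" and lemma == "":
        return None
    i, j, k = lcs(form, lemma)
    if k == 0:                                      # no common substring: substitution node
        return ("sub", form, lemma)
    return ("lcs",
            i, len(form) - (i + k),                 # prefix and suffix lengths in the form
            build_edit_tree(form[:i], lemma[:j]),   # recurse on the prefix pair
            build_edit_tree(form[i + k:], lemma[j + k:]))  # recurse on the suffix pair

tree = build_edit_tree("umgeschaut", "umschauen")   # the tree discussed in Figure 3
```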
As a preprocessing step, we extract all edit trees that can be used for more than one pair ⟨w, l⟩. To generate the candidates of a word form, we apply all edit trees and also add all lemmata this form was seen with in the training set (note that only a small subset of the edit trees is applicable for any given form because most require incompatible substitution operations).
Features
Figure 2: Our model is a 2nd-order linear-chain CRF augmented to predict lemmata. We heavily prune our model and can easily exploit higher-order (>2) tag dependencies.
Our novel formalization lets us combine a wide variety of features that have been used in different previous models. All features are extracted given a form-lemma pair ⟨w, l⟩ created with an edit tree e. We use the following three edit tree features of Chrupała (2008). (i) The edit tree e. (ii) The pair ⟨e, w⟩. This feature is crucial for the model to memorize irregular forms, e.g., the lemma of was is be. (iii) For each form affix (of maximum length 10): its conjunction with e. These features are useful in learning orthographic and phonological regularities, e.g., the lemma of signalling is signal, not signall.
We define the following alignment features. Similar to Toutanova and Cherry (2009) (TC), we define an alignment between w and l. Our alignments can be read from an edit tree by aligning the characters in LCS nodes character by character and characters in substitution nodes block-wise. Thus the alignment of umgeschaut-umschauen is: u-u, m-m, ge-ϵ, s-s, c-c, h-h, a-a, u-u, t-en. Each alignment pair constitutes a feature in our model. These features allow the model to learn that the substitution t/en is likely in German. We also concatenate each alignment pair with its form and lemma character context (of up to length 6) to learn, e.g., that ge is often deleted after um.
We define two simple lemma features. (i) We use the lemma itself as a feature, allowing us to learn which lemmata are common in the language. (ii) Prefixes and suffixes of the lemma (of maximum length 10). This feature allows us to learn that the typical endings of Spanish verbs are ir, er, ar.
We also use two dictionary features (on lemmata): whether l occurs > 5 times in Wikipedia and whether it occurs in the dictionary ASPELL. 2 We use a similar feature for different capitalization variants of the lemma (lowercase, first letter uppercase, all uppercase, mixed). This differentiation is important for German, where nouns are capitalized and en is both a noun plural marker and a frequent verb ending. Ignoring capitalization would thus lead to confusion.
POS & morphological attributes. For each feature listed previously, we create a conjunction with the POS and each morphological attribute. 3
Joint Tagging and Lemmatization
We model the sequence of morphological tags using MARMOT (Mueller et al., 2013), a pruned higher-order CRF. This model avoids the exponential runtime of higher-order models by employing a pruning strategy. Its feature set consists of standard tagging features: the current word, its affixes and shape (capitalization, digits, hyphens) and the immediate lexical context. We combine lemmatization and higher-order CRF components in a tree-structured CRF. Given a sequence of forms w with lemmata l and morphological+POS tags m, we define a globally normalized model: where f and g are the features associated with lemma and tag cliques respectively and θ and λ are weight vectors. The graphical model is shown in Figure 2. We perform inference with belief propagation (Pearl, 1988) and estimate the parameters with SGD (Tsuruoka et al., 2009). We greatly improved the results of the joint model by initializing it with the parameters of a pretrained tagging model.
Related Work
In functionality, our system resembles MORFETTE (Chrupała et al., 2008), which generates lemma candidates by extracting edit operation sequences between lemmata and surface forms (Chrupała, 2006), and then trains two maximum entropy Markov models (Ratnaparkhi, 1996) for morphological tagging and lemmatization, which are queried using a beam search decoder.
In our experiments we use the latest version 4 of MORFETTE. This version is based on structured perceptron learning (Collins, 2002) and edit trees (Chrupała, 2008). Models similar to MORFETTE include those of Björkelund et al. (2010) and Gesmundo and Samardžić (2012) and have also been used for generation (Dušek and Jurčíček, 2013). Wicentowski (2002) similarly treats lemmatization as classification over a deterministically chosen candidate set, but uses distributional information extracted from large corpora as a key source of information.
Toutanova and Cherry (2009)'s joint morphological analyzer predicts the set of possible lemmata and coarse-grained POS for a word type. This is different from our problem of lemmatization and fine-grained morphological tagging of tokens in context. Despite the superficial similarity of the two problems, direct comparison is not possible. TC's model is best thought of as inducing a tagging dictionary for OOV types, mapping them to a set of tag and lemma pairs, whereas LEMMING is a token-level, context-based morphological tagger.
We do, however, use TC's model of lemmatization, a string-to-string transduction model based on Jiampojamarn et al. (2008) (JCK), as a stand-alone baseline. Our tagging-in-context model is faced with higher complexity of learning and inference since it addresses a more difficult task; thus, while we could in principle use JCK as a replacement for our candidate selection, the edit tree approach, which has high coverage at a low average number of lemma candidates (cf. Section 5), allows us to train and apply LEMMING efficiently. Smith et al. (2005) proposed a log-linear model for the context-based disambiguation of a morphological dictionary. This has the effect of joint tagging, morphological segmentation and lemmatization, but, critically, is limited to the entries in the morphological dictionary (without which the approach cannot be used), causing problems of recall. In contrast, LEMMING can analyze any word, including OOVs, and only requires the same training corpus as a generic tagger (containing tags and lemmata), a resource that is available for many languages.
Experiments
Datasets. We present experiments on the joint task of lemmatization and tagging in six diverse languages: English, German, Czech, Hungarian, Latin and Spanish. We use the same data sets as in Müller and Schütze (2015), with out-of-domain test sets. The English data is from the Penn Treebank (Marcus et al., 1993) (Hajič et al., 2009). For German, Hungarian, Spanish and Czech we use the splits from the shared tasks; for English the split from SANCL (Petrov and McDonald, 2012); and for Latin an 8/1/1 split into train/dev/test. For all languages we limit our training data to the first 100,000 tokens. Dataset statistics can be found in Table A4 of the appendix. The lemma of Spanish se is set to be consistent.
Table caption: In each cell, overall token accuracy is left (all), accuracy on unknown forms is right (unk). Standalone MARMOT tagging accuracy (line 1) is not repeated for pipelines (lines 2-7). The best numbers are bold. LEMMING-J models significantly better than LEMMING-P (+), or LEMMING models not using morphology (+dict) (×), or both (+ ×), are marked. More baseline numbers are in the appendix (Table A2).
Baselines. We compare our model to three baselines. (i) MORFETTE (see Section 4). (ii) SIMPLE, a system that for each form-POS pair returns the most frequent lemma in the training data, or the form if the pair is unknown. (iii) JCK, our reimplementation of Jiampojamarn et al. (2008). Recall that JCK is TC's lemmatization model and that the full TC model is a type-based model that cannot be applied to our task. As JCK struggles to memorize irregulars, we only use it for unknown form-POS pairs and use SIMPLE otherwise. For aligning the training data we use the edit-tree-based alignment described in the feature section. We only use output alphabet symbols that are used for ≥ 5 form-lemma pairs and also add a special output symbol that indicates that the aligned input should simply be copied. We train the model using a structured averaged perceptron and stop after 10 training iterations.
In preliminary experiments we found type-based training to outperform token-based training. This is understandable as we only apply our model to unseen form-POS pairs. The feature set is an exact reimplementation of Jiampojamarn et al. (2008); it consists of input-output pairs and their character context in a window of 6.
Results. Our candidate selection strategy results in an average number of lemma candidates between 7 (Hungarian) and 91 (Czech) and a coverage of the correct lemma on dev of >99.4% (except 98.4% for Latin). 5 We first compare the baselines to LEMMING-P, a pipeline based on Section 2 that lemmatizes a word given a predicted tag and is trained using L-BFGS (Liu and Nocedal, 1989). We use the implementation of MALLET (McCallum, 2002). For these experiments we train all models on gold attributes and test on attributes predicted by MORFETTE. MORFETTE's lemmatizer can only be used with its own tags. We thus use MORFETTE tags to have a uniform setup, which isolates the effects of the different taggers. Numbers for MARMOT tags are in the appendix (Table A1). For the initial experiments, we only use POS and ignore additional morphological attributes. We use different feature sets to illustrate the utility of our templates. The first model uses the edit tree features (edittree). Table 1 shows that this version of LEMMING outperforms the baselines on half of the languages. 6 In a second experiment we add the alignment (+align) and lemma features (+lemma) and show that this consistently outperforms all baselines and edittree. We then add the dictionary feature (+dict). The resulting model outperforms all previous models and is significantly better than the best baselines for all languages. 7 These experiments show that LEMMING-P yields state-of-the-art results and that all our features are needed to obtain optimal performance. The improvements over the baselines are >1 for Czech and Latin and ≥.5 for German and Hungarian.
The last experiment also uses the additional morphological attributes predicted by MORFETTE (+mrph). This leads to a drop in lemmatization performance in all languages except Spanish (English has no additional attributes). However, preliminary experiments showed that correct morphological attributes would substantially improve lemmatization as they help in cases of ambiguity. As an example, number helps to lemmatize the singular German noun Raps "canola", which looks like the plural of Rap "rap". Numbers can be found in Table A3 of the appendix. This motivates the necessity of joint tagging and lemmatization.
For the final experiments, we run pipeline models on tags predicted by MARMOT (Mueller et al., 2013) and compare them to LEMMING-J, the joint model described in Section 3. All LEMMING versions use exactly the same features. Table 2 shows that LEMMING-J outperforms LEMMING-P in three measures (see bold tag, lemma & joint (tag+lemma) accuracies) except for English, where we observe a tie in lemma accuracy and a small drop in tag and tag+lemma accuracy. Coupling morphological attributes and lemmatization (lines 8-10 vs 11-13) improves tag+lemma prediction for five languages. Improvements in lemma accuracy of the joint over the best pipeline systems range from .1 (Spanish), over >.3 (German, Hungarian), to ≥.96 (Czech, Latin).
Conclusion
LEMMING is a modular lemmatization model that supports arbitrary global lemma features and joint modeling of lemmata and morphological tags. It is trainable on corpora annotated with gold standard tags and lemmata, and does not rely on morphological dictionaries or analyzers. We have shown that modeling lemmatization and tagging jointly benefits both tasks, and we set the new state of the art in token-based lemmatization on six languages.
Appendix: edit tree construction. The procedure creates a tree given a form-lemma pair ⟨x, y⟩. LCS returns the start and end indexes of the LCS in x and y. x_{i_s:i_e} denotes the substring of x starting at index i_s (inclusive) and ending at index i_e (exclusive); i_e − i_s thus equals the length of this substring. |x| denotes the length of x. Note that the tree does not store the LCS, but only the length of the prefix and suffix. This way the tree for umgeschaut can also be applied to transform umgebaut "renovated" into umbauen "to renovate".
For the example umgeschaut-umschauen, the LCS is the stem schau. The function then recursively transforms umge into um and t into en. The prefix and suffix lengths of the form are 4 and 1, respectively. The left sub-node needs to transform umge into um. The new LCS is um. The new prefix and suffix lengths are 0 and 2, respectively. As the new prefix is empty, there is nothing more to do. The suffix node needs to transform ge into the empty string ϵ. As the new LCS of the suffix is empty, because ge and ϵ have no character in common, the node is represented as a substitution node. The remaining transformation of t into en is also represented as a substitution, resulting in the tree in Figure 3.

[Algorithm fragment (APPLY): if tree is an LCS node, it is decomposed into tree_i, i_l, tree_j, j_l; if |x| < i_l + j_l, the prefix and suffix do not fit and the tree cannot be applied; otherwise p = APPLY(tree_i, x[0:i_l]) creates the prefix, and if p is ⊥, the prefix tree cannot be applied and ⊥ is returned for the whole tree.]

In the code, + represents string concatenation and ⊥ a null string, meaning that the tree cannot be applied to the form. We first run the tree depicted in Figure 3 on the form angebaut "attached (to a building)".

[Table caption fragment: ... and token-based unknown form (form unk) and lemma (lemma unk) rates.]
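Since the construction and application routines are only partially recoverable above, a compact reimplementation of the described edit-tree scheme may help. This is an illustrative sketch, not the authors' code; the function names and the tuple encoding of nodes are assumptions, but the behaviour (prefix/suffix lengths stored at LCS nodes, substitution nodes for empty LCS, ⊥/None when a tree does not fit) follows the description above.

```python
def lcs_span(x, y):
    """Return (i_s, i_e, j_s, j_e), the longest common substring of x and y
    as half-open index ranges in x and y, or None if the LCS is empty."""
    best = (0, 0, 0, 0)
    # O(|x|*|y|) dynamic programming over common suffixes ending at (i, j).
    table = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i in range(1, len(x) + 1):
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
                if table[i][j] > best[1] - best[0]:
                    k = table[i][j]
                    best = (i - k, i, j - k, j)
    return best if best[1] > best[0] else None

def build_tree(x, y):
    """Create an edit tree for the form-lemma pair (x, y). LCS nodes store only
    the prefix/suffix lengths of the form, never the LCS itself, so the tree
    generalizes to other forms (e.g. umgebaut -> umbauen)."""
    span = lcs_span(x, y)
    if span is None:
        return ("sub", x, y)                              # substitution node: rewrite x as y
    i_s, i_e, j_s, j_e = span
    return ("lcs",
            build_tree(x[:i_s], y[:j_s]), i_s,            # prefix subtree and its length in x
            build_tree(x[i_e:], y[j_e:]), len(x) - i_e)   # suffix subtree and its length in x

def apply_tree(tree, x):
    """Apply an edit tree to a form x; return the lemma or None (⊥) if it does not fit."""
    if tree[0] == "sub":
        _, src, tgt = tree
        return tgt if x == src else None
    _, pre_tree, pre_len, suf_tree, suf_len = tree
    if len(x) < pre_len + suf_len:                        # prefix and suffix do not fit
        return None
    prefix = apply_tree(pre_tree, x[:pre_len])
    suffix = apply_tree(suf_tree, x[len(x) - suf_len:])
    if prefix is None or suffix is None:
        return None
    return prefix + x[pre_len:len(x) - suf_len] + suffix

tree = build_tree("umgeschaut", "umschauen")
print(apply_tree(tree, "umgeschaut"))  # umschauen
print(apply_tree(tree, "umgebaut"))    # umbauen
print(apply_tree(tree, "angebaut"))    # anbauen (the same tree also fits this form)
```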
Figure 1: Edit tree for the inflected form umgeschaut "looked around" and its lemma umschauen "to look around". The right tree is the actual edit tree we use in our model, the left tree visualizes what each node corresponds to. The root node stores the length of the prefix umge (4) and the suffix t (1).

Figure 3: Edit tree for the inflected form umgeschaut "looked around" and its lemma umschauen "to look around". The right tree is the actual edit tree we use in our model, the left tree visualizes what each node corresponds to. Note how the root node stores the length of the prefix umge and the suffix t.
| 4,347 | 2015-09-01T00:00:00.000 | [
"Computer Science"
] |
Time averaging, ageing and delay analysis of financial time series
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black–Scholes–Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
Introduction
In 1900, Bachelier pioneered the concept that prices on financial markets are stochastic and may follow the laws of Brownian motion [1,2]. Similar ideas for the mathematical description of option pricing were proposed by Bronzin in 1908 [3], a work which largely fell into oblivion until recently [4]. Later, in the 1960s, Mandelbrot [5,6] and Fama [7] realised that the Gaussian distribution of price changes is often violated, and introduced Paretian and Lévy stable laws into financial mathematics [5,8]. Another milestone in the stochastic modelling of financial markets is the famed Black-Scholes-Merton option pricing model [9][10][11], see also the classical textbooks on the mathematical analysis of markets [12][13][14].
Statistical models of stock price variations often assume that the increments of log X(t) are Gaussian distributed [14]. Extensive statistical analysis of financial data revealed, however, that the distribution of returns log[X(t+δt)/X(t)] has a sharper maximum and fatter tails [8,12,19,34]. To account for these features, random walk models based on, inter alia, truncated Lévy stable distributions [5,8,12,36,49] and jump-diffusion models [11,13,50] were proposed. Discrepancies between ensemble and time averaged measures were also discussed recently [18,38,51,52]. Different market impacts onto price formation were considered as well [53]; in particular, one should mention here also market microstructure effects [54]. Despite decades of intense development and significant advances, the quest for a universal mathematical model of stock price dynamics is still open.
Here we propose three concepts, complementary to the well established methods, for an advanced analysis of financial time series. Specifically, these are the time averaged mean squared displacement (MSD), perfectly suited for the analysis of a single time series, as well as the ageing and delay time methods. As we demonstrate from statistical analyses of real financial time series, these approaches are highly useful and reveal universal features of the market dynamics, which may be relevant for the further development of financial market models.
The central concept promoted here is based on time averaging. Most frequently, in theoretical approaches the MSD ⟨X²(t)⟩, defined as the ensemble average of X²(t) over many realisations of the stochastic process X(t), is used to quantify the time evolution of the process. However, when dealing with a single or few but long time series X(t), the time averaged MSD [55] is better suited and more relevant for the analysis of time series of both stationary and non-stationary stochastic processes [55,56]. For a given lag time Δ this quantity defines a sliding average over the entire time series, of length T. Hereafter, the overline denotes time averages and the angular brackets stand for averages over an ensemble of realisations of a process X(t). While the concept (1) is by now widely used for single-trajectory analysis in several areas of science, especially microscopic single particle tracking [55,56], it is less common in mathematical finance [17,18]. Our central focus here is to introduce this concept for the analysis of financial time series.
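The defining expression, referred to as equation (1), is not reproduced in this excerpt. For a discrete price series sampled at unit intervals the conventional form of the time averaged MSD is δ²(Δ) = (1/(T−Δ)) Σ_t [X(t+Δ) − X(t)]². A minimal computational sketch, assuming this standard definition (function and file names are illustrative, not taken from the paper):

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time averaged MSD of a single trajectory x (1D array of length T),
    evaluated at the given integer lag times: a sliding average of the
    squared increments over the whole series."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    return np.array([np.mean((x[lag:] - x[:T - lag]) ** 2) for lag in lags])

# Example: daily index prices normalised to their initial value.
# prices = np.loadtxt("index.csv")   # hypothetical input file
# msd = time_averaged_msd(prices / prices[0], np.arange(1, 200))
```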
Analysis of financial time series
We here present the results of statistical analyses of financial time series based on the time averaged MSD (1) as well as the ageing and delay time methods defined below. The observed behaviour based on these analysis tools is demonstrated to agree well with analytical results for the famed GBM model, which is introduced in section 2.2. We study the daily values of multiple financial indices of different categories. All data were taken from and analysed via the Wolfram Mathematica 9 platform.
The data are categorised and abbreviated as follows. In the first category we study the Dow Jones Industrial Average, see figure 1. To improve the presentation we divided all prices by the corresponding initial value, such that all traces start at unit value. This is a legitimate procedure as for a GBM-like process the initial price X 0 enters solely as a multiplicative factor (see below). Despite the common initial price, we observe large price fluctuations for different indices at later times, especially in the last decades. Roughly, an exponential increase is observed for all stocks. The time averaged MSD at short lag times increases with T for growing stock market prices, as long as no market crises occur, as seen in figure 2.
We observe a roughly linear growth δ²(Δ) ∝ Δ at short lag times, in perfect accordance with the analytical result (6) for GBM derived below. This linear scaling stands in stark contrast to the exponential growth of the ensemble averaged MSD ⟨X²(t)⟩ of GBM seen in equation (5) below. Such a fundamental discrepancy between the time and ensemble averaged MSD of the process X(t) is well known in the theory of stochastic processes and often referred to as weak ergodicity breaking [55,56]. Note that the averaging over different trajectories in quantity (2) is necessary for our analytical calculations. In the analysis of the financial time series shown in the figures, single trajectory averages δ²(Δ) are considered throughout.
Geometric Brownian motion
Before continuing with our analysis of the actual stock market data, we briefly digress and provide a primer on GBM, a paradigm process employed in standard models for stock price dynamics. As we show, this model reproduces the essential features observed in the market data. GBM is defined in terms of a stochastic differential equation with multiplicative noise (equation (3)). Here X(t) > 0 is the price at time t, μ denotes a drift, and σ is the volatility (μ and σ are set constant below). The volatility is connected to the square root of the diffusivity [55]. The increments dW of the Wiener process are defined in terms of white Gaussian noise ξ(t) with zero mean. The price evolution, starting with the initial value X_0 > 0 at t = 0, is obtained from equation (3) by use of Itô's lemma [12,14] (equation (4)). This process satisfies the log-normal distribution, emerging also in models of task success and income distribution [57]. The MSD ⟨X²(t)⟩ grows exponentially in time (equation (5)). Due to this (much) faster growth compared to the linear increase of the MSD with t for Brownian motion, GBM is therefore a superdiffusive process [55].
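Equations (3)-(6) themselves are not reproduced in this excerpt. Assuming the standard formulation of GBM, they read as follows; the short-lag-time result for the time averaged MSD is sketched from the two-point function of drift-free GBM, and the exact notation and prefactors in the original may differ.

```latex
% Standard GBM relations assumed here to correspond to equations (3)-(5) of the text:
\[ \mathrm{d}X(t) = \mu X(t)\,\mathrm{d}t + \sigma X(t)\,\mathrm{d}W(t), \qquad
   X(t) = X_0 \exp\!\left[\left(\mu - \tfrac{\sigma^{2}}{2}\right)t + \sigma W(t)\right], \qquad
   \langle X^{2}(t)\rangle = X_0^{2}\, e^{(2\mu+\sigma^{2})t}. \]
% For \mu = 0 the two-point function is \langle X(t)X(t')\rangle = X_0^2 e^{\sigma^2 \min(t,t')}, so that
\[ \left\langle \overline{\delta^{2}(\Delta)} \right\rangle
   = \frac{1}{T-\Delta}\int_{0}^{T-\Delta}\!\left\langle [X(t+\Delta)-X(t)]^{2}\right\rangle \mathrm{d}t
   = X_0^{2}\,\frac{\bigl(e^{\sigma^{2}\Delta}-1\bigr)\bigl(e^{\sigma^{2}(T-\Delta)}-1\bigr)}{\sigma^{2}(T-\Delta)}
   \simeq X_0^{2}\,\frac{e^{\sigma^{2}T}-1}{T}\,\Delta , \]
% where the last step assumes \sigma^2\Delta \ll 1 and \Delta \ll T: linear in the lag time \Delta
% but exponential in the trace length T, in line with the discussion of equation (6) in the text.
```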
To calculate the time averaged properties of interest, we resort to the average (2) over N independent realisations of X(t). We derive this quantity using the one and two point distribution functions of the Wiener process. For short lag times, Δ ≪ T, and in the absence of drift we find the result quoted as equation (6). As mentioned already, the non-equivalence of the ensemble and time averaged MSD in the limit Δ/T ≪ 1 indicates weak ergodicity breaking, known for other non-stationary diffusion processes [55]. The time averaged MSD (6) scales linearly with the lag time Δ, in contrast to the exponential growth of ⟨X²(t)⟩, but grows exponentially with the trace length T, due to the highly non-stationary character of GBM. This linear scaling is seen in figure 2, and the explicit dependence on T is shown in figure 3. Equation (6) predicts, to leading order, an exponential growth of the time-ensemble averaged MSD with the trace length T. As shown in figure 3, computed for Δ = 1 day, the data indeed roughly follow an exponential increase with T. However, no universal dependence of the time averaged MSD as function of T could be found. Especially for those time windows encompassing prolonged drops or stalling of the index price, the ratio of the time averaged MSD for a trace length T to its value at T min does not grow rapidly but rather saturates, as can also be seen in figure 3. Thus, checking the dependence of δ²(Δ) on the trace length T may be used to unveil individual features of index price dynamics. In contrast to the strongly disparate and company-specific behaviour revealed in figure 3, universal features are found in the analysed index prices when employing the new concept of the delay time averaged MSD introduced below. Figure 4 shows the data normalised to their end point. More specifically, this normalisation enhances the contribution of later parts of the time series, with typically larger prices.
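A minimal simulation sketch of these GBM predictions, reusing the time_averaged_msd helper sketched earlier; the parameter values and function names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gbm(x0, mu, sigma, n_steps, dt=1.0):
    """Simulate GBM exactly via its log-normal solution (no Euler discretisation error)."""
    incr = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    return x0 * np.exp(np.concatenate(([0.0], np.cumsum(incr))))

T, sigma = 10_000, 0.01                    # trace length (days) and daily volatility, illustrative
paths = np.array([simulate_gbm(1.0, 0.0, sigma, T - 1) for _ in range(500)])

lags = np.arange(1, 101)
ta_msd = np.mean([time_averaged_msd(p, lags) for p in paths], axis=0)
ens_msd = np.mean(paths**2, axis=0)
print(ta_msd[:3])                          # roughly linear in the lag time
print(ens_msd[[0, T // 2, -1]])            # grows much faster with time than the time average does with the lag
```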
We find a considerable spread of individual δ²_i(Δ) traces for different companies, because each index can have different parameters such as the volatility value or the attitude of a given company leadership to maximise the short-term profit versus ensuring a long-time sustainability. Finite lifetimes of companies and their varying age at the start of the time series are likewise important. These factors individualise stock price variations for each company and complicate the evaluation of ensemble averaged quantities. In view of this the universality of stock prices observed in section 2.5 is even more remarkable.
Ageing analysis
Physically, the above dependence (6) of the time averaged MSD on the trace length T reflects the phenomenon of ageing, a characteristic property of non-stationary stochastic processes [55,58]. For superdiffusive processes with an MSD growing faster than the linear growth in time of Brownian motion, the time averaged MSD exhibits an increase with T. This reflects the self-reinforcing volatility of the process, as seen for the exponentially fast growth in equation (6). In contrast, in subdiffusion the effective diffusivity of the process is a decaying function of time [55]. For instance, in processes with scale free waiting times the typical sojourn periods of the motion become increasingly long, on average, effecting the decay of the time averaged MSD with T [58].
Another way to analyse ageing processes is the following. If the time series X(t), starting at time t = 0, is evaluated only beginning with the so-called ageing time t_a > 0, the ageing time averaged MSD is defined as [58] δ²_a(Δ) = [1/(T − Δ)] ∫_{t_a}^{t_a+T−Δ} [X(t+Δ) − X(t)]² dt. In this formal definition we shift the starting point for the analysis of the time series yet the length T of the analysed time interval remains fixed. Of course, larger values of t_a limit the remaining number of data points available for this analysis, however, the ageing time averaged MSD provides important insights into the underlying stochastic process [55]. We note that the term ageing does not imply any relaxation to an ergodic state in the limit of long ageing times, as it does for subdiffusive processes (the limit of strong ageing). In superdiffusive processes such as GBM and, as shown here, for highly non-stationary financial data the term ageing delineates the process time dependent increase of the spread of the random variable X(t).
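A computational counterpart of this definition, assuming the reconstruction of the integral given above and a record long enough to contain the window [t_a, t_a + T] (names are illustrative):

```python
import numpy as np

def ageing_time_averaged_msd(x, lags, t_a, T):
    """Ageing time averaged MSD of a single trajectory x: the same sliding average
    as the ordinary time averaged MSD, but evaluated on the window [t_a, t_a + T]
    of a longer record, so that the analysed length T stays fixed while the start
    of the analysis is shifted by the ageing time t_a (both in sampling steps)."""
    w = np.asarray(x, dtype=float)[t_a:t_a + T]
    return np.array([np.mean((w[lag:] - w[:T - lag]) ** 2) for lag in lags])
```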
For GBM we find, in the limit of short lag times, Δ ≪ T, and in the absence of a drift, μ = 0, that on average the ageing time averaged MSD grows exponentially with the ageing time t_a (equation (8)). The exponential growth with t_a emerges as the process X(t) already experienced an acceleration of the price dynamics up to the ageing time t_a, and thus the analysis starts with a higher volume. Note that in result (8) the argument of the exponential includes the volatility term σ² of the GBM process.
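Equation (8) itself is not reproduced in this excerpt; with the definition of δ²_a(Δ) given above and the same steps as for equation (6), drift-free GBM yields the following sketch (the form written in the original may differ):

```latex
% Ageing time averaged MSD of drift-free GBM, using the definition above; the enhancement
% relative to the non-aged quantity is an exponential factor in the ageing time t_a:
\[ \left\langle \overline{\delta^{2}_{a}(\Delta)} \right\rangle
   = X_0^{2}\,\frac{\bigl(e^{\sigma^{2}\Delta}-1\bigr)\, e^{\sigma^{2}t_{a}}\bigl(e^{\sigma^{2}(T-\Delta)}-1\bigr)}{\sigma^{2}(T-\Delta)}
   = e^{\sigma^{2}t_{a}}\,\left\langle \overline{\delta^{2}(\Delta)} \right\rangle , \]
% so that, as stated in the text, the argument of the exponential contains the volatility \sigma^2.
```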
To see whether such ageing effects can indeed be observed in real financial data we study the behaviour of the ageing time averaged MSD δ²_a(Δ), which increases fast with t_a. Similar to the observations above, when we varied the trace length T, partial clustering of the traces for varying t_a is visible. Figure 6 quantifies the behaviour of the logarithm of the ratio of the ageing time averaged MSD to the corresponding non-ageing value, plotted versus the ageing time t_a. Although, again, a roughly exponential increase is evident and consistent with the prediction (8) for GBM, no data collapse onto a universal curve is observed. Even at short ageing times we find a substantial spread of the log[δ²_a(Δ)/δ²(Δ)] curves versus t_a for different stock indices, as can be seen in figure 6. The non-universal behaviour is expected to be due to the fact that the volatility parameter varies between companies, as suggested by equation (8), and the corresponding ensemble is not formed from trajectories with identical parameters. For instance, the volatility values and the effect of the ageing time t_a for each stock market index can be markedly different.
Delay time analysis: revealing universal features
What if we allow the length of the time series to vary in the above ageing analysis? Namely, what would the expected behaviour be for the delay time averaged MSD δ²_d(Δ), for which the start of the analysed series is shifted to the delay time t_d while its end point remains fixed? The logarithm of the ratio of this quantity to the standard time averaged MSD cancels the leading exponential dependence of the delay time averaged MSD on t_d in equation (10), and the final result (11) is independent of the index-specific volatility parameter σ. We emphasise that in this result it is crucial that the process X(t) has non-stationary increments. The behaviour of, say, a standard Brownian process would follow a different scaling law with the delay time. Can this universal behaviour predicted in equation (11) be observed in real financial data? Figure 8 shows the logarithm of the ratio of the delay time averaged MSD to the standard time averaged MSD, evaluated at unit lag time Δ = 1, as a function of the delay time t_d. The universal behaviour (11) expected on average is followed very closely for each stock market time series. This universal behaviour, fulfilled for delay times t_d up to some 5-10 years, is the central result of this study. To our best knowledge, in terms of single time series this universal trend has not been reported before.
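One possible route to the stated universality, assuming the delay time averaged MSD is evaluated on the remaining interval [t_d, T]; this is a sketch only, and the definitions and prefactors in the original equations (10) and (11) may differ.

```latex
% Drift-free GBM with \sigma^2\Delta \ll 1 and \Delta \ll T - t_d:
\[ \frac{\left\langle \overline{\delta^{2}_{d}(\Delta)} \right\rangle}
        {\left\langle \overline{\delta^{2}(\Delta)} \right\rangle}
   \simeq \frac{T}{T-t_{d}}\;
          \frac{e^{\sigma^{2}(T-\Delta)}-e^{\sigma^{2}t_{d}}}{e^{\sigma^{2}(T-\Delta)}-1}
   \;\longrightarrow\; \frac{T}{T-t_{d}} \quad \text{for } \sigma^{2}(T-t_{d})\gg 1 , \]
\[ \log\frac{\overline{\delta^{2}_{d}(\Delta)}}{\overline{\delta^{2}(\Delta)}}
   \simeq -\log\!\left(1-\frac{t_{d}}{T}\right) \simeq \frac{t_{d}}{T} \qquad (t_{d}\ll T), \]
% a linear function of t_d/T that does not contain the index-specific volatility \sigma,
% consistent with the universal behaviour described in the text.
```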
In particular, this disqualifies the predictions of GBM in this long delay time regime. Note that since the time averaged MSD initially grows approximately linearly with Δ, as one can see in figures 2 and 7, the trends of figure 8 will also hold for other sufficiently short lag times.
Note that for some high tech companies and banks we did not observe a growth of the delay time averaged MSD δ²_d(Δ).
Conclusions
Time averaging of observables of a stochastic process X(t) is a successful concept designed for the analysis of single or few, sufficiently long time series. It has been applied in various fields, in particular, in single particle tracking studies of microscopic objects [55,56]. While for such microscopic particles, at least in principle, it is possible to record more than one trace under (almost) identical conditions, the situation is much more restricted for financial contexts. The market price evolution of a given company cannot be repeated several times under identical conditions. Thus, a statistical ensemble for averaging over a set of trajectories is inaccessible [17,38]. Splitting up the time series into subparts is not an option due to the highly non-stationary character of the dynamics. The analysis in terms of time averaged observables is therefore the prime option.
Here we demonstrated that time averages indeed provide a useful toolbox for the analysis of financial data. Using the time averaged MSD, as well as the ageing and delay time methods, we showed that relevant features can be extracted from the analysis of financial time series. Good agreement of our data-driven observations with analytical predictions from the GBM model was observed, such as the linear lag time dependence of the time averaged MSD, contrasting the exponential growth for the ensemble averaged MSD. The ageing analysis combining the dependencies on the trace length T and the ageing time t a unveil peculiar features in a given time series such as prolonged stalling or even a decrease of the stock value.
Remarkably, the delay time analysis introduced here uncovered a universal behaviour for the analysed stock prices. For short and intermediate delay times t d , the logarithm of the ratio of the delay time versus the regular time averaged MSD is a linear function of t T d . At longer times, in our analysis beyond some 5-10 years, this logarithm approximately scales as a power-law n ( ) t T d with another scaling exponent, n > 1. This latter scaling behaviour is not captured with the standard GBM model, pointing at a need for improved theoretical approaches.
This study is to be viewed as a first step in applying time averaging, ageing, and delay time methods to the analysis of financial time series. Theoretically, modifications of the GBM model used here, to account for features such as the transition from the universal linear scaling to the scaling law n ( ) t T d for the delay time averaged MSD as well as the introduction of fluctuating or time-varying volatilities, are possible. The concept of 'diffusing diffusivities', which in some sense is similar to fluctuating volatilities, has been recently established in the physics literature [59][60][61]. How such concepts impact stochastic processes with multiplicative noise remains to be clarified, however, we expect a similarly rich behaviour with crossovers as observed for simple Brownian systems [59][60][61].
Deterministic time dependent volatilities may be adopted for the time averaging based description of unstable markets (at times of a financial crash), when the trading conditions change very rapidly [38]. Here, a stochastic process with a power-law volatility may be proposed: GBM with a volatility increasing with time can account for a faster than exponential price growth X(t) and explain a faster than linear trend (12) detected in the analysis of financial time series. Recently, ensemble averages of a similar modified GBM process with power-law and logarithmic volatilities were presented [53]. Also, models with value- and time-dependent diffusivity were empirically found to underlie the Euro-Dollar exchange rate dynamics [19,62], compared to anomalous diffusion with space- and time-dependent diffusivity [55]. Combining such new theoretical approaches with time averaging may provide vital new impetus in the analysis of financial time series. From a data analysis point of view, we were interested in the long-term trends for the time averaged MSD. Clearly, time series with one point a day hide possible intraday effects, such as intraday volatility patterns extracted from high-frequency data [18,38,63]. More observables from the time series should be taken into account, and the correlations between them remain to be rationalised. These include the auto-correlation function of price increments, the evaluation of volatility values [33] over different periods of time, the trading activity and volumes on the markets, a correction of the index price value due to inflation [34], crises, etc. These points, as well as the question how the dynamics observed herein is connected with heavy tailed spreads of financial volume [5,8,12,36,49] will be the focus of future work.
The area of mathematical finance is not the only domain where our time averaged MSD and ageing approaches may be useful. For instance, from a biological perspective the mathematical description of inherently highly stochastic disease outbreaks involves exponential processes similar to GBM. In epidemic spreading, an exponential increase in the number of diseased hosts is often postulated (up to the system size). The reader is referred to the optimal control models in epidemics spreading [64,65], including density dependent growth and ageing. Finally, mathematical models of tumour spreading and the growth of bacterial colonies and cells [66] also employ exponential processes, providing additional ground for the application of the concepts outlined here. | 4,712.4 | 2017-06-30T00:00:00.000 | [
"Economics",
"Mathematics",
"Business"
] |
The Vortex Signature of Discrete Ferromagnetic Dipoles at the LaAlO$_3$/SrTiO$_3$ Interface
A hysteretic in-plane magnetoresistance develops below the superconducting transition of LaAlO$_3$/SrTiO$_3$ interfaces for $\left|H_{/\!/}\right|<$ 0.15 T, independently of the carrier density or oxygen annealing. We show that this hysteresis arises from vortex depinning within a thin superconducting layer, in which the vortices are created by discrete ferromagnetic dipoles located solely above the layer. We find no evidence for finite-momentum pairing or bulk magnetism and hence conclude that ferromagnetism is strictly confined to the interface, where it competes with superconductivity.
The emergence of ferromagnetic order in a material breaks time reversal symmetry and is hence detrimental to conventional spin-singlet superconductivity. Since the vast majority of superconductors discovered to date exhibit spin-singlet s or d-wave pairing, the coexistence of superconductivity (SC) and ferromagnetism (FM) at the LaAlO 3 /SrTiO 3 interface has proved difficult to reconcile [1][2][3]. Bypassing the paramagnetic limit via spin-triplet pairing presents a straightforward solution to the puzzle; however the observation of s-wave gaps in doped SrTiO 3 [4] and LaAlO 3 /SrTiO 3 [5] together with the loss of inversion symmetry at the interface are unsupportive of such a scenario. All other mechanisms facilitating SC and FM phase coexistence require a real-space modulation of the SC order parameter, either by creating a spontaneous vortex phase [6], forming finite-momentum electron pairs analogous to a Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state [7], or spatially dispersing FM and SC over macroscopic lengthscales [3].
The nature of the interaction between FM and SC is not the only contentious issue in LaAlO 3 /SrTiO 3 , since the origin and location of the FM moments are also subject to intense discussion. Purely FM behaviour has only been observed following growth and/or annealing at high O 2 pressures [8]; conversely, there is strong theoretical support for O 2− vacancies [9] or polar distortions [10] causing FM. Experiments have linked FM with low [11] or high carrier densities [12], describing it as both an interfacial [13] and bulk [12] phenomenon. A study of FM and SC in both stoichiometric and O 2 -deficient interfaces with a wide range of carrier densities is therefore essential to understand their coexistence.
In this Letter, we reveal a universal hysteretic in-plane magnetoresistance in LaAlO 3 /SrTiO 3 heterostructures, characteristic of FM coexisting and competing with SC regardless of the carrier density or O 2 deficiency. These results are consistent with the creation of pinned vortices by in-plane magnetic dipole moments, implying that FM is tightly confined to the interface, above a conventional 2D SC layer. The existence of this confinement is confirmed by the observation of quantum oscillations from a non-magnetic electron gas deeper below the interface.
We have performed milliKelvin ac magnetotransport measurements on two distinct series of LaAlO 3 /SrTiO 3 heterostructures, A and B, grown by pulsed laser deposition (PLD). A-type interfaces were grown at an O 2 pressure of 10 −3 mbar and subsequently annealed at 0.1 bar to reduce their O 2− vacancy concentration, leading to two-dimensional (2D) carrier densities n 2D ∼ 10 13 cm −2 . Hall bars patterned onto these heterostructures yield comparable results to the majority of SC LaAlO 3 /SrTiO 3 films in the literature [2,3,[14][15][16]. In contrast, B-type interfaces were also grown at 10 −3 mbar, but were not post-annealed: this was a deliberate ploy to maximise the O 2− vacancy concentration (and hence n 2D ) close to the interface, without creating a quasi-3D electron gas spanning the entire substrate [17]. The resulting films exhibited n 2D ≥ 10 14 cm −2 , increasing to ∼ 10 15 cm −2 upon field-effect doping. Since all heterostructures behave similarly to the others in their series, we focus on two specific samples A and B for continuity, with SC critical temperatures T c = 0.28 K and 0.31 K respectively. Further growth details, resistivity/Hall data and data-sets from additional samples may be found in the Supplementary Online Material (SOM).
Searching for unconventional SC: Rashba spin-orbit coupling (RSOC) due to broken inversion symmetry at the interface also influences SC [18] and may stabilise a helical FFLO state with maximal T c for in-plane fields H // > 0 [7]. Since the RSOC is inversely proportional to n 2D [16], we focus our search for exotic superconductivity on sample A. Figure 1b shows the angle-dependent upper critical field H c2 (θ), accurately described by the Tinkham model for conventional SC films with d ≪ ξ [19]. Even at T = 0.1 K, H c2// exceeds the Pauli limit H P ≡ 1.84 T c = 0.52 T; this is consistent with the strong spin-orbit scattering expected at the interface. Since H c2// > H P , the G-L method can only provide an upper limit for d; however, deviations between the calculated and true values of d are small [20] and have no impact on our discussion since SC remains confined within 20 nm of the interface. In Fig. 1c, we track T c (H // ) for sample A but find no peak at H // > 0; instead the data is closely reproduced by G-L theory. To confirm this, we examine R xx (H // ) at T = 0.25 K (just below T c (0T)) for both samples in Fig. 1d: a maximum T c at H // > 0 will create a point of inflection or local minimum in R xx (H // ), as depicted in Fig. 1e. No such features are visible and we therefore find no support for a helical FFLO state [21]. Evidence for FM: Figure 2 displays in-plane MR loops for A and B. Hysteresis is always observed at low fields (|H| < 0.15 T), with peaks in R xx (H // ) at negative/positive H after sweeping down/up from large positive/negative H. This pattern is distinct from the hysteresis seen in granular SC [22] and we hence absolve granularity from responsibility for our data. In general, hysteretic MR is a signature of FM, but the disappearance of the hysteresis above T c indicates that SC also plays some role in its origin. Non-zero resistance below T c for a SC in a magnetic field implies a loss of phase coherence, caused by mobile flux vortices. Since the vortex diameter is ∼ 2ξ and d ≪ ξ, our SC channels are too shallow to allow vortex penetration parallel to an in-plane magnetic field: we must therefore identify a means of generating out-of-plane vortices using in-plane fields.
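The Tinkham expression referred to earlier in this paragraph is not reproduced in this excerpt; in its standard form for a thin film it reads as below (the angle convention, θ measured from the film plane, is an assumption since conventions vary):

```latex
% Standard Tinkham formula for the angle-dependent upper critical field of a thin SC film (d << xi):
\[ \left|\frac{H_{c2}(\theta)\sin\theta}{H_{c2\perp}}\right|
   + \left(\frac{H_{c2}(\theta)\cos\theta}{H_{c2\parallel}}\right)^{2} = 1 , \]
% producing the characteristic cusp for fields in the plane, in contrast to the smooth angular
% dependence of the anisotropic 3D Ginzburg-Landau model.
```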
Possible mechanisms for vortex creation: Spatial inhomogeneities in the RSOC or the in-plane polarization can create vortices in helical superconductors [18]; however such vortices would only appear above a critical field, which is incompatible with our low-field hysteresis. It has been suggested that MR hysteresis in LaAlO 3 /SrTiO 3 arises from domain wall propagation in a continuous FM layer [23], but this is implausible for several reasons. Firstly, for thin FM layers the shape anisotropy energy is a key factor in domain formation and the walls should be of Néel rather than Bloch type, i.e. their magnetization points in-plane and so there is insufficient flux for out-of-plane vortex formation. Secondly, there is no evidence in the literature for a continuous FM layer; in contrast, SQUID microscopy indicates an inhomogeneous FM distribution [3]. Thirdly, all our R xx (H) data are acquired at stable fields with our SC magnet in persistent mode, i.e. dH/dt = 0 so there is no driving force for domain wall propagation (and our ac current negates any spin-torque transfer effect on the domains). Therefore, the only realistic out-of-plane flux sources in LaAlO 3 /SrTiO 3 are the dipole fields from discrete FM zones, embedded in a SC channel and polarized in-plane (Fig. 3a). To pass through the channel, out-of-plane components of the dipole fields must be quantised. Arrays of vortices and antivortices therefore form around each pole, whose size and density depend on the geometry of each FM zone and its total dipole moment. This situation is analogous to an artificial 2D SC/FM nanodot heterostructure [24].

[Figure caption fragment: (a) In-plane MR R xx (H // ) for sample A (n 2D = 2.3×10 13 cm −2 , T c = 0.28 K) at temperatures 0.5 K and 0.1 K, with a low-field zoom on the T = 0.1 K data (below). These data were acquired after cooling from room temperature in zero field, resulting in randomly-oriented domains which generate the low peak structure seen in the sweep up.]
Where are the FM dipoles located? If the moments generating the vortices are buried deep in the SC channel ( Fig. 3a centre panel), then no vortices will be generated, since the vortex/antivortex pairs at each end of the dipoles self-annihilate, leaving a purely horizontal field. Also, while clearly revealing competition between FM and SC, the minor destructive effect of the polarized FM zones on SC (visible at H // = 0 in Figs. 2a,b; S8a) does not support a large dipole population inside the channel. If the moments are instead distributed symmetrically above and below the channel (left panel), then vortex/antivortex annihilation still tends to occur, with little quantised flux left in the channel. Only an asymmetric FM zone distribution across the channel creates stable vortex/antivortex pairs (right panel), so FM must be confined to either the top or base of the channel.
Is magnetism present below the SC channel, deep within SrTiO 3 ? To address this possibility, we directly probe the electron gas in this region. In sample B, Shubnikov-de Haas (SdH) oscillations develop for H // > 2.5 T (Figs. 2b, 3b). Since d ≤ 20 nm and our field-effect doping proves that the SrTiO 3 is a good dielectric, we have not created a quasi-3D electron gas throughout the substrate [17]. Nevertheless, the oscillations imply that a conducting "tail" extends into the SrTiO 3 over a distance at least twice the cyclotron radius r g = ℏk F /eH (where k F is the Fermi wavevector). At n 2D = 6.9 × 10 14 cm −2 , the SdH frequency F = 25 T: assuming a spherical Fermi surface for simplicity, k F = 2(πF/Φ 0 ) 1/2 (from the Onsager relation), r g = 44 nm and hence the majority of the "tail" must lie below the base of the SC channel. Now, SdH oscillations in magnetic materials should exhibit beating in the peak amplitude and field-reversal asymmetry [25]; moreover, scattering from localised electrons should suppress oscillations for H < 20-30 T [26]. From Fig. 3b, our oscillations are field-reversal symmetric, free from beating and emerge at merely 2.5 T; they hence originate from a non-magnetic electron gas. Furthermore, the vertical field from any dipoles below the SC channel will be quantised, leading to a distorted flux profile when viewed from above: no evidence for this is seen by scanning SQUID [3,27,28]. We conclude that our SrTiO 3 is non-magnetic and FM is strictly confined to the interface.
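A small numerical check of the Onsager estimate quoted above; since the field at which the 44 nm cyclotron radius is evaluated is not stated in this excerpt, r_g is left as a function of H (constants and function names are for illustration):

```python
import numpy as np

e = 1.602176634e-19      # elementary charge (C)
h = 6.62607015e-34       # Planck constant (J s)
hbar = h / (2 * np.pi)
phi0 = h / e             # flux quantum entering the Onsager relation (Wb)

F = 25.0                                     # SdH frequency (T)
k_F = 2 * np.sqrt(np.pi * F / phi0)          # spherical-Fermi-surface estimate (1/m)

def cyclotron_radius(H):
    """Real-space cyclotron radius r_g = hbar * k_F / (e * H) in metres."""
    return hbar * k_F / (e * H)

print(f"k_F ~ {k_F:.2e} 1/m")                             # ~2.8e8 1/m
print(f"r_g(4 T) ~ {cyclotron_radius(4.0) * 1e9:.0f} nm") # ~45 nm, of order the quoted 44 nm
```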
How do the vortices generate a hysteretic MR? Peak formation in R xx (H // ) corresponds to reversing the FM polarization, a process which is illustrated in Fig. 4a. This will mainly occur via dipole rotation rather than domain wall propagation, due to the limited zone size (≲ 10 µm from SQUID data), the presence of hysteresis up to |H| = 0.15 T (implying a large coercive field) and flux conservation within the channel. During repolarization, a torque m × H is exerted on the FM zones, where m is the dipole moment. The vortex/antivortex arrays around the poles are dragged through a 180° rotation, creating an electric field in their cores and dissipating energy. However, vortex motion is impeded by pinning, hence broadening the MR peak. Within the framework of the Anderson flux creep model [29], ν depin is the vortex depinning frequency and U is the vortex pinning potential; here |H| exceeds the FM coercive field H coerc and we disregard the effects of our ac current. ν depin scales linearly with the induced electric field and hence R xx (H // ). Assuming an infinite vortex supply, constant U and domain polarization antiparallel to H, an increase ∆H will generate ∆R xx (H // ) ∝ e^(2mH) ∆H. These assumptions are clearly unrealistic, since our samples contain finite numbers of vortex-inducing FM dipoles which may become pinned at arbitrary angles to the field during repolarization; we also expect a broad variation in the local pinning potential within the SC channel from the inhomogeneous defect distribution. Nevertheless, this relation permits a qualitative understanding of the evolution of our hysteretic peaks over time (Fig. 4b).
Compared with a reference data-set (sweep 1) acquired at field increment δ H , a long pause in the data acquisition at

[Figure caption fragment: Sweeps were performed directly after polarization at H = −4 T. Each data-point was acquired in a stable, constant field, 90 s after our SC magnet entered persistent mode (dH/dt = 950 µT s −1 while ramping the field) and our ac current flowed throughout the experiment. The field increment between data-points determines the total measurement duration (inset) and hence the peak amplitude. The shape of the peak (corresponding to the pinning profile in the channel) is stable between sweeps, but exhibits small changes after thermal cycling to 300 K.]
have been repolarised (i.e. all vortex/antivortex arrays have been rotated through 180°) and our ac current is the only remaining depinning force: R xx (H // ) therefore falls to its background level.
A picture emerges of a narrow layer of FM clusters confined to the interface, above a 2D SC layer. The inhomogeneity in the FM and its independence from n 2D suggest that it originates from a static defect distribution, in agreement with several recent models [9,10]. Previously, signatures of FM have been linked to high O 2 pressure growth [8,30], but this is due to enhanced defect-induced localisation in the 2D limit and FM transport signatures being "short-circuited" by mobile electrons below the interface at higher n 2D . Together with data from such resistive interfaces, our work shows that FM can exist across the entire LaAlO 3 /SrTiO 3 phase diagram: 10 12 ≤ n 2D ≤ 10 15 cm −2 . The similarities in the hysteretic MR of our two film series indicate that O 2− vacancies are neither essential nor anathemic to FM; however scattering from vacancies and cation intermixing may help to stabilise FM. We note that beyond the hysteretic in-plane MR and Kondo effect in R xx (T ) at low temperature (see SOM), there is no obvious evidence for magnetism in our heterostructures. It is therefore likely that other LaAlO 3 /SrTiO 3 films studied in the literature contained FM inclusions, but their presence was missed since parallel MR loops were not acquired below T c .
In conclusion, our work illustrates how the reduced symmetry, modified electronic structure and defects inherent to an interface promote an alien emergent phase (FM) to interact and compete with the usual emergent ground state in doped SrTiO 3 (SC). Although competition only occurs at the top of the SC channel, the asymmetric distribution of the FM influences the entire SC layer by generating vortices even at zero applied field. Looking to the future, we propose that atomically precise defect control may enable the development of hybrid FM/SC devices in LaAlO 3 /SrTiO 3 films, such as spintronic latches, vortex-based memory or SC qubits.
The authors gratefully acknowledge discussions with I. Martin, W. Pickett and J. Chakhalian. This work was supported by the National Research Foundation, Singapore, through Grant NRF-CRP4-2008-04. The research at the University of Nebraska-Lincoln (UNL) was supported by the National Science Foundation through the Materials Research Science and Engineering Center (Grant DMR-0820521) and the Experimental Program to Stimulate Competitive Research (Grant EPS-1010674).
Synthesis and Measurement Techniques
Our two series of heterostructures A and B were both grown using a standard pulsed laser deposition system manufactured by Twente Solid State Technology B.V., equipped with a reflection high-energy electron diffraction (RHEED) facility. 10 unit cells of LaAlO 3 were deposited on TiO 2 -terminated 0.5 mm thick commercial SrTiO 3 (001) substrates from Shinkosha. The total incident laser energy was 9 mJ, focussed onto a 6 mm 2 rectangular spot. For both samples, the oxygen pressure was maintained at 10 −3 mbar during growth at 800 °C. However, series A underwent a subsequent annealing stage: after cooling to 500 °C at 10 −3 mbar, the oxygen pressure was increased to 0.1 bar. The temperature was held at 500 °C for 30 minutes before natural cooling to room temperature, still in 0.1 bar oxygen. In contrast, series B was allowed to cool naturally to room temperature in 10 −3 mbar oxygen.
Hall bars with Au-Ti contact pads were patterned onto the top surface of the LaAlO 3 using electron-beam lithography, while sample B also had an Au-Ti back gate sputter-deposited across the entire base of the SrTiO 3 substrate prior to patterning. The Hall bar width was 80 µm and the voltage contact separation 660 µm. Patterning onto the LaAlO 3 surface rather than directly contacting the interface allows us to remain sensitive to AlO 2 surface transport in parallel with interfacial states, even when the interface is superconducting (see section 2).
Transport measurements were performed in a Janis cryogen-free dilution refrigerator, using an ac technique with two SRS 830 lock-in amplifiers and a Keithley 6221 AC current source. All data were acquired using an excitation current of 500 nA at 19 Hz; this value was chosen to maximise the signal-to-noise ratio whilst eliminating any sample heating effects at milliKelvin temperatures. Our noise threshold is approximately 2 nV. The evolution of the capacitance (∼ 1 nF) of the SrTiO 3 substrate with gate voltage V g was verified using a General Radio 1621 manual capacitance bridge: no leakage or breakdown occurred even at V g = 500 V.
Electrical Resistivity and Superconducting Transitions
Upon cooling from room temperature, the electrical resistance of both heterostructures passes through a minimum at low temperature ( fig. S5), then rises logarithmically prior to the onset of superconductivity at ∼ 0.3 K. The minima are located at T m = 10 K and 25 K for samples A and B respectively. A logarithmic divergence in the low-temperature resistance is a signature of the Kondo effect, i.e. scattering off dilute magnetic impurities. Within the Kondo scenario, sample B (which has a high oxygen vacancy concentration) must contain a higher density of magnetic scattering centres due to its higher T m . The resistive transitions to the superconducting state used to determine the critical fields in fig. 1 of our manuscript are shown in fig. S6a. Although the transitions qualitatively follow the behaviour expected for a quasi-2D superconducting film, the resistance does not fall to zero even at the lowest temperatures measured (0.035 K). This does not necessarily imply that our interface is inhomogeneous; in fact, non-zero resistance is a natural consequence of the patterning technique which we have used. A schematic diagram for our pattern is shown in fig. S6c: since the contacts for our Hall bars are only deposited onto the top surface of the LaAlO 3 , there is no direct contact with the conducting interface. Instead, contact is made vertically through the 10 unit cells of LaAlO 3 , which exhibits a weak conductivity dependent on the oxygen vacancy concentration. This provides a small resistive component in series with the interface, leading to a measured non-zero resistance even with a homogeneous superconducting interface. Any conducting AlO 2 surface states will generate a parallel contribution to the measured resistance; the advantage of this patterning technique is that it enables these surface states to be probed without being "shorted out" by the superconducting interface.
Conversely, this does not easily permit us to differentiate between homogeneous and inhomogeneous superconducting layers (or even field-induced inhomogeneous nucleation of superconductivity). However, this does not affect the conclusions of our work in any manner: firstly, inhomogeneities at the interface are entirely expected due to the tendency of the d xy electrons to localise and form ferromagnetic zones where superconductivity is suppressed. For sufficiently high densities of ferromagnetic inclusions above a shallow superconducting channel, the percolative zero-resistance current path may vanish. Secondly, the observed quantum oscillations originate from carriers deeper below the interface and only emerge at high magnetic fields when the superconductivity has been entirely quenched. Thirdly, if we consider the oxygen-deficient high carrier density heterostructure B, it is plausible that clusters of oxygen vacancies at the AlO 2 surface could locally overdope the interface beyond its maximum superconducting carrier density within certain discrete zones (also enhancing the ferromagnetism, as suggested by our Kondo effect and theory [1]). This is a natural consequence of vacancy-doping the LaAlO 3 /SrTiO 3 interface and in no way reflects negatively on our results.
Furthermore, using data for the critical current density from the literature [4] (∼ 40 nA per micron channel width), we estimate that the critical current in our Hall bars is of the order of several microAmps at zero field. Our measurement current (500 nA) is only one order of magnitude smaller than this value, thus contributing to the broadening of the transitions which we see in a magnetic field. We stress that a current of this magnitude is essential to achieve an adequate signal-to-noise ratio in highly conductive materials such as sample B, especially for Shubnikov-de Haas measurements. It also facilitates depinning by exciting a lateral "shaking" force on vortices, thus influencing (though not causing) the hysteresis in our in-plane magnetoresistance data.
Hall Effect Data
All stated sheet carrier densities in our work have been obtained by linear fits to the high-field Hall resistance R xy (H) (H > 5 T), where dR xy /dH = −1/(e n 2D ). However, both our Shubnikov-de Haas effect and various magnetotransport studies in the literature [5][6][7] have revealed evidence for multiple conduction bands at the interface, which should lead to Lorentzian forms for both the perpendicular magnetoresistance (MR) R xx (H ⊥ ) and the Hall coefficient R H . We plot R xx (H ⊥ ) and R H in fig. S7.

The simple message which we wish to convey here is the dramatically different behaviour of both R xx ( fig. S7a) and R H (fig. S7b) for the two heterostructures. Examining the MR first, we observe that the curvature of R xx is positive for sample A, compared with negative curvature and a Lorentzian form in sample B. The Hall effect is even more revealing, with R H in sample A displaying approximately linear behaviour, which for LaAlO 3 /SrTiO 3 implies d xy single-band occupancy (slight deviations from linearity may be due to limited hole-like contributions from carriers at the AlO 2 top surface). In contrast, R H in sample B again shows the Lorentzian shape expected for a two-band system (i.e. d xy and d xz,yz band occupancy). Beyond the ability to extract the total carrier density, the failings of simple two-band Hall coefficient models are well-known [7] due to the significant differences in the field dependence of the mobilities of carriers in each band; we therefore do not attempt any more detailed quantitative analysis of our data.

[Figure S6 caption fragment: ... as the temperatures at which R xx falls by 20% of the difference between its values at 0.5 K and 0.04 K in zero field, indicated by the dashed lines intersecting the two R xx (T) plots in (a). Note that this 20% criterion is arbitrary, since although changing the percentage will lead to small variations in the calculated coherence length ξ, the anisotropy and hence our determination of the superconducting channel thickness d will remain unchanged. A scaling analysis for 2D SC [2,3] is also shown, which provides an alternative means to determine d using H²_c2// (T) = (πΦ 0 /2d²) H c2⊥ (T). This yields SC layer thicknesses d = 16 ± 1 nm and 9 ± 1 nm for samples A and B respectively, in excellent agreement with the values from Ginzburg-Landau theory (18 ± 1 nm and 9 ± 1 nm). (c) Schematic of the experimental setup and Hall bar patterning, indicating the resistive component from the 10 unit cells of LaAlO3 which we always measure in series with the superconducting interface, together with the AlO2 surface states in parallel with the interface.]

One further unusual feature in our Hall data is worthy of mention: a crossover in the gradient of R H from negative to positive at high gate voltages ( fig. S7b, inset). This cannot be explained by a simple interfacial two-band model and is a consequence of the gradual population of states deeper within the substrate.
We note that the total carrier density which we measure for sample A is 2.3×10 13 cm −2 , slightly larger than the critical density 1.68 ± 0.18 × 10 13 cm −2 recently obtained for the Lifshitz transition [7]. However, we see no evidence for two-band behaviour in the Hall coefficient of sample A and we conclude that the extra carriers which we detect most probably originate from deeper within the SrTiO 3 substrate (since any AlO 2 surface states should still be hole-like within this doping range). This is an important point, since it absolves the d xz,yz bands of responsibility for generating our observed ferromagnetic domains. Let us consider the effects of applying a gate voltage to sample B on the ferromagnetic and superconducting phases. SQUID microscopy studies have indicated that gating the LaAlO 3 /SrTiO 3 interface to modulate its carrier density does not have any effect on the density of ferromagnetic inclusions [8,9]. In contrast, applying a positive gate voltage to increase n 2D in sample B does change the shape of the hysteretic peaks, which are broader and clearer at V g = 350 V ( fig. S8a).
Numerous potential explanations exist for this effect. Firstly, we suggest that the electric field across the SrTiO 3 may lead to a further increase in the vortex mobility once depinning has occurred, thus increasing the measured resistance even for small applied fields. We must also consider the expansion of the superconducting channel upon gating: the channel roughly doubles in thickness between V g = 0 V and 350 V ( fig. S8b). A thicker channel will increase the probability of pinning any given vortex during the rotation of its respective ferromagnetic dipole, hence broadening the hysteretic peak. However, the "hardest" pins (from the largest defects) will be located closer to the interface and hence the maximum field at which hysteresis is observed should remain similar: from fig. S8a, this is indeed the case. Another relevant factor in modifying the hysteretic peak shape may be a partial suppression of superconductivity very close to the interface due to the high carrier density in sample B. This would also explain its narrower as-grown superconducting channel compared with sample A, although it is important to remember that the as-grown vertical charge distribution profile is a crucial factor in determining the absolute superconducting layer thickness and this may vary significantly between heterostructures.
A very small hysteresis is visible in the out-of-plane MR R xx (H ⊥ ), although this occurs at fields close to zero, comparable to the typical remanence in superconducting magnets. For completeness, we plot this in fig. S8c. We stress that the absence of any large hysteretic peaks in R xx (H ⊥ ) is entirely expected, since in this configuration there is no rotation of the in-plane moments; instead, the vortex density is much higher and we enter a liquid phase at very low applied fields.
Data Reproducibility
Over an 18-month period spanning the duration of this project, we synthesized numerous "A-type" and "B-type" heterostructures, all of which exhibited qualitatively similar behaviour. Series A have n 2D ∼ 10 13 cm −2 and display single-band transport, while n 2D > 10 14 cm −2 and the Hall coefficient is non-linear in series B. Superconductivity is observed to coexist with ferromagnetism regardless of the carrier density, while Shubnikov-de Haas oscillations emerge in parallel fields below series B interfaces. Data-sets from several other heterostructures may be found in fig. S9. | 6,686.4 | 2013-11-11T00:00:00.000 | [
"Physics"
] |
Bioresorbable Multilayer Organic–Inorganic Films for Bioelectronic Systems
Bioresorbable electronic devices as temporary biomedical implants represent an emerging class of technology relevant to a range of patient conditions currently addressed with technologies that require surgical explantation after a desired period of use. Obtaining reliable performance and favorable degradation behavior demands materials that can serve as biofluid barriers in encapsulating structures that avoid premature degradation of active electronic components. Here, this work presents a materials design that addresses this need, with properties in water impermeability, mechanical flexibility, and processability that are superior to alternatives. The approach uses multilayer assemblies of alternating films of polyanhydride and silicon oxynitride formed by spin‐coating and plasma‐enhanced chemical vapor deposition , respectively. Experimental and theoretical studies investigate the effects of material composition and multilayer structure on water barrier performance, water distribution, and degradation behavior. Demonstrations with inductor‐capacitor circuits, wireless power transfer systems, and wireless optoelectronic devices illustrate the performance of this materials system as a bioresorbable encapsulating structure.
Introduction
Previous demonstrations of this technology include systems for continuous physiological recording, [8] electrophysiological monitoring, [9] electrotherapy, [2,10] and drug delivery. [11] A critical engineering challenge for these and related types of devices is in encapsulation materials that can provide barriers to surrounding biofluids but with degradation behaviors aligned to desired operational lifetimes. The difficulty is in balancing superior barrier performance with suitable degradation rates.
Materials commonly utilized for such purposes include organic polymer films, such as poly(lactic-co-glycolic acid) (PLGA), [12] polyanhydride (PA), [8,13] polyurethane (PU), [10] poly(octanediol citrate) (POC), [14] wax, [15] and cellulose, [16] and inorganic thin films, including monocrystalline silicon (mono-Si), [17] silicon dioxide (SiO 2 ), [9,18] silicon nitride (SiN x ), [9,19] and silicon oxynitrides (SiON). [20,21] Polymer materials are attractive due to their processability but they offer relatively poor barrier characteristics. Wax materials and PU have acceptable barrier characteristics but can require several years to completely degrade in vivo. Cellulose has some promise, but requires improvements in structure, processing, and barrier functionality. [16] Various inorganic alternatives offer superior properties. Thin films of mono-Si and thermally grown SiO 2 provide nearly perfect barriers, and their degradation proceeds by controlled processes of surface erosion. [17,22] The main disadvantage of these materials is that they involve high temperature synthesis procedures that must be conducted in controlled conditions, on narrow classes of substrates. Integration into bioresorbable electronic systems demands lift-off processes and transfer printing methods that can be difficult given the mechanical fragility of these materials. Plasma-enhanced chemical vapor deposition (PECVD) and other methods can be used to form inorganic thin films, [9,21,23] but previous studies of SiO x , SiN y , and SiON indicate that isolated defects and various forms of localized imperfections limit their fluid barrier performance. [20,22,24] This work introduces classes of bioresorbable encapsulation materials that address these shortcomings by presenting a materials design strategy that allows imperfect inorganic barrier layers to meet specific application requirements. These systems consist of bioresorbable forms of PA and SiON configured into multilayer assemblies. Films of PA result from spin-coating a monomer solution followed by ultraviolet (UV)-initiated thiol-ene polymerization and those of SiON follow from deposition by PECVD. Alternating stacks of PA and SiON lead to tortuous paths and interface resistances for water permeation, and thus superior barrier properties. Experimental studies and theoretical simulations explore the interrelated effects of multilayer structure, chemical composition, water permeation characteristics, and degradation behaviors. These results provide evidence that multilayer SiON-PA films offer effective water barrier performance both in vitro and in vivo. Device level demonstrations with bioresorbable inductor-capacitor circuits, wireless power transfer systems, and wireless optoelectronic devices highlight practical applications of these multilayer SiON-PA films as encapsulating structures for bioelectronic devices.
Preparation and Characterization of Bioresorbable Multilayer SiON-PA Films
The bioresorbable multilayer SiON-PA films reported here comprise a PA network, formed by the thiol-ene click chemistry, and SiON grown via PECVD (deposition temperature = 180 °C). Composition tunability of SiON during the PECVD process enables optimization of the degradation rate and mechanical properties (such as film stress). A substrate to provide mechanical support helps to avoid cracking of the SiON films under bending deformations. The substrate materials must offer good thermal stability, suitable mechanical properties, an appropriate degradation rate, low swelling ratio in an aqueous environment, low surface roughness, and strong interface adhesion with SiON. The PA network not only satisfies these requirements, but also offers transparency, enabling potential integration with optoelectronic devices.
Figure 1A shows the thiol-ene reaction between two alkene monomers, 4-pentenoic anhydride (PEA) and 1,3,5-triallyl-1,3,5-triazine-2,4,6(1H,3H,5H)-trione (TTT), and two crosslinkers, pentaerythritol tetrakis(3-mercaptopropionate) (PETMP) and silsesquioxane (SSQ), as well as the degradation process associated with immersion of PA in water. The degradation mechanism of PA primarily involves bulk erosion and passive hydrolysis of the anhydride group, suggesting that the thickness of the PA film is unlikely to decrease significantly, but that the material will undergo weight loss over time. [25,26] The PA network refers to Ra-bSSQ-PA, where a is the molar ratio of PEA (anhydride monomer) to TTT and b is the molar percentage of the thiol group from SSQ. The amounts of anhydride groups (a) and SSQ (b) define the degradation, mechanical, and swelling properties. [27] Fourier-transform infrared spectroscopy (FTIR, Figure S1, Supporting Information) provides information on thiol groups (−SH) that react upon UV irradiation. [28] The results show that more than 97% of these groups react with alkene monomers within 120 s for a 100-μm thick PA film (UV wavelength: 365 nm; intensity: 65 mW cm −2 ). Thus, UV irradiation times are 150 and 30 s for PA films with thicknesses of 100 and 20 μm, respectively.
PA has good thermal stability, favorable mechanical properties, minimal swelling in aqueous environments, and suitable rates of degradation, making it well suited as a substrate for films of SiON. Thermogravimetric analysis (TGA) indicates that PA (Ra-bSSQ-PA, a = 1, 2, and 3, and b = 0, 20, and 50) is stable below 310 °C (evaluated by 5% weight loss, Figure 1B). PA films are hydrophilic (Figure S2, Supporting Information) [29] and amorphous (differential scanning calorimetry (DSC), Figure S3A, Supporting Information), and their coefficients of thermal expansion (CTE) decrease with increasing SSQ content (Figure S3B, Supporting Information). The thermal properties indicate that PA can support deposition of SiON via PECVD at 180 °C. Dynamic mechanical analysis (DMA) of the stress-strain curves yields the fracture strains and the elastic moduli of films of PA (thicknesses: 100 μm), as shown in Figure 1C and Figure S4 (Supporting Information). Incubating these films in phosphate-buffered saline (PBS) solution at 37 °C and comparing their dry and wet weights as a function of time defines aspects of degradation and swelling properties (Figure 1D and Figure S5, Supporting Information). The results suggest that increasing the amount of SSQ or decreasing that of anhydride groups improves the elastic modulus, reduces the degradation rate, and suppresses swelling. The composition of SiON is key to its water barrier properties and degradation. [20] X-ray photoelectron spectroscopy (XPS) shows that the atomic percentages of Si, O, and N in SiON grown by PECVD are 31.30%, 61.91%, and 6.79%, respectively (Figure 1E, Figure S6, and Table S1, Supporting Information).
Figure 1F illustrates the process for preparing a free-standing 3-layer SiON-PA film, where a layer of SiON-PA includes one layer of both SiON and PA.Cross-sectional scanning electron microscope (SEM) images of SiON-PA films enable precise visualization of the film structures, indicating strong interface adhesion between SiON and PA (Figure S7, Supporting Information).The total thicknesses of PA and SiON in these stacks are 100 and 2 μm, respectively, unless otherwise noted.A sacrificial layer of polystyrene sulfonate (PSS) dissolves in water to allow release from a silicon wafer substrate.Specifically, casting the PA monomer solution on a wafer coated with PSS followed by UV exposure and deposition of SiON via PECVD forms one layer of SiON-PA.Repeating the processes for forming films of PA and SiON yields multilayers of SiON-PA.Casting a final overlying layer of PA helps to suppress bowing of the released multilayer that otherwise occurs due to residual stresses in the SiON-PA, as shown in Figure S8 (Supporting Information).Atomic force microscope (AFM) images indicate that the PA and SiON sublayers have surface roughnesses around 1 nm (Figure 1G).The resulting multilayer SiON-PA films are flexible, mechanically robust, and transparent (≥90% with wavelength >500 nm), [30] as shown in Figure 1H and Figure S9 (Supporting Information).Results of finite element modeling and theoretical analysis reveal maximum strains in PA and SiON of 1-, 2-, 3-, and 4-layer SiON-PA films under different bending conditions (Figure S10A-D, Supporting Information, insets show architectures of SiON-PA films).The fracture strain of the SiON (≈1%) limits the minimum bending radii of multilayer SiON-PA films (number of film layers m ≥ 2); the fracture strain of the PA (≈25%) determines that of 1-layer SiON-PA films.The strain distribution in a 3-layer SiON-PA film appears in Figure S10E (Supporting Information) as an example where the minimum bending radius is ≈2.4 mm.The minimum bending radius increases with the number of layers, as summarized in Figure S10F (Supporting Information).
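A back-of-the-envelope estimate of the bending limit follows from assuming pure bending of a homogeneous film, so that strain grows linearly with distance from the mid-plane; the layer positions used below are illustrative assumptions, and the published analysis uses FEM to account for the stiffness mismatch between PA and SiON.

```python
# Simple pure-bending estimate: the strain at distance y from the neutral
# plane of a film bent to radius r is approximately y / r, so the minimum
# radius for a layer with fracture strain eps_f located at depth y is
# y / eps_f. This ignores the PA/SiON stiffness mismatch that the FEM captures.

def min_bend_radius_mm(y_um, eps_fracture):
    """Minimum bending radius (mm) for a layer at distance y_um (um) from
    the neutral plane, given its fracture strain eps_fracture."""
    return (y_um * 1e-3) / eps_fracture

# Illustrative positions (assumed): total film thickness ~100 um, so the
# outermost SiON sublayer of a 3-layer stack sits roughly 25 um from the mid-plane.
print(min_bend_radius_mm(25, 0.01))   # SiON (eps_f ~ 1%)          -> ~2.5 mm
print(min_bend_radius_mm(50, 0.25))   # PA at the surface (~25%)   -> ~0.2 mm
```

The ~2.5 mm estimate for the SiON sublayer is consistent with the ≈2.4 mm minimum bending radius reported for the 3-layer film.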
Water Barrier Properties of Bioresorbable Multilayer SiON-PA Films
Studies of water barrier properties of multilayer SiON-PA films involve their use as encapsulants over patterned traces of Mg as a resistance test structure. A structure of PDMS forms a sealed reservoir for a solution of PBS applied on top of the multilayers with the Mg traces underneath, as illustrated in Figure 2A (see details in the Experimental Section). Reactions between Mg and water that permeates through the SiON-PA films lead to formation of magnesium hydroxide and associated dramatic changes in the optical and electrical properties of the traces. The water barrier property of SiON-PA films can be described as the soaking time required to increase the resistance of the Mg test structure by 20% at 37 °C (Figure S11, Supporting Information). The average lifetimes are 49 ± 2, 54 ± 2, and 57 ± 1 days for 2-, 3-, and 4-layer SiON-PA films, respectively, when the molar ratio of PEA:TTT = 1 and the molar percentage of SSQ-SH = 50% (Figure 2B and Figure S11, Supporting Information, R1-50SSQ). The lifetimes for 1-layer SiON-PA films and bare PA films (Figure S12, Supporting Information) are 14 ± 2 days and 2.3 ± 0.3 h, respectively. The results demonstrate that 1) multilayer SiON-PA films (m = 2, 3, 4) have water barrier properties superior to those of 1-layer SiON-PA films and bare PA films; 2) increasing the number of layers (m = 2, 3, 4) of the SiON-PA films slightly improves the lifetime. Decreasing the amount of SSQ (Figure 2B, R1-20SSQ, R1-0SSQ) or increasing the amount of anhydride (Figure S13, Supporting Information, R2-50SSQ and R3-50SSQ) in the PA decreases the lifetimes. These results indicate that the chemical composition of PA affects the water barrier properties of SiON-PA films, likely due to reductions in the modulus and increases in the swelling of PA. The following experiments use the optimized composition of PA, R1-50SSQ. Additionally, given the practical consideration that adding a SiON-PA layer requires at least another 30 min in film fabrication, the 3-layer SiON-PA film serves as a representative multilayer structure to showcase the materials design strategy in the following experiments, unless otherwise noted. Additional insights into the mechanisms of permeation follow from tests that involve arrays of Mg dots (100 × 100 arrays, each dot: 20 × 20 μm) sealed by 3-layer SiON-PA films (3L-R1-50SSQ) and immersed in PBS at 50 °C, in setups otherwise similar to those in Figure 2A. Grayscale analysis of images collected at various times appears in Figure 2C. The percentage of dark areas (corresponding to Mg dots) remains relatively constant during the first 11 days, then slightly decreases, and finally dramatically decreases after 20 days. Representative images in Figure 2D show that permeation at the locations of several dots occurs in 20 days and then spreads quickly afterwards. Similar phenomena occur with comparatively large, isolated Mg disks (diameter: 4 mm), for cases of encapsulation with 3-layer SiON-PA films (Figures S14 and S15, Supporting Information), 1-layer SiON-PA films (1L-R1-50SSQ, Figure S14, Supporting Information), and bare PA films (Figure S16, Supporting Information), in PBS and Dulbecco's Modified Eagle Medium (DMEM, a cell culture medium) at 50 °C. Mg disks sealed by 3-layer SiON-PA films display visible change in 20 days, and those sealed by 1-layer SiON-PA films and bare PA films in 2 days and 15-80 min, respectively. These observations suggest that water passes mainly through local defects, and that the multilayer structure effectively impedes this process.
Water Permeation Characteristics Through Bioresorbable Multilayer SiON-PA Films
Electrochemical approaches serve as a basis for monitoring the water permeation process, through measurements of current leakage upon application of a voltage across a film and through electrochemical impedance spectroscopy (EIS). Figure 3A illustrates the setup, which involves a working electrode (WE: Au), a counter electrode (CE: Pt wire), and a reference electrode (RE: Ag/AgCl) in an electrolyte bath (PBS, pH = 7.4, 10 × 10⁻³ m). Current leakage tests are similar but use only two electrodes (Au and Pt wire, Figure S17, Supporting Information). The films under test exist as coatings on a silicon wafer coated with Au (WE, thickness: 100 nm), sealed on top with a PDMS well that defines the testing area (9 × 9 mm or 11 mm in diameter) and a reservoir for the electrolyte. A constant potential of 3 V and a voltage with an amplitude of 10 mV applied between the WE and CE enable measurements of current leakage and impedance (frequency range: 1 Hz - 100 kHz), respectively. A decrease in the EIS phase in the low frequency range indicates the existence of defects in a film. A decrease in impedance with time follows from permeation of PBS into the films. The appearance of current leakage corresponds to permeation through the films.
EIS and leakage measurements suggest open defects in the SiON films (Figure 3B) and no open defects in the PA films, but fast fluid permeation across their full area (<12 h, PBS at 50 °C, Figure 3C). By contrast, for 3-layer SiON-PA films, leakage appears after 20 days of immersion in PBS at 50 °C as the impedance decreases gradually (Figure 3D and Figure S18A, Supporting Information). For 1-layer SiON-PA films with the same total thickness, leakage appears after 1 day under the same conditions, with decreasing impedance (Figure 3E and Figure S18B, Supporting Information). These results confirm that water permeates through the films via local defects and that 3-layer SiON-PA films block water permeation more effectively than 1-layer SiON-PA films, consistent with observations from experiments with the Mg dot arrays and disks.
Additionally, compared with 1-layer SiON-PA films, decreasing the total thickness of SiON-PA films while increasing the number of layers, for example to two bilayers of 25-μm PA and 667-nm SiON, increases the time for water permeation from 1 day to 7 days (Figure S19, Supporting Information). These results highlight the key role of the multilayer structure in blocking water permeation, in some ways more important than the overall thickness. EIS and current leakage measurements indicate that water permeation times are nearly the same regardless of the thickness of an individual PA layer in 3-layer SiON-PA films (25 μm, Figure 3D and Figure S18A (Supporting Information), versus 15 μm, Figure 3F and Figure S18C, Supporting Information). This also indicates that PA has a negligible effect on directly blocking water permeation. Defect-rich multilayer SiON-PA films are good water barriers likely because the multilayer structure 1) leads to misalignment between uncorrelated defects in the various SiON layers and thus to tortuous pathways for water permeation, and 2) creates interfaces between adjacent SiON layers that reduce the water concentration (or water flux) in the bottom layer of the multilayer SiON-PA films, as illustrated in Figure S20 (Supporting Information; for additional studies see Note S1, Supporting Information). [33]
Theoretical Modeling of Reactive Diffusion for Water Permeation Across Multilayer SiON-PA Films
Theoretical modeling of reactive diffusion in these systems provides quantitative insights into the essential behaviors (Figure 4). A single-layer model applied to experimental studies of films of SiON (Figure S21, Supporting Information) yields the water diffusivity in the SiON, D_SiON = 10^−13.6 cm² s⁻¹, and the reaction rate constant for hydrolysis, k_SiON = 10^(8.1−4530/T) s⁻¹, where T is the absolute temperature. Since the predicted hydrolysis rate of SiON is only ≈7.5 nm day⁻¹ in PBS at 37 °C, and k_SiON has little effect on the modeling of the change in the resistance of the Mg test structures (Figure S22A, Supporting Information, simulation of the resistance of Mg resistors sealed by 1-layer SiON-PA films), this reaction can be neglected at physiological temperature in studies of water permeation. Additionally, based on observed changes in resistance (Figure 2B and Figure S12, Supporting Information) and on current leakage and EIS measurements (Figure 3D,F), the PA has a negligible direct effect on blocking water permeation. The modeling thus considers that the SiON limits water permeation through the multilayer structure and that changes in resistance follow from both reaction and diffusion of water in the Mg. The model includes two types of interfaces: the SiON-SiON interface and the film-Mg interface (Figure 4A).
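A quick order-of-magnitude check on these parameters, assuming the characteristic diffusion time across a layer of thickness L scales as L²/D and ignoring interfaces and the hydrolysis reaction; the numbers below are illustrative estimates, not outputs of the full reactive-diffusion model.

```python
import math

D_SiON = 10 ** -13.6              # water diffusivity in SiON (cm^2 s^-1), from the fit
T = 310.15                        # physiological temperature (K), ~37 C
k_SiON = 10 ** (8.1 - 4530 / T)   # hydrolysis rate constant (s^-1), from the fit

# Characteristic time for water to diffuse across a SiON layer of thickness L,
# t ~ L^2 / D (a crude scaling estimate that neglects interfaces and reaction).
for L_um in (0.667, 2.0):         # single SiON sublayer in 3-layer vs 1-layer films
    L_cm = L_um * 1e-4
    t_days = (L_cm ** 2 / D_SiON) / 86400
    print(f"L = {L_um} um: t ~ {t_days:.1f} days")

print(f"k_SiON(37 C) ~ {k_SiON:.2e} s^-1")
```

The ~18-day diffusion scale across a 2-μm SiON layer is of the same order as the measured 14 ± 2 day lifetime of 1-layer films, which supports neglecting the slow hydrolysis reaction at 37 °C.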
The time-dependent percentage change in resistance of the Mg test structure, ΔR = (h0/h − 1) × 100%, depends on the initial thickness h0 and the remaining thickness h, obtained in the same manner as in the single-layer model (see Experimental Section). The film-Mg contact transfer coefficient (defined in the Experimental Section), [34,35] H_film-Mg = (2.0 ± 0.3) × 10⁻¹¹ cm s⁻¹, follows from experimentally measured changes in the resistance of Mg structures protected by 1-layer SiON-PA films in PBS at 37 °C (Figure 4B). The SiON-SiON contact transfer coefficient, [34,35] H_SiON-SiON = (3.0 ± 0.2) × 10⁻¹¹ cm s⁻¹, follows from similar measurements of resistance but for structures protected by 3-layer SiON-PA films (Figure 4C). With these parameters, the water concentration in 1-layer (Figure 4D) and 3-layer (Figure 4E) SiON-PA films after soaking in PBS at 37 °C for 10, 30, and 60 days can be calculated. The water barrier effects at the SiON-SiON interfaces lead to abrupt decreases in concentration at those locations. For example, in the 3-layer SiON-PA films, the water concentration at the film-Mg interface is nearly zero even after immersion in PBS for 10 days at 37 °C, followed by a slow increase with time as expected. The cases of 2-layer (Figure S22B, Supporting Information) and 4-layer (Figure S22C, Supporting Information) SiON-PA films show similar results. Based on the water concentration at the film-Mg interface, the resistance changes of Mg traces sealed by 2-layer and 4-layer SiON-PA films can be determined, as summarized in Figure S22D,E (Supporting Information), respectively. The computed barrier properties, defined by the time for the resistance to increase by 20% in PBS at 37 °C, improve with the number of SiON-PA layers, consistent with the experimental results (m ≤ 4, Figure 4F). A discrepancy occurs, however, when the number of film layers is greatly increased for a fixed total thickness of polymer and SiON. The cause may follow from increases in the densities of defects in the SiON layers as their thicknesses decrease, whereas the modeling assumes a constant density. Experimental support comes from increases in current leakage with decreasing SiON thickness (Figure S23, Supporting Information). The increase is pronounced for thicknesses less than 1 μm, corresponding to the number of film layers m > 2. This phenomenon likely also explains the significant improvement in barrier properties of 2-layer SiON-PA films over those of 1-layer SiON-PA films, and the incremental additional improvements for three and four layers.
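The resistance criterion translates directly into consumed Mg thickness; the short sketch below simply inverts ΔR = (h0/h − 1) × 100% under a uniform-thinning assumption to show that a 20% resistance increase corresponds to roughly one sixth of the trace being consumed. The initial thickness value is an assumed number for illustration, not a reported device parameter.

```python
def remaining_fraction(delta_R_percent):
    """Remaining Mg thickness fraction h/h0 for a given resistance increase,
    from dR = (h0/h - 1) * 100% (uniform-thinning assumption)."""
    return 1.0 / (1.0 + delta_R_percent / 100.0)

h0_nm = 300.0   # assumed initial Mg trace thickness, for illustration only
for dR in (20, 50, 100):
    frac = remaining_fraction(dR)
    print(f"dR = {dR:3d}% -> h/h0 = {frac:.3f} "
          f"(~{(1 - frac) * h0_nm:.0f} nm of {h0_nm:.0f} nm consumed)")
```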
Biocompatibility of Bioresorbable Multilayer SiON-PA Films
In vitro and in vivo studies yield insights into the biocompatibility of these films (Figure 5). Testing involves samples of 3-layer SiON-PA films (3L-R1-50SSQ, 3L-R2-50SSQ) and related polymer films (R1-50SSQ-PA and R2-50SSQ-PA) with diameters of 5 mm and thicknesses of ≈100 μm, incubated in cell culture media in a 24-well plate for 2 days and then cultured with mouse fibroblast (L929) cells at 37 °C for another 3 days. [36] The results indicate no significant differences in the percentages of live cells with and without the films, suggesting that these materials and the products of their degradation are not harmful to mouse fibroblast cells (Figure 5A,B, and Figure S24, Supporting Information).
In vivo results follow from implanting the 3-layer SiON-PA films (5 × 10 mm) in the dorsal subcutaneous region of mouse models for 4 and 8 weeks. Changes in the weights of the mice in the implanted and control groups at 4 and 8 weeks are nearly identical (Figure 5C). Figure 5D,E shows the weights of key organs, including heart, spleen, kidney, brain, lung, and liver, explanted at 4 and 8 weeks after implantation, and the distribution of Si in these organs measured by inductively coupled plasma optical emission spectrometry, respectively. These data also show minimal differences. Complete blood counts (CBC) and blood chemistry serve as additional indicators of organ-related responses. CBC reveals the average levels of red blood cells, hemoglobin, hematocrit, platelets, and white blood cells (Figure 5F). Analysis of blood chemistry includes the levels of blood urea nitrogen, proteins, liver enzymes, glucose, lipids, and important minerals (Figure 5G). Both tests show no significant differences between the implanted and control groups. Histological analysis of the skin and muscles in the implantation region (Figure 5H,I, and Figure S25, Supporting Information), including skin morphology, skin thickness, morphology of muscle cells, and the average sizes of myofibers, also indicates negligible differences.
Device-Level Use of Multilayer SiON-PA Films as Encapsulating Layers
Inductor-capacitor (LC) circuits and systems for wireless power transfer serve important roles in many applications of bioresorbable medical devices, from temperature and pressure sensors, [37,38] to cardiac pacemakers, [10,12] to nerve stimulators. [2,11] A demonstration example reported here uses a bioresorbable LC circuit (metal: Zn; dielectric layer: PLGA) and a bioresorbable power receiving module (metal: Mo) to illustrate the use of multilayer SiON-PA films as water barriers. The experiments involve recording the radio frequency (RF) behavior using a vector network analyzer. Figure 6A shows the peak frequency of an LC circuit (inset) built on a glass substrate and encapsulated with a 3-layer SiON-PA film, immersed in PBS at 37 °C. The frequency remains unchanged for at least 12 weeks. Figure 6B presents the RF behavior of a wireless power receiving module, similarly encapsulated with a 3-layer SiON-PA film. The power transmission performance appears visually through a wirelessly activated light-emitting diode (LED). Here, a signal generator supplies RF power to a transmitter coil (60 mm diameter, 3 turns) placed over the receiving coil. The RF behavior changes by a negligible amount in 63 days (9 weeks), and the LED operates after immersion in PBS at 37 °C for at least 70 days. Quantitative analysis of the peak frequency and quality factor (Q factor) of the receiving coil, as important parameters that influence the power transfer efficiency, shows unchanged behavior for 9 weeks, followed by a shift of the peak frequency to lower values and a decrease in the Q factor (Figure 6C). By comparison, the RF behavior of an otherwise similar wireless power receiving module encapsulated with PLGA varies greatly, and the LED fails within a day.
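For context on why the peak frequency and Q factor are tracked, a minimal sketch of the ideal series-RLC relations is given below; the component values are assumptions chosen only to illustrate the calculation, not measured parameters of the devices in this work.

```python
import math

# Ideal (series) RLC resonance relations; L, C, R below are assumed values
# picked so that the resonance lands near typical near-field frequencies.
L = 2.0e-6      # inductance (H)
C = 68e-12      # capacitance (F)
R = 1.5         # series resistance (ohm)

f0 = 1.0 / (2 * math.pi * math.sqrt(L * C))   # resonant (peak) frequency
Q = (1.0 / R) * math.sqrt(L / C)              # quality factor

print(f"f0 ~ {f0 / 1e6:.1f} MHz, Q ~ {Q:.0f}")
# Water uptake raises the effective capacitance and the losses, which is why a
# downward shift in f0 and a drop in Q signal failure of the encapsulation.
```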
Timescales for complete degradation following a stable operating period are also important. Figure 6D shows the evolution of 3-layer SiON-PA films (12 × 12 mm) immersed in PBS at 95 °C, indicating that the constituent materials disappear in 10 days. Results of similar experiments performed at different temperatures [39] are consistent with Arrhenius scaling, ln(1/t) = (−Ea/R)(1/T) + ln A, where t is the degradation time, T is the temperature, Ea is the activation energy for degradation reactions, A is the pre-exponential factor, and R is the universal gas constant. Extrapolations using this scaling suggest that these multilayer SiON-PA films fully degrade within 1 year (Figure 6E).
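A minimal numeric sketch of this extrapolation follows, using only the two time-temperature pairs reported in the text and in the Figure 6E caption (complete degradation in ≈10 days at 95 °C; ≈353 days at 37 °C) to back out Ea and predict degradation times at other temperatures. The published analysis fits data at several temperatures, so this two-point solve is an illustration of the scaling, not a reproduction of that fit.

```python
import math

R = 8.314                      # universal gas constant (J mol^-1 K^-1)
T1, t1 = 95 + 273.15, 10.0     # reported: full degradation in ~10 days at 95 C
T2, t2 = 37 + 273.15, 353.0    # reported extrapolation: ~353 days at 37 C

# ln(1/t) = -(Ea/R)(1/T) + ln(A)  =>  solve the two-point system for Ea and A.
Ea = R * math.log(t2 / t1) / (1 / T2 - 1 / T1)
lnA = math.log(1 / t1) + Ea / (R * T1)

def degradation_days(T_celsius):
    """Predicted full-degradation time (days) at a given temperature,
    using the two-point Arrhenius parameters above (illustrative only)."""
    T = T_celsius + 273.15
    return 1.0 / math.exp(lnA - Ea / (R * T))

print(f"Ea ~ {Ea / 1000:.0f} kJ/mol")            # ~58 kJ/mol with these two points
print(f"t(60 C) ~ {degradation_days(60):.0f} days")
```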
Device-Level Demonstration of Multilayer SiON-PA Films in Mice
Figure 7A presents an exploded view schematic diagram of the wireless optoelectronic device and its encapsulation using 3-layer SiON-PA films. The diameters of the device before and after encapsulation are 6 and 10 mm, respectively. Subcutaneously implanted devices in mice retain their functionality even after 4 weeks, as evidenced by the light emission from activated devices in Figure 7B and Figure S26 (Supporting Information). Inspection of a representative device explanted at the 3-week time point shows expected functionality and indicates that the SiON-PA films remain transparent, enabling the optoelectronic device to operate efficiently without compromising its optical performance in vivo. Figure 7C,D presents the RF characteristics of devices with and without encapsulation using 3-layer SiON-PA films, respectively. Figure 7E summarizes the changes in resonant frequencies and Q factor values of devices during the implantation period (4 weeks) in mice. Both parameters remain stable for the cases with encapsulation, whereas those without show significant decreases, caused by exposure to biofluids in mice. Figure 7F presents photographs of mice implanted with optoelectronic devices with (n = 3) and without encapsulation (sham group, n = 3). Cases with encapsulation show expected function through LED illumination for more than 4 weeks. By contrast, devices without encapsulation rapidly fail to function, typically after 2 days. These results highlight the superior water barrier performance of SiON-PA multilayer films within mice. We note that encapsulation using wax with a thickness of 300 μm can support operation of similar devices for 7 days postimplantation, as published previously. [2]
Conclusion
In conclusion, this work introduces an organic-inorganic multilayer design strategy for bioresorbable encapsulating structures that can protect implantable electronic devices from biofluids to ensure reliable operational performance over timeframes longer than those possible with previously reported materials approaches.Tailored schemes in polymer synthesis and chemical vapor deposition of films of PA and SiON, respectively, provide means for controlling the rates of degradation.The resulting multilayer SiON-PA films (total thickness of PA and SiON: 100 and 2 μm) can effectively block water transport for at least 7 weeks (49 ± 2, 54 ± 2, and 57.3 ± 0.6 days for 2-, 3-, and 4-layer of SiON-PA films, respectively).Comprehensive experimental and modeling studies demonstrate that the multilayer structure plays a significant role in water barrier properties due to formation of tortuous permeation paths and low water concentration in the bottom layers.Investigations of viability in cell cultures and in animal models indicate that the multilayer SiON-PA films have both good biocompatibility and superior water barrier performance.Successful demonstrations as encapsulating structures for LC circuits, wireless power transfer systems, and wireless optoelectronic devices suggest that the materials strategy introduced here may find broad and immediate applications for a range of practical electronic devices that are capable of resorption in biological and environmental settings.
FTIR measurements (Figure S1, Supporting Information) indicated that 97% of the thiol groups reacted with alkyne monomers within 120 s in the synthesis of a 100-μm polymer film.These results provided approximate times for UV irradiation of films with different thickness, for example, 150, 75, and 30 s for films with thicknesses of 100, 50, and 20μm, respectively.To confirm the applicability of polymers in the growth of SiON via PECVD, the thermal properties, including decomposition temperature (Figure 1B), thermal transition temperature (Figure S3A, Supporting Information), and coefficient of thermal expansion (CTE, Figure S3B, Supporting Information), were characterized using TGA (DSC Q400, TA Instruments), DSC (DSC Q400, TA Instruments), and thermomechanical analysis (TMA Q400, TA Instruments), respectively.The mechanical properties (Figure 1C and Figure S4, Supporting Information), specifically the stress-strain responses, were characterized by DMA (RSA-G2 solids analyzer, TA Instruments).The optical transparency of the films was measured using a UV/Vis/NIR Spectrophotometer (Figure S9, Supporting Information; LAMBDA 1050, PerkinElmer).To test film swelling (Figure S5, Supporting Information) and degradation (Figure 1D), films were soaked in PBS at 37 °C and their wet and dry weights were measured using a microbalance (XPR2, Mettler Toledo).Film surface roughness and wettability were characterized by AFM (Figure 1G; Dimension Edge, Bruker) and contact angle measurements, respectively (Figure S2, Supporting Information, VCA Optima XE).
SiON Growth and Characterization: SiON films were deposited via PECVD at 180 °C (SiH 4 100 sccm, NH 3 300 sccm, N 2 400 sccm, and N 2 O 1420 sccm; LPX CVD, SPTS Technologies Ltd., UK) at a high frequency (13.65 MHz), with thickness adjusted through the deposition time.The elemental percentages of the resulting films were characterized using XPS (Figure 1E, Figure S6, and Table S1 (Supporting Information); ESCALAB 250Xi, Thermo Fisher Scientific) after in situ etching of the films for 120-240 s.
Fabrication Process of Free-Standing SiON-PA Films: Fabrication involved casting a sacrificial layer of PSS on a silicon (Si) wafer, repeating the steps for preparing films of PA and SiON, and then releasing the resulting multilayers from the wafer by removing the PSS. Specifically, the process began with spin-coating a thin layer of PSS solution and baking at 110 °C for 30 s to form a film of PSS (thickness: 2 μm) on a wafer. Coating with a solution of monomers followed by exposure to UV formed a film of PA with thickness controlled by the spinning speed and time. PECVD defined a film of SiON with thickness determined by deposition time. Repeating this process followed by coating a final layer of PA on top completed the process. Release by soaking in water for a few hours yielded large-scale free-standing SiON-PA films. Rinsing with water and drying with N2 followed by cutting using a CO2 laser ablation system (VLS3.50, Universal Laser) yielded small pieces with desired shapes.
The overall thicknesses of SiON and PA in multilayer SiON-PA films were 2 and 100 μm, respectively, regardless of the number of film layers. One layer of SiON-PA film consisted of a single layer of SiON and a single layer of PA. Thus, the number of SiON-PA layers determined the thicknesses of the individual SiON and PA sublayers in the multilayer assembly. Free-standing SiON-PA films included an additional layer of PA on top. For example, a 3-layer SiON-PA film (Figure 1F) involved three bilayers of 25-μm PA and 667-nm SiON as well as a layer of 25-μm PA on top; a 1-layer SiON-PA film included one bilayer of 50-μm PA and 2-μm SiON as well as a layer of 50-μm PA on top.
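The sublayer thicknesses follow directly from this bookkeeping; a small helper, assuming the 100 μm / 2 μm totals stated above, reproduces the values quoted for the 1- and 3-layer stacks.

```python
def sublayer_thicknesses(m, total_pa_um=100.0, total_sion_um=2.0):
    """Per-sublayer thicknesses for an m-layer free-standing SiON-PA film.

    Each of the m SiON-PA layers contributes one PA and one SiON sublayer,
    plus one extra PA capping layer on top, so the total PA thickness is
    split over m + 1 PA sublayers and the SiON total over m sublayers.
    """
    pa_um = total_pa_um / (m + 1)
    sion_nm = total_sion_um * 1000.0 / m
    return pa_um, sion_nm

for m in (1, 2, 3, 4):
    pa_um, sion_nm = sublayer_thicknesses(m)
    print(f"{m}-layer film: PA sublayers {pa_um:.0f} um, SiON sublayers {sion_nm:.0f} nm")
```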
Numerical Simulation and Theoretical Modeling for Minimum Bending Radii of Multilayer SiON-PA Films: Numerical simulation by the finite element method (FEM) and analytical modeling revealed mechanical aspects of the bendability of multilayer SiON-PA films. ABAQUS software simulated the distribution of strains in the PA and SiON of multilayer SiON-PA films with different bending radii (Figure S10, Supporting Information). A plane strain model with the element CPE8R was adopted for the thin-film geometry.

In Vitro Cytotoxicity and Biodegradation: For the cytotoxicity assays, calcein-AM and ethidium homodimer-1 were added to Dulbecco's PBS (DPBS, 10 mL, Thermo Fisher Scientific) to prepare a staining solution. After removing the medium and the films from the cells, the staining solution (400 μL) was added to each well and incubated for 30 min at room temperature, followed by replacing the staining solution with 0.5 mL DPBS in each well. The stained cells were then ready for imaging using a confocal microscope (SP8, Leica). Calcein generated from calcein-AM by esterase in live cells exhibits strong green fluorescence (excitation/emission ≈490/515 nm), while ethidium homodimer-1 binds to the DNA double helix in cells with damaged membranes (dead cells) and emits red fluorescence (excitation/emission ≈490/617 nm), as shown in Figure 5B and Figure S24 (Supporting Information). A blind analysis of cell viability using ANOVA showed no significant differences between cells cultured with films and the control group. The in vitro biodegradation tests were performed by soaking 3-layer SiON-PA films (3L-R2-50SSQ, 12 × 12 mm, thickness ≈100 μm) in PBS at a series of elevated temperatures, followed by observing the degradation process once per day under an optical microscope (VHX-6000 Series, Keyence) until the films completely dissolved.
In Vivo Biocompatibility and Biodegradation: All procedures associated with the animal studies followed the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health and were approved by the Institutional Animal Care and Use Committee at Northwestern University (approval numbers: IS00018881 and IS00024102).
Female mice (C57/BL6; age at the initiation of the treatment: 8-12 weeks; Charles River Laboratories) were anaesthetized using isoflurane gas (1-2% isoflurane in oxygen) in an anesthetizing chamber during the implantation surgery. The films under test were 3-layer SiON-PA films with the polymer composition R2-50SSQ-PA, cut into a rectangular shape (5 × 10 mm, thickness ≈100 μm). After sterilizing in a steam sterilizer for 25 min at 121 °C (or by UV irradiation overnight), two films were implanted in the dorsal subcutaneous space of each mouse (n = 9) without overlap between films. The sham group (n = 4), which served as a control, was subjected to the same surgical procedures but without film implantation. The incision was closed using interrupted sutures, followed by a standard combined postoperative analgesic regimen. The weights of the mice were recorded every week (Figure 5C). After 4 and 8 weeks, the mice with films implanted were euthanized to collect blood, organs, and tissues for subsequent blood, elemental, and histological analyses, referred to as the 4-week (n = 4) and 8-week (n = 5) groups, respectively. Mice in the sham group were euthanized at 8 weeks (n = 4).
To evaluate the distribution of Si resulting from film degradation, organs, including heart, spleen, kidney, brain, lung, and liver, were collected and weighed (Figure 5D), and then dissolved in 50-mL tubes (metal-free, Thermo Fisher Scientific) using solutions of 500-μL HNO3 and 125-μL H2O2 (volume fraction, HNO3:H2O2 = 4:1), which were kept in a water bath at 65 °C until the organs were fully digested, and diluted with Millipore water to 10 mL after the solution cooled to room temperature. The solution was then analyzed by inductively coupled plasma optical emission spectrometry (ICP-OES, iCAP 6500, Thermo Fisher Scientific). All measurements were performed simultaneously at three different wavelengths, 212.412, 251.611, and 221.667 nm, and the results presented here were obtained by averaging the values of these emission lines (Figure 5E). [48] To assess the overall health of mice implanted with films, blood samples were collected and tested at the Veterinary Diagnostic Laboratory at the University of Illinois. Blood was collected in collection tubes with K2-EDTA coatings and fully mixed with K2-EDTA to obtain whole blood for the CBC test. To obtain serum for blood chemistry tests, blood was collected in tubes without K2-EDTA coatings and centrifuged at 5000 rpm for 10 min after immersion in an ice bath for 5 min. The serum was transferred to a new tube using a pipette.
For histological studies, subcutaneous tissue and muscle were collected, fixed, and stored in formalin in 20-mL glass vials, followed by embedding in paraffin, sectioning, and staining with hematoxylin and eosin (H&E) at the Mouse Histology and Phenotyping Laboratory (MHPL) at Northwestern University. These tissue samples were observed using bright-field microscopy (VS120, Olympus). A blind histological assessment was performed, and representative images are shown in Figure 5H and Figure S25A (Supporting Information). Skin thickness and myofiber size were blindly analyzed using ANOVA, which showed no significant differences between the film implantation groups and the control groups (Figure 5I and Figure S25B, Supporting Information).
Device Fabrication and Measurements: The inductor-capacitor (LC) circuit consisted of laser-cut Zn coils (width: 100 μm) insulated by wax and then electrically connected with two Zn electrodes separated by a film of PLGA (thickness: 10 μm; lactide:glycolide, 50:50; molecular weight: 50 000−75 000) using a bioresorbable conductive wax paste. [37]The LC circuit was then placed on a glass slide, with the exposed side encapsulated by a 3-layer SiON-PA film.A PDMS tube positioned on top of the film with edges sealed using marine epoxy defined a structure to enable immersion in PBS solution, placed in an oven at 37 °C.The change in frequency of the LC circuit was monitored using a vector network analyzer (E5063A, Keysight, USA).
For the power receiving module, the fabrication began with laser-cutting to form a molybdenum coil from a foil (Mo, thickness: 100 μm; Goodfellow, USA) followed by removal of surface oxidation by immersion in aqua regia for 3-5 min.The next step involved separating the coils with PLGA films (thickness: 10 μm) and electrically connecting the coils and a LED with a bioresorbable conductive wax paste, with the system encapsulated by a 3-layer SiON-PA film.Test conditions were the same as those for the LC circuit.The RF behavior of the coils (S11 in dB) was evaluated using a vector network analyzer, to determine the peak frequency and quality factor (Q).A signal generator (3390, Keithley) provided RF power to a transmitter coil (60 mm diameter, 3 turns) placed over the coils to wirelessly power the LED.
The implantable LED devices used designs modified from flexible near-field wireless optoelectronic systems reported previously for optogenetics applications. [49] The device consisted of two circular Cu coils (diameter: 6 mm, 10 turns; thickness: 18 μm; dielectric layer: 75-μm polyimide) with surface-mounted electronic components for power transfer. Power transfer uses magnetic coupling to a separate RF transmission loop antenna (a near-field communication (NFC) extension board, STMicroelectronics) or a wireless powering system (Optogenetics Starter Kit, NeuroLux, USA) operating at 13.56 MHz (Figure S26, Supporting Information). Device fabrication began with patterning a flexible substrate made of a copper-PI-copper laminate (Pyralux, DuPont, USA) using laser ablation (LPKF, ProtoLaser U4, Germany) to define the circuit interconnects and the bonding pads for the electronic components. These flexible printed circuit boards with customized designs can also be obtained from commercial vendors (e.g., PCBWay). Hot-air soldering with indium solder bonded the electronic components, including two diodes, one capacitor, and two LEDs with emission at 630 nm. The resonant frequency of the devices was measured using a vector network analyzer (E5063A, Keysight) while the LEDs were wirelessly powered by the NFC extension board or the wireless powering system.
Device encapsulation using SiON-PA films began with cutting 3-layer SiON-PA films into circular forms (diameter: 10 mm) using a CO2 laser ablation system, followed by deposition of SiO2 (thickness: ≈30 nm) via sputtering (AJA Orion Sputter System, AJA International, Inc.) and UV/ozone (UVO) exposure of the SiO2 for 3 min. The next steps included sandwiching a device between two SiON-PA films (the UVO-treated sides facing the device), applying silicone adhesive (Kwik-Sil, World Precision Instruments, USA) to the edges, and placing the assembly in a hot-press machine, preset to a temperature of 35 °C, for 30 min. Applying epoxy (Loctite, USA) to seal the edges further prevented water permeation from those regions to complete the process.
Device-Level Demonstration of Water Barrier Performance In Vivo: All procedures associated with the animal studies followed the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health and were approved by the Institutional Animal Care and Use Committee at Northwestern University (approval numbers: IS00018881 and IS00024102). Female mice (C57/BL6; age at the initiation of the treatment: 8-12 weeks; Charles River Laboratories) were anaesthetized using isoflurane gas (1-2% isoflurane in oxygen) in an anesthetizing chamber before the implantation surgery. After sterilization by UV irradiation for 2 h, three optoelectronic devices encapsulated using the 3-layer SiON-PA films underwent subcutaneous implantation (n = 3). Mice in the sham group (n = 3) were subjected to the same surgical procedures, but the implanted devices lacked film encapsulation. The incision was closed using interrupted sutures, followed by a standard combined postoperative analgesic regimen. After a designated implantation period, the mice in both the implantation and the sham groups were anaesthetized to collect data on the resonant frequency of the implanted devices and the LED illumination conditions. One mouse was sacrificed to harvest the device at the 3-week time point to provide a visual examination of the device and the encapsulation films.
Statistical Analysis: Results are presented as mean ± standard deviation with n ≥ 3. Ordinary one-way ANOVA with post hoc Tukey's means comparisons (p < 0.05) was used for statistical significance testing in the histological analysis. GraphPad Prism and/or Origin software were used to perform the statistical analysis. ImageJ software was used for grayscale analysis of images.
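An equivalent analysis can be scripted; the short Python sketch below uses scipy and statsmodels rather than Prism/Origin and runs on hypothetical group measurements (the numbers are placeholders, not data from this study), purely to illustrate the one-way ANOVA with Tukey post hoc comparisons described above.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical skin-thickness measurements (um) for the three groups;
# these values are placeholders for illustration only.
sham = np.array([412, 398, 405, 420])
wk4  = np.array([401, 415, 408, 396])
wk8  = np.array([418, 402, 410, 395, 407])

f_stat, p_value = stats.f_oneway(sham, wk4, wk8)   # ordinary one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

values = np.concatenate([sham, wk4, wk8])
groups = ["sham"] * len(sham) + ["4-week"] * len(wk4) + ["8-week"] * len(wk8)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # post hoc Tukey HSD
```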
Figure 1.Preparation and characterization of polyanhydride (PA), silicon oxynitrides (SiON), and SiON-PA films.A) Illustration of the PA synthesis and hydrolysis-induced degradation pathway.B-D) Decomposition temperature (B), mechanical properties (C), and degradation behavior (D) of PA (thickness: 100 μm) with different compositions characterized by thermogravimetric analysis (TGA), dynamic mechanical analysis (DMA), and soak testing (three samples for each type of film), respectively.E) Elemental distribution of a SiON film grown via plasma-enhanced chemical vapor deposition (PECVD) characterized using X-ray photoelectron spectroscopy (XPS).F) Illustration of the process for preparing a free-standing 3-layer SiON-PA film.G) Surface roughness of PA and SiON characterized by atomic force microscopy (AFM, scanning size: 20 × 20 μm).H) Photographs of a 3-layer SiON-PA film.Scale bars: 1 cm.
Figure 2. Water barrier properties of bioresorbable SiON-PA films. A) Experimental setup for evaluation of water barrier properties using Mg resistors and Mg dot arrays. B) Lifetimes of Mg resistors sealed by SiON-PA films (time for the Mg resistance to increase by 20% in PBS (pH = 7.4, 10 × 10⁻³ m) at 37 °C, Figure S11) for different PA compositions (amount of SSQ) and numbers of SiON-PA layers (m = 1, 2, 3, 4). Evaluations for each type of film include at least two samples. C) Grayscale analysis of images of Mg dot arrays. Mg dot arrays are encapsulated with 3-layer SiON-PA films (3L-R1-50SSQ, soaked in PBS at 50 °C). D) Representative images of Mg dot arrays (dot: 20 × 20 μm; thickness: 100 nm) in (C). Arrows indicate the corrosion of Mg dots at day 20 due to water permeation across the film, spreading to a larger area at day 22 (image size: 1.5 × 1.5 mm).
Figure 3. Electrochemical measurements on PA, SiON, and SiON-PA films. A) Experimental setup for electrochemical impedance spectroscopy (EIS) measurements using a three-electrode system, including a reference electrode (RE, Ag/AgCl), a counter electrode (CE, Pt wire), and a working electrode (WE, Au). B) Bode plot of a 2-μm SiON film. A current of 288 nA cm⁻² appears when a voltage of 3 V is applied between the Pt electrode and the Au electrode. C) Bode plot of a 75-μm PA film. A large current (>0.1 mA cm⁻²) appears after the PA film is soaked in PBS for 12 h, indicating that water permeates quickly through the entire PA film. D) Impedance at 1 Hz and current leakage of 3-layer SiON-PA films (three bilayers of 25-μm PA and 667-nm SiON). E) Impedance at 1 Hz and current leakage of 1-layer SiON-PA films (one bilayer of 75-μm PA and 2-μm SiON). F) Impedance at 1 Hz and current leakage of 3-layer SiON-PA films with PA sublayers of reduced thickness (three bilayers of 15-μm PA and 667-nm SiON). Insets are illustrations of the film architectures. PA here refers to R1-50SSQ. All films were soaked in PBS (pH = 7.4, 10 × 10⁻³ m) at 50 °C.
Figure 4. Theoretical modeling of reactive diffusion for water permeation across multilayer SiON-PA films. A) Illustration of a multilayer model for resistance changes of Mg resistors sealed by SiON-PA films. PA serves as an interface material between two adjacent SiON layers, forming SiON-SiON interfaces. Another interface exists between the SiON-PA film and an Mg resistor, i.e., the film-Mg interface. B) Theoretical and experimental resistance changes with time of Mg resistors sealed by 1-layer SiON-PA films. C) Theoretical and experimental resistance changes with time of Mg resistors sealed by 3-layer SiON-PA films. D) Water concentration distribution in a 1-layer SiON-PA film after 10, 30, and 60 days. E) Water concentration distribution in a 3-layer SiON-PA film after 10, 30, and 60 days. F) Theoretical and experimental water barrier properties of SiON-PA films as a function of the number of SiON-PA layers (total thickness of SiON and PA: 2 and 100 μm, respectively). Water barrier properties are evaluated by the lifetimes of Mg resistors, defined as the time for their resistances to increase by 20%. Dashed lines in B, C, and F indicate deviations from the solid lines due to errors in determining the contact transfer coefficient of the film-Mg interface, and dots are experimental data (PBS, pH = 7.4, 10 × 10⁻³ m, at 37 °C). PA here refers to R1-50SSQ.
Figure 5. In vitro and in vivo studies on biocompatibility and biodegradation of 3-layer SiON-PA films. A) Cell viability for 3-layer SiON-PA films (3L-R1-50SSQ, 3L-R2-50SSQ; R1 and R2 refer to molar ratios of PEA/TTT = 1 and 2, respectively) and the related PA films (R1-50SSQ-PA and R2-50SSQ-PA). B) Fluorescence images of cells cultured with 3-layer SiON-PA films in contrast to a control assay. C) Weight change of mice with/without film implantation. Nine mice were implanted with 3-layer SiON-PA films: four mice in the 4-week group and five mice in the 8-week group. Four mice without film implantation served as the sham group. (Film thickness: 100 μm; size: 5 × 10 mm.) D) Tissue weight changes of mice, including heart, spleen, kidney, brain, lung, and liver, in the three groups. E) Silicon distribution in heart, spleen, kidney, brain, lung, and liver in the three groups of mice. F) Complete blood count (CBC) analysis from mice. G) Blood chemistry analysis of mice. H) Histological analysis of mouse skin with/without film implantation. I) Average skin thickness of mice with/without film implantation. No significant differences exist among the sham, 4-week, and 8-week groups, analyzed using an ordinary one-way ANOVA with post hoc Tukey's means comparisons. Results in (C-G) are presented as mean ± standard deviation. Scale bars: (B) 1 mm; (H) 200 μm.
Figure 6. Device demonstrations of 3-layer SiON-PA films as encapsulating structures and studies of their degradation. A) Frequency stability of an LC circuit sealed on a glass slide by 3-layer SiON-PA films and soaked in PBS (pH = 7.4, 10 × 10⁻³ m) at 37 °C. Inset is a photograph of the LC circuit. B) Radio frequency (RF) behavior of a power receiving module for wireless power transfer, sealed on a glass slide by 3-layer SiON-PA films and soaked in PBS at 37 °C. Inset is a photograph of the power receiving module with an LED indicator. C) Frequency change of the power receiving module in (B). D) Degradation process of 3-layer SiON-PA films (3L-R2-50SSQ) in PBS at 95 °C. E) Arrhenius plot of ln(1/t) versus 1000/T. This illustrates the temperature-dependent degradation of 3-layer SiON-PA films, which are expected to fully degrade in 353 days at physiological temperature. Scale bars: 5 mm.
Figure 7. Optoelectronic device demonstration of water barrier performance of 3-layer SiON-PA films through implantation studies in mice. A) Exploded view schematic diagram of the device and its encapsulation. Inset shows activated LEDs, indicating proper functioning of the device. B) Schematic diagram of the subcutaneous implantation and photographs of LED activation upon implantation for 3 and 4 weeks. Inset is a photograph of a device explanted at the 3-week time point, showing wireless activation of the LEDs and the optical transparency of the encapsulating films. C) RF behavior of devices encapsulated using 3-layer SiON-PA films. D) RF behavior of devices without encapsulation. E) Frequency and Q factor of the devices shown in (C) and (D), changing with implantation time in mice. F) Photographs of mice implanted with devices during wireless activation. Devices with encapsulation using 3-layer SiON-PA films can be wirelessly powered after implantation in mice (n = 3) for 4 weeks, as evidenced by the activated LEDs. Those without encapsulation (sham group, n = 3) fail to function after implantation for 2 days. One mouse was sacrificed to harvest the device (B, inset) after 3 weeks. Insets show activation of the LEDs in a darkroom for improved visualization. Scale bar: 1 mm.
"Materials Science",
"Engineering",
"Medicine"
] |
Impact of COVID-19 News on Performance of Indonesia Stock Market
This research seeks to show that information about COVID-19 affects market arousal, indicated by the frequency of transactions, and market performance, shown by the Jakarta Composite Index (JCI). The theories used for the analysis are prospect theory and the efficient market hypothesis (EMH). The results of the statistical analysis indicate that information about COVID-19 has a negative effect on the JCI, and that the previous day's trading volume also affects the index. The evidence briefly demonstrates an effect of COVID-19 and of weakening daily transactions on the JCI. The research findings show that the uncertainty in the JCI market is in line with VUCA and prospect theory. In this case, uncertainty affects the decision-making behavior of investors. Investors' decision-making behavior is accumulated in market behavior and is subsequently manifested in index changes, in accordance with the efficient market hypothesis. The contribution of this research to the study of financial market behavior is that the uncertainty and ambiguity faced by investors affect market behavior and market changes as measured by the index.
Introduction
Uncertainty on the Indonesia Stock Exchange (IDX), as on various world exchanges, caused by Covid-19 can be matched with VUCA (volatility, uncertainty, complexity, and ambiguity). The VUCA concept states that the business environment of recent decades (roughly 1990-2014) is full of difficult-to-predict VUCA conditions [1]. Everything takes place quickly, is full of uncertainty, involves very complicated relationships, and also creates confusion and hesitation. VUCA conditions also occurred at the time of the Covid-19 outbreak. The response to an uncertain and stressful situation can also be traced and attributed to prospect theory [2]. There was a decrease in the Jakarta Composite Index (JCI) in the early period of 2020 in line with the outbreak of Covid-19. The conditions reflected in VUCA and prospect theory can be attributed to the current Covid-19 outbreak. At this time, with the Covid-19 outbreak still ongoing, business activities, financial markets (foreign exchange rates), and capital markets have fluctuated, and business prospects have also become uncertain due to lockdowns in many countries. The complexity of the situation also arises because the transportation, oil, and manufacturing industries have been hit, with all of these sectors linked to each other. State leaders as well as businesses are also experiencing confusion and hesitation in making decisions because everything has happened suddenly and data are limited. News and discussion have outpaced data and analysis, because events unfolded surprisingly and quickly. Buzz refers to news topics and discussions that are widely circulated [3]. News, as a source of discussion that provokes responses and is widely circulated, is also known to have an impact on perceptions and concerns in the community as well as among business people. In addition to news and extensive talk about Covid-19, various partial lockdown policies in Indonesia add to the economic burden in the form of a partial closure of business activities, which will certainly have a direct impact on the economy through the transportation, distribution, production, trade, and services sectors.
On that basis, rumors affect predictions and expectations through behavioral mechanisms. In line with Schindler [3], news and extensive discussion of important topics (in this case, Covid-19 rumors) can affect the perceptions and expectations of investors making investment decisions in financial markets. In other words, the news can affect market sentiment. In short, this study seeks to explain how Covid-19 news impacts investment transaction decisions on the market, or market sentiment, and further how that market sentiment affects market performance. Empirical research on the impact of Covid-19 news on market performance is useful for examining how information from Covid-19 news affects market performance, in this case on the IDX. Indirectly, this research also explores the implementation of two aspects of agency theory, namely those concerning risk and information asymmetry, which cause differences in the expected rate of return.
Efficient Market Hypothesis
Agency theory highlights three things: conflicts of interest related to ownership, risk preferences, and information asymmetry [4]. Some investors or traders in the capital market are noise traders. They lack information, or do not understand it, so they simply follow the actions of others (herding). Because of their large numbers, however, this bias affects the market [5]. The efficient market hypothesis states that all information about a company is reflected in its share price; thus, a change in the share price reflects a change in the value of the company as measured by the performance it achieves. The relationship between a company's performance and the market can be seen from the assumptions raised. The first is that market participants are rational. In fact, the rationality of investors differs (is random) depending on the information they receive [6]. Traders can be grouped into well-informed traders and participating traders. The second is that investors take advantage of the development of market information in the form of price movements associated with fundamental analysis and technical analysis. As such, EMH is close to CAPM analysis, which is a risk analysis based on changes in individual stock prices and market changes. The reason EMH is sometimes dismissed is its failure to predict and explain asset bubbles, excessive market reactions, and excessive stock price volatility [5]. On this basis, theories of market risk and market behavior are reasonable. However, EMH continues to contribute the insight that decisions about pricing and investment are influenced by information that is later reflected in prices. In other words, the share price is an assessment or reflection of investor expectations, which result from the various pieces of information investors receive. EMH can also accommodate the fact that some investors really understand the information while others do not (noise traders), and this is also a source of market sentiment. This market sentiment can be identified from the market passion reflected by the number of transactions in the capital market.
Prospect Theory
The application of prospect theory to aggregate stock market return puzzles was reviewed by Han and Hsu [7], who argued that prospect theory can explain the complexity of aggregate stock market mechanisms. This theory assumes that investors' risk-averse behavior depends on their previous experience. If they have experienced sufficiently good results in the capital markets, they will return after a shock, because past losses will soon be recovered by future earnings. However, if they have lost enough, they will be less willing to take risks. Prospect theory also underscores the risk premium: investors want higher returns if there are frequent fluctuations, so they expect that once there is a loss, a larger and faster recovery is demanded. In short, investors weigh the various movements of risk and income opportunities in the capital market [7,8]. This theory suggests that investors will continue to invest and make investment decisions based on the opportunities in the conditions to come. Any shocks to stocks will soon be followed by recovery as long as there is still a good chance in the coming period.
Method
The research model proposed is that the JCI is influenced by information about Covid-19. Covid-19 information will likely affect the behavior of traders and brokers [9]. Similarly, there is evidence of an influence of the Covid-19 death toll on the stock market performance of various European countries [10]. In addition to examining the impact of Covid-19, the model can be based on the efficient market hypothesis (EMH), which states that all information will be reflected in the market price, and that the market price is also a reflection of investor expectations. Changes in market prices will ultimately impact stock indices. In addition, based on prospect theory, investors who are risk averse will move their investments to sectors that are considered safe. If there is an impact of Covid-19 on a company's performance, investors will sell their shares and transfer to other forms of investment. Data that can serve as a proxy for the condition or performance of the stock market are the frequency of trading. Meanwhile, the use of real-time models and lags of t-1 and t-2 for Covid-19 news was carried out in a previous investigation [11], while other work has examined the impact of the Covid-19 outbreak on economic activities [12].
Based on the theory and also on previous research, Covid-19 news can have an effect in real time and also with a lag. In addition, based on research and experience showing that investment decisions and stock transactions are influenced by previous performance and conditions, the following econometric equation model can be prepared.
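The equation itself is not reproduced in this extract; a plausible form, consistent with the variables described below (daily Covid-19 cases and the lagged total trading frequency as regressors, with the JCI as the dependent variable), is

\[ \mathrm{JCI}_t = \beta_0 + \beta_1\,\mathrm{COVID}_t + \beta_2\,\mathrm{FREQ}_{t-1} + \varepsilon_t \]

where additional lag terms (e.g., t-2) may also appear in the original specification.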
In this study, Covid-19 information is proxied by the number of new cases per day (daily data). The trade frequency lag is defined as the lagged total trading frequency (daily data).
The observation period of this study runs from March 2, 2020 to June 12, 2020. This period can be considered reasonably representative; based on daily data from official announcements, it covers n = 67 IDX working days (holidays are not taken into account). Daily stock performance data come from OJK (daily transactions); the Covid-19 data come from the Wikipedia page on the Covid-19 pandemic in Indonesia (http://en.m.wikipedia.org/wiki/Covid-19_pandemic_in_Indonesia).
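A minimal sketch of how such a daily regression could be estimated in Python with statsmodels follows; the study itself used SPSS version 22, and the file name and column names below are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical input: one row per IDX trading day with columns
# 'date', 'jci' (composite index close), 'covid_cases' (new daily cases),
# and 'trade_freq' (total number of trades). File name is illustrative.
df = pd.read_csv("idx_covid_daily.csv", parse_dates=["date"]).sort_values("date")

df["freq_lag1"] = df["trade_freq"].shift(1)   # previous day's trading frequency
df = df.dropna()

X = sm.add_constant(df[["covid_cases", "freq_lag1"]])
model = sm.OLS(df["jci"], X).fit()
print(model.summary())                        # coefficients, adjusted R^2, F-statistic

# Rough multicollinearity check (VIF < 10 is the usual rule of thumb).
for i, name in enumerate(X.columns):
    print(name, variance_inflation_factor(X.values, i))
```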
Results
The descriptive statistics and correlations of the variables used in this study are shown in Table 1 and Table 2. Data processing with SPSS version 22 yields an adjusted R-square of 73.10%. This high adjusted R-square indicates that market conditions, reflected by the volatility of transaction activity, together with news about Covid-19, serve as strong estimators/predictors. It also indicates that the Covid-19 outbreak is able to affect the performance of the capital market [12]. The findings shown in Table 3 suggest that the influence may depend on the length of the partial lockdown, which affects economic activity and further provides a negative signal for investors. The condition of declining activity and the potential for losses to come drive negative expectations, resulting in a further decline in market activity and increasing negative market sentiment; this negative market sentiment ultimately also depresses the JCI. The regression model can be considered feasible given an F-statistic of 88.004 with a significance of 0.000 (Table 4). The research model, which uses real-time variables and lags of t-1 and t-2, empirically shows that the model remains fit [11]. This good fit shows that in times of crisis the role of black swan information about the impact of Covid-19 is significant in the model; in other words, it is proven to play a role in influencing the JCI. The level of fit of the model also shows that Covid-19 affects capital markets [13]. Pandemics and restrictions cause supply shocks in employment, productivity, and markets. Going forward, they will affect company performance, and those conditions are immediately responded to by exchanges with declines in prices and stock indices in Germany, Japan, America, and various European countries. Information in Covid-19 news influences stock prices and trading volatility, but there is an asymmetric impact on price and trading behavior, related to the macro-economy of each country [14,15]. Furthermore, Table 5 presents the results of the statistical processing for the total test.
The model can be considered free of multicollinearity problems, with VIF < 10 and tolerance > 0.10. Using standardized coefficients, the Covid-19 coefficient is 0.344 and the total trading-frequency lag coefficient is 0.862, suggesting that the lagged trading frequency remains the dominant influence. The coefficients are reported for standardized variables, so the two different factors are placed on an equal footing. The standardized results show that the negative influence of the Covid-19 outbreak is growing and significant. The results also indicate that various news stories can be perceived as positive or negative in the capital market and affect it accordingly, which is in line with signaling theory and prospect theory. The Covid-19 outbreak is a negative signal, causing the market to correct downward. The speed of change in the Jakarta Composite Index (JCI) can be explained by prospect theory: investors often have to make decisions with incomplete information and tend to seek safety (risk aversion). As a result, various news stories are responded to quickly, even though the market may be inefficient. Similar findings show that the Covid-19 outbreak rapidly affected stock markets in various countries, for example in Europe [10]. Covid-19 information is transmitted quickly because of the large flow of news at both national and international scales; rumors or news can influence stock market activity and thereby the stock index [16].
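The multicollinearity diagnostics quoted above (VIF < 10, tolerance > 0.10) can be reproduced in the same illustrative setting; this is again a sketch with assumed file and column names, using the variance_inflation_factor helper from statsmodels, with tolerance taken as the reciprocal of the VIF.

```python
# Sketch of the multicollinearity check (illustrative data layout, not the authors' code).
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("idx_covid_daily.csv", parse_dates=["date"]).set_index("date")
df["trade_freq_lag1"] = df["trade_freq"].shift(1)
predictors = df[["covid_cases", "trade_freq_lag1"]].dropna()

for i, name in enumerate(predictors.columns):
    vif = variance_inflation_factor(predictors.values, i)
    print(f"{name}: VIF = {vif:.2f}, tolerance = {1.0 / vif:.3f}")
```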
Discussion
The news about Covid-19 prompted doubt and ambiguity among investors about the security of their investments. These doubts affect the volatility of capital market performance, reflected in the slump of the Jakarta Composite Index on the Indonesia Stock Exchange (IDX). Investor skepticism takes the form of negative sentiment: selling interest in shares exceeds buying interest, and, in the money market linked to the capital market, capital and foreign investment funds are withdrawn from the IDX (capital outflow). Such negative sentiment affects stock prices and ultimately the performance of the IDX in the form of a slump in the JCI. This paper aims to analyze the effect of news about Covid-19 on market sentiment, the effect of Covid-19 rumors on market performance, how strongly Covid-19 rumors affect market performance, and how strongly market sentiment during the Covid-19 period affected market performance.
The results are in line with previous research linking Covid-19 to capital markets using daily data from January 23 to March 13, 2020, which found that news of the number of Covid-19 deaths in Germany, the UK and France moved together with the FTSE, MIB, CAC40 and DAX indexes [10]. Investors turned to gold, crypto currencies and other derivative products; in effect, investors avoided risk as a result of the Covid-19 outbreak [10]. This indicates a relationship between Covid-19 news and stock market performance on each country's exchange. The impact of Covid-19 on the capital market can be described as continuous and self-reinforcing (a spillover impact). Distancing restrictions, lockdowns and various other policies reduce transportation, production levels, sales and ultimately company performance. As a result, exchange trading activities are also disrupted by restrictions on attendance and other activities. The impact of Covid-19 on the capital market can thus be categorized as indirect, and stronger than the direct effect of the restrictions on real-sector activities. This influence has been examined using variables for internal restrictions, monetary policy, government spending and the number of Covid-19 cases [12]. Various fiscal and monetary policies act as a stimulus to purchasing power, mobilizing business and economic activity; the better the pace of the economy, the better capital market activity and stock prices are assumed to be. This has been studied using variables for lockdown policies and government spending, with the share prices and level of economic activity of leading stocks in various countries as dependent variables [17; 18].
The results are also in line with research examining the impact of Covid-19 on the Hong Kong and American capital markets in April 2020 [11]. That study looked at the impact of Covid-19 on capital market returns in real time, predicting market returns in period t from the t-1 and t-2 periods on a daily basis. It found decreases (or increases) on the order of 4% to 11% on the following day. On this basis, it is expected that in the event of a rebound the recovery will be faster. It was also found that the stocks suffering the largest declines were those of capital-intensive companies that relied on leverage [19,20]; their losses were larger than those of other types of companies because of the measures taken to contain Covid-19 transmission. The findings also suggest follow-up research relating the decline in companies' stock market value to the unemployment rate; previous research on stock changes can be framed in terms of labor costs versus capital costs [11].
Studies of Covid-19 have been conducted by several researchers using several approaches, namely: (i) catalyst of contagion; (ii) media attention; (iii) spillover effects in financial markets; (iv) macroeconomic fundamentals, suggesting that Covid-19 is a black-swan shock to the economy and to financial and capital markets. Treating the impact of Covid-19 as a black swan gives financial management research a broad horizon of factors that can affect finances, in terms of new data, methods and aspects [9]. Covid-19 has a large impact on economic and business risk as an unpredictable event; the criteria of a black swan are that it appears as a surprise, has a major impact, and is rationalized only in hindsight [21].
Conclusions
The statistical analysis shows that information about Covid-19 has a negative effect on the JCI, as does the previous day's trading activity. This study empirically confirms that the Covid-19 outbreak affected the Jakarta Composite Index (JCI). In addition to these variables, market conditions were also found to affect the JCI. The evidence empirically demonstrates an effect of Covid-19 and of weakening daily transactions on the JCI. The findings show that the uncertainty of the JCI market is consistent with VUCA and with prospect theory. Theoretically, the findings also support prospect theory as a critique of utility theory, which assumes that everything can be ascertained and that agents are rational and risk averse; in reality this is not always the case, and in certain circumstances people are forced to make decisions amid uncertainty and cannot definitively calculate the probability of events. Decisions made under highly uncertain conditions tend to follow an S-curve: values increase for a time, then decrease, in alternating ups and downs. In other words, prospect theory can explain how this uncertain Covid-19 event plays out in the behavior of capital market transactions and how it influences the market composite index.
In this setting, uncertainty affects investors' decision-making behavior. That behavior accumulates into market behavior and is subsequently manifested in index changes, in accordance with the efficient market hypothesis. The contribution of this research to the study of financial market behavior is to show that the uncertainty and ambiguity faced by investors affect market behavior and market changes as measured by the index, all of which can be explained by prospect theory and the efficient market hypothesis.
For future research, it is recommended that the analysis be deepened with a more complete simultaneous model and more advanced methods than a simple linear regression. Models that could serve as developments or enhancements for future research are suggested in the Appendix. Suggested developments take the form of simultaneous-equation designs using more complete indicators, so that specific impacts on types of stock, types of risk, or forms of impact can be identified.
| 4,655 | 2021-04-01T00:00:00.000 | ["Business", "Economics"] |
Complete kinematical study of the 3 α breakup of the 16.11 MeV state in 12 C
The reaction 11 B + p has been used to populate the ( J π , T ) = (2 + , 1) state at an excitation energy of 16.11 MeV in 12 C, and the breakup of the state into three α particles has been studied in complete kinematics. A two-step breakup model which includes interference effects is found to provide the most accurate description of the experimental data. The branching ratio to the ground state of 8 Be is determined to be 5.1(5)% in agreement with previous findings, but more precise by a factor of two, while the decay to the first excited state in 8 Be is found to be dominated by d-wave emission.
Introduction
The breakup of the excited 12 C nucleus into three α particles has been studied since the days of Lord Rutherford, motivated by a desire to understand the breakup mechanism and gain new insights into the nuclear structure [1]. In the 1960s and 1970s it was demonstrated that the breakup primarily proceeds in a sequential manner, i.e., 12 C → α 1 + 8 Be followed by 8 Be → α 2 + α 3 . The sequential model was successfully applied to describe the breakup of several states in 12 C [2], but it failed in the case of the (J π , T ) = (2 + , 1) state at an excitation energy of 16.11 MeV. Initially, this led to the suggestion that the breakup of the 16.11 MeV state proceeds directly to the 3α final state [3], but it was later shown that the breakup can be described within a more sophisticated sequential model, which takes into account the interference due to Bose symmetry in the 3α final state [4,5,6,7,8].
The 3α breakup has gained renewed attention in the past decade, in part due to the advent of large-area segmented silicon detectors and fast multi-channel data acquisition systems, which have made it possible to collect improved experimental data. In particular, it has become possible to perform double- and triple-coincidence measurements with high efficiency and high resolution (energy and angle), allowing the breakup mechanism to be studied in far greater detail than previously possible. The renewed interest in the 3α breakup is also motivated by a broader interest in understanding the new multi-particle decay modes that are being discovered in exotic isotopes close to the driplines, e.g., two-proton radioactivity.
Modern detection techniques were first applied in the early 2000s to the breakup of the (1 + , 0) state at 12.71 MeV [9]. Using the β decay of 12 N as a means to populate the 12.71 MeV state, the breakup was measured in complete kinematics for the first time and was shown to be in quantitative agreement with a sequential model based on the R-matrix formalism [10]. More recently, the reaction 11 B( 3 He, d) has been used to investigate the breakups of the (2 − , 0) state at 11.83 MeV, the (1 + , 0) state at 12.71 MeV and the (4 − , 0) state at 13.35 MeV. Again, the same sequential model was found to provide the most accurate, though in this case not fully satisfactory, description of the breakups [11]. In the same experiment the breakup of the (0 + , 0) state at 7.65 MeV was shown to be primarily sequential [12] (see also Ref. [13]).
As argued in Ref. [14], the distinction between a sequential and a direct decay becomes ambiguous if the total decay energy is comparable to or smaller than the width of the intermediate state through which the sequential decay would proceed. In such cases it matters little which decay model one adopts. As long as Bose symmetry and spin-parity conservation are correctly incorporated into the models, the calculated 3α momentum distributions will not differ much, making it very difficult to distinguish between sequential- and direct-decay models based on a comparison to experimental data. For some states, such as the 12.71 MeV state, the constraints imposed by Bose symmetry and spin-parity conservation are particularly strong, leaving less room for the decay mechanism to influence the 3α final state. Indeed, fairly good descriptions of the breakup of the 12.71 MeV state have been obtained with rather different models [9,11], including the direct-decay model of Ref. [15] (known as the democratic model), the three-body model of Ref. [16] and the aforementioned sequential model of Ref. [10], which provides the most accurate description of the three.
Previous to this work the 3α breakup of the (2 + , 1) state at 16.11 MeV in 12 C had not been studied with a modern experimental setup. (See, however, Ref. [17] which reports on a measurement of the branching ratio for the sequential breakup through the ground state of 8 Be.) The most recent studies of the breakup of the 16.11 MeV state date back to the late 1960s [4,5] and early 1970s [6,7,8]. In the present work the 16.11 MeV state is populated via the p + 11 B reaction, and the 3α breakup is measured with a state-of-the-art detection system with the aim of obtaining a quantitative and accurate understanding of the breakup mechanism. Another aim of this work has been to measure the weak γ-decay branches of the 16.11 MeV state. Preliminary results on this aspect of the work have been published in Ref. [18]. The p + 11 B reaction is also of interest due to its potential use as the primary source of energy in an aneutronic fusion reactor [19]. This motivated a recent study of the 3α breakup of the (2 − , 0) state at 16.6 MeV by Stave et al. [20].
The paper is structured in the following way: Section 2 describes the breakup models which will be tested against the experimental data. Section 3 covers the experimental part, including a description of the setup and a discussion of the calibration procedures. Section 4 describes the data reduction and analysis. Section 5 presents the results, followed by a discussion of the results in Section 6. Finally, Section 7 concludes and provides an outlook.
Breakup models
Two conceptually different pictures of the breakup are tested in the present work: direct and sequential.
Direct breakup
For the direct picture we adopt the so-called democratic model of Ref. [15]. In this model the α-α interaction is assumed to play an insignificant role in the breakup, implying that the breakup proceeds without the formation of an intermediate two-body resonance. The breakup amplitude is calculated by performing an expansion in hyperspherical harmonics (eigenfunctions of the grand angular momentum operator of the three-body system), retaining only the lowest-order term permitted by symmetries. The amplitude is further symmetrised in the coordinates of the three identical α particles as dictated by Bose symmetry.
Sequential breakup
The sequential model takes the opposite position to the direct model. The α-α interaction is assumed to play a central role in the breakup by "locking up" two of the α particles in an intermediate two-body resonance. The breakup is modelled as a sequence of two two-body breakups, i.e., 12 C → α 1 + 8 Be followed by 8 Be → α 2 + α 3 , the only correlations between the two breakups being those due to the conservation of energy, momentum, angular momentum and parity. We shall refer to α 1 as the primary α particle and α 2 and α 3 as the secondary α particles. We consider breakups of the 16.11 MeV state (J π = 2 + ) through the narrow ground state (J π = 0 + ) and the broad first-excited state (J π = 2 + ) in 8 Be, which we shall refer to as the 8 Be(gs) and 8 Be(exc) channels, respectively. In the former case the orbital angular momenta allowed by spin-parity conservation are l = 2 in the first decay and l ′ = 0 in the second decay; in the latter case l = 0, 2, 4 and l ′ = 2.
Implementation of the sequential model is straightforward for the 8 Be(gs) channel, but requires special care for the 8 Be(exc) channel due to the large width of the first-excited state in 8 Be. Following the approach of Refs. [9,10], we employ the R-matrix theory [21], in which resonances are parametrised in terms of level energies and reduced widths, while penetration factors account for the energy-dependent probability of quantum tunneling through the Coulomb and angular-momentum barriers.
We begin by introducing some notation, where unprimed quantities refer to the first decay, 12 C → α 1 + 8 Be, and primed quantities refer to the second decay, 8 Be → α 2 + α 3 . Since we shall be assuming that a single orbital angular momentum dominates in the first decay, and since only a single orbital angular momentum is allowed in the second decay, we leave out the subscripts l and l ′ in what follows to simplify the notation. The partial decay widths are given by Γ = 2P(E)γ 2 , where E is the energy available in the first decay and E ′ = E 23 is the energy available in the second decay; the two are fixed by energy conservation in terms of E beam , the kinetic energy of the proton in the laboratory frame, and Q = 8.682 MeV, the Q-value of the 11 B(p, 3α) reaction.
Disregarding the overall orientation of the breakup, knowledge of the relative kinetic energy of the secondary α particles, E 23 , and the angle between the first and second breakup, θ 2 , is sufficient to fully specify the kinematics of the 3α final state.
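To make this bookkeeping explicit, the following minimal, non-relativistic sketch constructs the three α momenta from (E 23 , θ 2 ) in the way described above. It is an illustration only: the masses are rounded, the binding of 8 Be is neglected, the example decay energy is approximate, and it does not reproduce the calibrated event generator used in the actual analysis.

```python
import numpy as np

M_ALPHA = 3727.38          # MeV/c^2, alpha-particle mass (rounded)
M_BE8   = 2.0 * M_ALPHA    # MeV/c^2, 8Be mass; binding neglected in this sketch

def sequential_event(E_tot, E23, theta2):
    """Return the three alpha momenta (MeV/c) in the 12C rest frame for a
    sequential breakup 12C -> a1 + 8Be, 8Be -> a2 + a3, treated non-relativistically.

    E_tot  : total 3-alpha decay energy (MeV)
    E23    : relative kinetic energy of the secondary alphas (MeV)
    theta2 : angle of a2 relative to a1, in the 8Be rest frame (rad)
    """
    # first breakup: 12C -> a1 + 8Be, with available energy E1 = E_tot - E23
    E1  = E_tot - E23
    mu1 = M_ALPHA * M_BE8 / (M_ALPHA + M_BE8)
    p1  = np.sqrt(2.0 * mu1 * E1)          # magnitude of a1 (and 8Be) momentum
    p_alpha1 = np.array([0.0, 0.0,  p1])   # put a1 along +z
    p_be     = np.array([0.0, 0.0, -p1])
    v_be     = p_be / M_BE8                # 8Be velocity (units of c)

    # second breakup in the 8Be rest frame: back-to-back secondary alphas
    mu2   = 0.5 * M_ALPHA
    p_rel = np.sqrt(2.0 * mu2 * E23)       # each secondary alpha has |p| = p_rel
    direction = np.array([np.sin(theta2), 0.0, np.cos(theta2)])  # theta2 w.r.t. a1
    p2_rest =  p_rel * direction
    p3_rest = -p2_rest

    # Galilean boost of the secondaries with the 8Be velocity
    p_alpha2 = p2_rest + M_ALPHA * v_be
    p_alpha3 = p3_rest + M_ALPHA * v_be
    return p_alpha1, p_alpha2, p_alpha3

# Example: E_tot ~ 8.8 MeV (16.11 MeV state), E23 = 3 MeV, theta2 = 60 degrees
p1, p2, p3 = sequential_event(8.8, 3.0, np.radians(60.0))
print("total momentum (should be ~0):", p1 + p2 + p3)
```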
No symmetrisation
Neglecting Bose symmetry and assuming that a single orbital angular momentum dominates in the first decay, the E 23 dependence of the breakup probability follows from the R-matrix expression for the two-step decay. Since the reduced width in the first decay only enters as an overall multiplicative factor, not affecting the functional dependence, we arbitrarily fix it to γ 2 = 1 MeV. For the 2 + resonance in 8 Be we use the R-matrix parameters from Ref. [22], which assume a channel radius of 4.5 fm. Note that in Ref. [22] the level energy is given relative to the ground state, whereas here it is given relative to the 2α threshold, which is 92 keV lower in energy. We compute the channel radii as a = a 0 (A 1 1/3 + A 2 1/3 ), i.e. a 0 times the sum of the cube roots of the two mass numbers, with a 0 = 1.42 fm. (This gives a ′ = 4.5 fm, consistent with the channel radius adopted in Ref. [22].) Having assumed that a single orbital angular momentum dominates in the first decay, we can determine the θ 2 dependence of the breakup probability from theory [23].
(For the general case in which several orbital angular momenta contribute, the θ 2 dependence cannot be uniquely determined because the relative phase shifts are not known a priori.) Assuming that l = 2 dominates, one obtains the following angular distribution, W l=2 (θ 2 ) = 1.12 + 0.80 sin 2 (2θ 2 ). Here θ 2 is the angle of α 2 relative to α 1 , measured in the 8 Be rest frame. As we shall see, the assumption that l = 2 dominates is supported by the experimental data. Finally, we note that the angular distribution for the 8 Be(gs) channel is isotropic because the ground state has J = 0 and hence no directional memory.
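Both ingredients are simple to evaluate numerically. A quick check, assuming the cube-root channel-radius formula reconstructed above and the quoted W l=2 (θ 2 ), reproduces the 5.1 fm and 4.5 fm radii used in the text:

```python
import numpy as np

def channel_radius(A1, A2, a0=1.42):
    """Channel radius a = a0 * (A1^(1/3) + A2^(1/3)) in fm."""
    return a0 * (A1 ** (1 / 3) + A2 ** (1 / 3))

print(f"alpha + 8Be  : a  = {channel_radius(4, 8):.2f} fm")   # ~5.1 fm
print(f"alpha + alpha: a' = {channel_radius(4, 4):.2f} fm")   # ~4.5 fm

def W_l2(theta2):
    """Angular distribution of the secondary alphas for a d-wave primary alpha."""
    return 1.12 + 0.80 * np.sin(2.0 * theta2) ** 2

for deg in (0, 45, 90):
    print(f"W(l=2, theta2 = {deg:3d} deg) = {W_l2(np.radians(deg)):.2f}")
```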
Symmetrisation
To take into account Bose symmetry, the modified expression from Ref. [10] is used for the amplitude. (The same formula appears in Ref. [9], but with a wrong sign in the denominator.)
The factors √E 1 and √E 23 have been introduced to remove the two-body phase-space factors inherent in the penetrability factors. The breakup probability is obtained by symmetrising in the coordinates of the three α particles, then squaring and finally averaging over the initial spin directions. This result is then multiplied by the appropriate three-body phase-space factor. If the symmetrisation step is neglected, Eq. (2.3) is recovered. The symmetrisation step introduces interference effects in the 3α final state, the importance of which has been clearly demonstrated in the case of the 12.71 MeV state [9,10].
Coulomb repulsion
As discussed in Ref. [9] it is possible to incorporate a rough correction for the Coulomb repulsion between the primary and the secondary α particles into the sequential model. This correction turns out to be significant for the breakup of the 12.71 MeV state, where the primary α particle only travels a very short distance before the short-lived 8 Be nucleus breaks up. The correction is based on a greatly simplified picture of the breakup process, in which the 8 Be nucleus and the primary α particle move apart until they reach a certain separation, r 0 , at which point the 8 Be nucleus breaks up. In this picture, the primary α particle must first tunnel through the potential barrier of the α 1 - 8 Be system to r = r 0 , after which it must tunnel through the combined potential barrier of the α 1 -α 2 and α 1 -α 3 systems to r = ∞. Since the tunneling probabilities combine multiplicatively, the penetrability factor in Eq. (3), which enters implicitly through Γ = 2P(E)γ 2 , must be multiplied by additional penetrability factors for the α 1 -α 2 and α 1 -α 3 systems, where the "tilde" sign on these factors indicates that they are evaluated for the enlarged channel radius a = r 0 . For the present calculations, we adopt a = 10 fm and assume l 12 = l 13 = 2 for the orbital angular momenta of the α 1 -α 2 and α 1 -α 3 systems. A naïve estimate of the distance travelled by the primary α particle may be obtained by considering the asymptotic relative speed of the α 1 - 8 Be system, v ≈ 0.068c, and the mean lifetime of the first-excited state in 8 Be, τ ≈ 4.7 × 10 −22 s, yielding vτ ≈ 9.6 fm. Assuming an initial separation equal to the channel radius of a = 5.1 fm, this gives the rough estimate r 0 ≈ 15 fm.
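The naïve travel-distance estimate can be checked from the quoted numbers alone, taking the lifetime as ħ/Γ with the approximate width Γ ≈ 1.5 MeV of the first-excited state in 8 Be and the quoted relative speed; this is an illustrative back-of-the-envelope check, not part of the analysis.

```python
# Quick check of the naive travel-distance estimate quoted above.
HBAR_MEV_S = 6.582e-22   # MeV*s
C_FM_PER_S = 2.998e23    # speed of light in fm/s

gamma_8be_exc = 1.5                       # MeV, approximate width of 8Be(exc)
tau = HBAR_MEV_S / gamma_8be_exc          # mean lifetime, ~4-5e-22 s
v   = 0.068 * C_FM_PER_S                  # asymptotic alpha1-8Be relative speed

print(f"tau   = {tau:.2e} s")
print(f"v*tau = {v * tau:.1f} fm")        # ~9-10 fm, comparable to the adopted radius
```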
Possible extensions of the sequential model
Below, we outline some possible extensions of the sequential model which, however, are beyond the scope of the present study.
Several l values Eq. (3) is easily generalised to the case of several l values by introducing a second summation running over all orbital angular momenta allowed by spin-parity conservation (l = 0, 2, 4). Since, however, neither the relative magnitude nor the relative sign (+ or −) of the reduced widths, γ l , is known, these would have to be treated as free parameters, to be constrained by fitting the experimental data.
Higher-lying resonances in 8 Be In addition to the two breakup channels considered here, the 16.11 MeV state could also decay via the low-energy tail of the very broad (Γ ≈ 3.5 MeV) second-excited 4 + state in 8 Be at 11.35 MeV. This channel can easily be included in the formalism, but only at the expense of introducing more free parameters. Given the good fit to the experimental data achieved with the existing model, the motivation for including the extra channel is limited.
Formation channel Some degree of polarisation of the 12 C resonance formed in the p + 11 B reaction is to be expected. This could potentially distort the experimental Dalitz plot (cf. Section 4.4) because the detection system does not cover all of 4π. An extended formalism, which takes into account the p+ 11 B formation channel, has been developed [24], but introduces additional free parameters such as proton-decay widths, which would have to be constrained by fitting the experimental data. Table 1 gives an overview of the four models that are being tested in the present study. M1 is a direct model based on the democratic-decay formalism of Ref. [15] while M2-M4 are three variants of the sequential model. M2 is the unsymmetrised model based on Eq. () and Eq. () and assumes l = 2. M3 and M4 are symmetrised models based on Eq. (3) and Eq. (4), which also include the rough correction given in Eq. (5) for the Coulomb repulsion between the primary and the secondary α particles. M3 and M4 differ in that the former assumes l = 0 while the latter assumes l = 2.
Experimental procedure
The experiment was performed at the 400 keV Van de Graaff accelerator at Aarhus University. The 16.11 MeV state in 12 C was populated through the p + 11 B reaction, using protons accelerated to energies of 167-170 keV. At the target position the typical beam intensity was 1 nA, while the transverse size of the beam was approximately 2 mm × 2 mm, as defined by a set of horizontal and vertical slits. The target consisted of natural boron on a 4 µg/cm 2 carbon backing. Several such targets were used in the experiment, with the boron thickness ranging from 10 to 15 µg/cm 2 . The reaction chamber was pumped by an oil diffusion pump. The experiment was conducted during a period of 6 months, wherein several changes were made to the setup, as described below. A detailed account of the experiment is given in Ref. [25].
Detection system
The detection system consisted of two double-sided silicon strip detectors (DSSSD) of the W1 type [26] with 16 × 16 strips and an active area of 5 cm × 5 cm. The detectors used in the present experiment were both 60 µm thick, enough to fully stop the most energetic α particles from the p + 11 B reaction. One detector had a deadlayer of 200 nm Si equivalent (DSSSD 1), the other 700 nm (DSSSD 2). For the largest part of the experiment the detectors were positioned as shown in Fig. 1, at a distance of 2-3 cm from the target (see Table 2 for the precise positions), providing a combined solid-angle coverage of 35% of 4π, with DSSSD 1 covering the centre-of-mass angles 60°-150° and DSSSD 2 covering 35°-120°. The intrinsic energy resolution of the detectors was 40 keV (FWHM).
The setup did not allow us to discriminate between different types of particles. However, since the 3α channel is the only open three-body channel, the 3α events could readily be identified in the off-line data analysis as those having a multiplicity of 3. Random coincidences were identified and discarded by imposing additional cuts as discussed in detail in Section 4.1, providing us with an efficient and highly selective method to identify the events of interest.
The electronic signals were read out using charge-sensitive Mesytec MPR-32 preamplifiers connected to Mesytec STM16+ shaping amplifiers and analogue-to-digital-converter (ADC) modules of the CAEN 785 type. The amplification gain was stable throughout the experiment. Fast and delayed time signals, generated by the Mesytec STM16+ modules, were fed to a time-to-digital converter (TDC) of the CAEN 1190 type, providing time stamps with a resolution of about 100 ns. The thresholds of the data acquisition system were set as low as possible above the electronic noise level. For all electronic channels, the trigger efficiency was found to rise gently as a function of energy, increasing from 0% to 100% within an interval of 200 to 400 keV, depending on the channel. Trigger thresholds, defined as the energy at which the efficiency reaches 50%, ranged from 100 to 300 keV for DSSSD 1 and 200 to 500 keV for DSSSD 2. Low energy cutoffs in each ADC channel ranged from 10 to 100 keV for DSSSD 1 and 100 to 200 keV for DSSSD 2. Low thresholds are essential to obtain complete kinematic information for events with low-energy α particles. The detection efficiency for low-energy α particles was further enhanced by placing the target at an angle relative to the beam axis, so that the α particles reaching DSSSD 2 (which has the thickest deadlayer) had to traverse the least possible amount of target material.
Data sets
In the course of the experiment several optimisations were made to the setup. The detectors were turned by a small angle and moved slightly closer to the target in order to achieve a better compromise between the elastic scattering rate and the solid-angle coverage. Small changes in the detection geometry, arising due to slight changes in the beam properties, were continuously monitored. The collected data has been divided into 10 data sets, each characterised by slightly different experimental conditions, as detailed in Table 2.
Calibration
Below, we describe the procedures adopted to calibrate the energy response and the geometry of the setup. Precise and accurate calibration is particularly important for the determination of the 3α detection efficiency, which is highly sensitive to energy losses and threshold effects.
Geometry calibration
The geometry is defined by specifying the position of the detectors relative to the beam spot and their orientation relative to the beam axis. By analyzing the hit pattern from a radioactive source placed at the target position, which emits α particles isotropically, the geometry can be deduced with high precision. The geometry thus obtained is, however, not entirely accurate because the source cannot be positioned exactly at the beam spot. This results in a distortion of the extracted kinematic curves, most easily seen in the case of the 8 Be(gs) breakup channel, which gives rise to an α-particle group with a well-defined centre-of-mass energy of 5.8 MeV. By adjusting the geometry until the centre-of-mass energy no longer exhibits any angular dependence, we obtain a more accurate determination of the geometry, which differs by no more than 2 mm in all three spatial directions compared to the geometry deduced from the source measurement.
Energy calibration
The six most intense α-particle lines from the 228 Th decay chain were used for the energy calibration, providing calibration points between 5.4 and 8.8 MeV. Calibrations were made at regular intervals during the experiment; only small shifts of < 0.2% were observed. SRIM range tables [27] were used to correct for energy losses in the source and the detector deadlayers, taking into account the varying effective thickness due to the angle of incidence and assuming a point-like source. Corrections were also made for the non-ionizing energy loss [28] in the active detector volume, i.e., the energy loss that does not contribute to the measured signal. The source thickness was determined to be 100(4) nm carbon equivalent by rotating the source relative to the detector while monitoring the rate of change of pulse height with angle. The thicknesses of the detector deadlayers were determined by studying the variation in pulse height across individual strips due to the changing effective thickness. Having corrected for the above effects, a linear fit was made to the calibration points, giving slope and offset values with statistical errors of 1 × 10 −3 keV/channel and 2 keV, respectively.
Temporal variations
In the course of the experiment, the energy calibration was seen to vary substantially. The variations had a recurring structure: During measurements a gradual, downward shift was observed, but when measurements were interrupted to vent the chamber, the original calibration was recovered. The largest shift observed amounted to a 90 keV decrease in the 3α total energy. The shift is most significant for low-energy α particles, suggesting that the cause is energy loss in a material which is gradually adsorbed on the target. The shift in energy calibration was found to be correlated with a gradual decline in the 3α detection efficiency, supporting the above conclusion. The α-source measurements made at regular intervals did not reveal any significant changes in the calibration, which rules out adsorption on the detector surfaces. Thus, we favour the explanation that the adsorption occurs mainly on the target. The adsorbed material is most likely hydrocarbons originating from the oil diffusion pump. Similar effects have been observed in other experiments employing similar pumps, see e.g. Ref. [29]. For each measurement we translate the observed energy shift into an equivalent thickness of adsorbed carbon. The values thus obtained range from 10 to 30 µg/cm 2 . To keep the analysis tractable, we do not take into account the gradual nature of the adsorption process when we determine the detection efficiencies. Instead we assume a constant thickness equal to half of the maximal thickness. The 3α detection efficiency is, as noted above, significantly influenced by the extra energy loss in the target. This dependence is not surprising since for the 16.11 MeV state, secondary α particles are emitted with energies as low as 40 keV, far below the detection thresholds of our setup.
Data analysis
In this section we discuss the various cuts applied to the experimental data in the off-line analysis. We also discuss how Monte Carlo simulations are used to model experimental effects and determine detection efficiencies, and finally we introduce the Dalitz-plot analysis technique.
Data reduction
The data reduction involves several cuts designed to remove random coincidences, i.e., events in which one or two α particles from a reaction in the target are recorded in coincidence with a spurious signal due to electronic noise, an elastically scattered proton, or another α particle originating from a separate, but nearly simultaneous, reaction in the target. First, we use the TDC information to narrow the coincidence window from 2.5 µs (the width of the ADC window) to 100 ns, thereby reducing the number of random coincidences by approximately a factor of 25. Second, we require the energies measured on the front and back sides of the detectors to match within 150 keV, while allowing for the possibility that two particles may hit the same strip, whereby their energy is added up (summing), and the possibility that a particle may hit an interstrip region in such a way that its energy is shared between the two adjacent strips (sharing). It may be noted that summing occurs more frequently for the 8 Be(gs) channel than the 8 Be(exc) channel due to the small relative energy of the secondary α particles in the former channel.
We define the multiplicity of an event as the number of particles in that event which survive the above cuts. For those events which have a multiplicity of two, we use momentum conservation to reconstruct the momentum of the unobserved α particle. For those events which have a multiplicity of three, we can apply additional cuts to further clean the data. First, we require the total momentum to be conserved within the experimental resolution. The effect of this cut is shown in Fig. 2. Panel B shows the total momentum versus the excitation energy in 12 C, reconstructed from the energies of the three α particles. Panel C shows the projection onto the excitation-energy axis without any cut imposed on the total momentum, while Panel A shows the projection obtained when we require the magnitude of the total momentum of the three α particles in the centre-of-mass frame to be less than 50 MeV/c, as indicated by the dotted (red) line. In the data analysis, separate cuts are also imposed on each of the momentum components. Furthermore, we require the relative angles of the three α particles to add up to 360°, and we require the breakup to occur in a plane. For both of these cuts a margin of 10° is allowed.
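A schematic version of these multiplicity-three cuts, using the numerical margins quoted above, might look as follows; the event representation and the function name are illustrative, not the actual analysis code.

```python
import numpy as np

def passes_3alpha_cuts(p, p_cut=50.0, angle_margin_deg=10.0):
    """Apply the multiplicity-three cuts described in the text to one event.

    p : (3, 3) array of alpha momenta (MeV/c) in the centre-of-mass frame.
    """
    # total momentum conserved within the experimental resolution
    if np.linalg.norm(p.sum(axis=0)) > p_cut:
        return False

    def angle(a, b):
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # relative angles of the three alphas must add up to ~360 degrees
    angle_sum = angle(p[0], p[1]) + angle(p[1], p[2]) + angle(p[2], p[0])
    if abs(angle_sum - 360.0) > angle_margin_deg:
        return False

    # breakup must occur in a plane: the angle between p[0] and the normal of
    # the (p[1], p[2]) plane should be within the margin of 90 degrees
    normal = np.cross(p[1], p[2])
    out_of_plane = 90.0 - angle(p[0], normal)
    return abs(out_of_plane) < angle_margin_deg

# Example with a perfectly momentum-balanced, planar event:
event = np.array([[100.0, 0.0, 0.0], [-50.0, 80.0, 0.0], [-50.0, -80.0, 0.0]])
print(passes_3alpha_cuts(event))   # True
```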
Identification of the breakup channel
The narrow width of the 8 Be ground state, combined with a high experimental resolution, makes it possible to cleanly identify the 8 Be(gs) channel on an event-by-event basis by evaluating the relative energy, E ij = |p i − p j | 2 /(4M α ), of the three possible pairs of α particles, where p i and p j are the α-particle momenta and M α is the α-particle mass. If any pair has a relative energy consistent with the 8 Be ground-state energy of 92 keV within the experimental resolution, we assign the event to the 8 Be(gs) channel. Otherwise, we assign the event to the 8 Be(exc) channel, though this serves merely as a convenient label until we have established whether the sequential model provides an accurate description of the breakups that do not proceed through the ground state of 8 Be. Fig. 3 shows the α-α relative-energy spectrum for multiplicity-two and three events, clearly displaying the ground-state peak at the expected energy.
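A sketch of this event-by-event assignment, using the equal-mass pair relative energy written above, is given below; the resolution window is an illustrative number, not the value used in the analysis.

```python
import numpy as np

M_ALPHA = 3727.38        # MeV/c^2
E_BE_GS = 0.092          # MeV, 8Be ground-state energy above the 2-alpha threshold

def pair_relative_energies(p):
    """Relative kinetic energy of each alpha pair, E_ij = |p_i - p_j|^2 / (4 M_alpha)."""
    pairs = [(0, 1), (0, 2), (1, 2)]
    return [np.sum((p[i] - p[j]) ** 2) / (4.0 * M_ALPHA) for i, j in pairs]

def breakup_channel(p, resolution=0.030):
    """Assign one event to the 8Be(gs) or 8Be(exc) channel (window in MeV is illustrative)."""
    e_rel = pair_relative_energies(p)
    if any(abs(e - E_BE_GS) < resolution for e in e_rel):
        return "8Be(gs)"
    return "8Be(exc)"
```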
Experimental acceptance
The α-particle spectrum measured in DSSSD 1 is shown by the filled histogram in Fig. 4, including both multiplicity-two and multiplicity-three events. The broad distribution peaking between 3 and 4 MeV and the narrow peak just below 6 MeV are the most significant structures in this spectrum. The former is the combined energy spectrum of all three α particles in the 8 Be(exc) channel, while the latter is the energy spectrum of the primary α particle in the 8 Be(gs) channel. The multiplicity-two and multiplicity-three spectra are shown separately by the solid (black) histogram and the dashed (red) histogram. The intensity of the narrow peak just below 6 MeV is similar in the two spectra, showing that for the 8 Be(gs) channel the probability of detecting all three α particles is similar to that of detecting just two α particles. For the 8 Be(exc) channel, on the other hand, the probability of detecting all three α particles is seen to be significantly reduced compared to the probability of detecting just two α particles. This difference is easily understood: For the 8 Be(gs) channel the energy in the secondary breakup is small compared to the energy in the primary breakup, and hence the trajectories of the secondary α particles nearly coincide, while for the 8 Be(exc) channel the energies are comparable so the trajectories of the secondary α particles will often be very different, making the coincident detection of both secondary α particles unlikely.
A Monte Carlo simulation program [30] is used to determine the distortion of the energy spectrum resulting from the limited angular coverage of the detector setup, as well as other experimental effects such as the finite beam-spot size, the energy loss in the target and the detector deadlayers, the finite granularity and intrinsic energy resolution of the detectors and the detection thresholds. The simulation program takes as input the 3α final-state momentum distribution determined by the breakup models discussed in Section 2. The output of the simulation is a data file with the same structure as the data collected in the experiment. This allows us to pass the simulated data through the same analysis procedure that we apply to the experimental data, thus accounting for any bias introduced by the cuts and gates applied in the analysis procedure.
The Dalitz plot
Assuming an unpolarised initial state, the measurement of two α-particle energies, E 1 and E 2 , gives complete kinematic information. Thus, a two-dimensional energy plot, a so-called Dalitz plot [31], provides a useful way to visualize the 3α final state without loss of information. For cases such as the 3α system, in which the masses are identical, it is advantageous to use a special version of the Dalitz plot, in which the quantities plotted on the abscissa (X) and the ordinate (Y) are constructed from the α-particle energies in the centre-of-mass frame, normalised to the total decay energy. Thus, we obtain a representation that exhibits sixfold rotational symmetry around (X, Y ) = (0, 0) in which the kinematically allowed region is a circle with radius 1/3. Since the phase-space density is constant within the kinematically allowed region, any deviation from constant density is a manifestation of symmetries in the 3α system or dynamical correlations in the breakup process. The Dalitz-plot distribution from a sequential breakup is shown schematically in Fig. 5. The distribution is characterised by a band structure, with each breakup channel giving rise to its own set of bands; for the 8 Be(gs) channel these are the narrow bands near the circumference of the plot. In the full R-matrix description, the intensity distribution across the bands reflects the profile of the intermediate two-body resonance, modified by the penetration factors in the entrance and exit channels, while the intensity distribution along the bands reflects the angular-correlation function, as seen from the color scale in Fig. 5. Finally, we note that interference effects due to Bose symmetry are expected where the bands overlap which, as seen in Fig. 5, only occurs for the 8 Be(exc) channel.
Dalitz distribution of the 8 Be(exc) channel
The Dalitz distribution of the 16.11 MeV state measured in the present experiment is shown in Fig. 6, separated into multiplicity-two events (a) and multiplicity-three events (b). As discussed in Section 4.3, the difference between the two distributions is entirely an effect of the experimental acceptance. The lack of events near the centre of the multiplicity-three distribution reflects the fact that for the 8 Be(exc) channel the probability of detecting all three α particles is significantly reduced compared to the probability of detecting just two α particles. In contrast, no such suppression is observed for the 8 Be(gs) channel (the three narrow bands near the circumference of the Dalitz plot), reflecting the fact that for the 8 Be(gs) channel the probability of detecting all three α particles is similar to that of detecting just two α particles.
In the following we focus on the 8 Be(exc) channel, which is much richer in physics than the 8 Be(gs) channel due to the large width and non-zero spin of the first-excited state in 8 Be. Multiplicity-two Dalitz distributions generated from simulations of the 8 Be(exc) channel are shown in Fig. 7. The different breakup models (M1-M4) give noticeably different distributions. By comparing to the measured distribution, shown in Fig. 6 (a), we conclude that M4 provides the most accurate description of the breakup. The democratic model (M1) fails altogether at reproducing the triangular shape of the measured distribution, whereas the sequential models (M2-M4) all reproduce it to varying degrees. Among the sequential models, M3, which assumes an s-wave (l = 0) primary α particle, is the least consistent with the measured distribution, while M2 and M4 both come quite close, showing that the breakup is dominated by a d-wave (l = 2) primary α particle. M4, which includes symmetrisation, is seen to fill out the inner region in better agreement with the measured distribution than M2, which does not include symmetrisation. Thus, the effect of the symmetrisation is to cause constructive interference at the centre of the triangle and destructive interference on the outside, resulting in sharper edges and a more uniform intensity distribution within the triangle.
To facilitate a quantitative comparison of the simulated and measured data, we consider three different projections of the Dalitz plot, designed to highlight different aspects of the two-dimensional distribution. The projected coordinates ρ, ξ and η are defined as in Ref. [32], where we have re-ordered the α-particle energies such that ǫ i < ǫ j < ǫ k . The projections thus obtained are shown in Fig. 8. In accordance with our previous conclusion, M2 and M4, which both assume l = 2, give the most accurate description of the experimental data. A close comparison of M3 and M4, which only differ by the extra barrier-penetrability factors included in M4, reveals that M3 gives a slightly better description of the η projection, while M4 gives the best description of the ρ and ξ projections.
Branching ratio of the 8 Be(gs) channel
In order to extract a precise and accurate value for the branching ratio of the 8 Be(gs) channel, precise and accurate knowledge of the coincidence detection efficiency for both the 8 Be(gs) and the 8 Be(exc) channel is necessary. We use our Monte Carlo simulation program to determine the detection efficiencies for both channels. For the 8 Be(exc) channel we adopt the breakup model M4, since it was found to give the best fit to the measured Dalitz distribution. Detection efficiencies are determined separately for each of the 10 data sets listed in Table 2. The multiplicity-three detection efficiencies thus obtained range from 11 to 15% and from 0.2 to 0.8% for the 8 Be(gs) and 8 Be(exc) channels, respectively. The corresponding efficiencies for multiplicity two range from 23 to 30% and from 17 to 29%. Correcting for the efficiencies, we determine the branching ratio of the 8 Be(gs) channel to be 5.4(1.1)% using multiplicity-two data and 5.1(5)% using multiplicity-three data. The two values are mutually consistent, and furthermore they are consistent with the most recent literature value of 5.8(9)% [17], with our multiplicity-three value being more precise by almost a factor of 2. Note that these values do not include the contribution due to the ghost of the 8 Be ground state [33], which was found to be about 20% in Ref. [17].
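The efficiency correction amounts to scaling the observed yields in each channel before forming the ratio. A small sketch is given below; the event counts are purely illustrative, and the efficiencies are merely taken from within the ranges quoted above.

```python
def branching_ratio_gs(n_gs, n_exc, eff_gs, eff_exc):
    """Efficiency-corrected branching ratio to the 8Be ground state."""
    corrected_gs  = n_gs  / eff_gs
    corrected_exc = n_exc / eff_exc
    return corrected_gs / (corrected_gs + corrected_exc)

# Illustrative multiplicity-three numbers only (not the measured yields);
# efficiencies chosen from the quoted ranges.
br = branching_ratio_gs(n_gs=2.0e4, n_exc=1.5e4, eff_gs=0.13, eff_exc=0.005)
print(f"BR(8Be gs) = {100 * br:.1f} %")
```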
Discussion
In Section 5.1 we showed that a sequential breakup model, which includes Bose symmetry and a rough correction for final-state Coulomb repulsion, gives a reasonable fit to the experimental data. In contrast, the democratic, direct-decay model was found to give a poor fit to the experimental data. The simple picture of a stepwise breakup thus appears to provide a fairly accurate description of the breakup of the 16.11 MeV state in 12 C. Our ability to clearly discriminate between the two breakup mechanisms hinges on the fact that the total decay energy (E = 8.8 MeV) is significantly larger than the width of the first-excited state in 8 Be (Γ ′ = 1.5 MeV). For lower-lying states in 12 C, the distinction is much less clear [14].
Our observation that d-wave emission dominates in the first decay, 12 C → α 1 + 8 Be, is in accordance with the observations of Refs. [5,6,7,8]. It is remarkable that d-wave emission is so strongly favoured over s-wave emission, given that both decays occur above the barrier (the mean decay energy is 5.8 MeV, while the barrier heights for the s- and d-wave channels are 2.2 MeV and 4.0 MeV, respectively) and hence neither is inhibited by barrier penetration. A similar observation has been made for the 2 − state at 16.57 MeV, where f-wave (l = 3) is favoured over p-wave (l = 1) [5,20].
The small (5%) branching ratio of the 8 Be(gs) channel is another surprising feature of the breakup of the 16.11 MeV state. Considering only barrier penetrability factors, one would expect a branching ratio of 60% for the 8 Be(gs) channel with the 8 Be(exc) channel accounting for the remaining 40%.
Given that barrier penetration cannot explain the d-wave dominance or the slowness of the ground-state transition, we conclude that these anomalies are caused by the structure of the 16.11 MeV state, in particular its overlap with the first-excited state in 8 Be. This naturally leads to the question of how the 16.11 MeV state can decay to three α particles in the first place, considering that it belongs to an isospin triplet (T = 1). The α decay must occur through admixtures of one or several nearby (J π , T ) = (2 + , 0) states. The bound (2 + , 0) state at 4.44 MeV and the giant quadrupole resonance around 26 MeV have previously been suggested as candidates [34], but in recent years evidence has been found for several, hitherto unknown, low-lying (2 + , 0) states in 12 C [35,36,37,38], providing additional candidates. It would be interesting to study the isospin mixing between these states and the 16.11 MeV state with modern microscopic cluster models.
Conclusions and outlook
The present high-statistics measurement of the 3α breakup of the (J π , T ) = (2 + , 1) state at 16.11 MeV in 12 C provides the most accurate understanding of the decay mechanism to date. A sequential model, which assumes a stepwise decay through the two lowest-lying resonances in 8 Be, is found to provide a rather accurate description of the breakup. Quantitative agreement with the experimental data is only obtained if Bose symmetry is included in the model. The agreement is further improved, though only slightly so, by including a rough correction for final-state Coulomb repulsion. In the end very good agreement is obtained though small systematic deviations remain.
The branching ratio to the ground state of 8 Be is determined to be 5.1(5)% in good agreement with previous findings, but more precise by a factor of two, and the decay to the first-excited state in 8 Be is found to be dominated by d-wave emission, also in agreement with previous findings. It is conjectured that these non-intuitive properties of the breakup are a consequence of the structure of the 16.11 MeV state, or more precisely, the structure of one or several (2 + , 0) states that are mixed into the 16.11 MeV state, enabling the decay into three α particles. The experimental and analytical methods used here to investigate the breakup of the 16.11 MeV state can be applied directly to other resonances in the p + 11 B reaction, e.g., the resonance associated with the (2 − , 0) state at 16.6 MeV, the breakup of which has recently been studied with a somewhat simpler detector set-up and analysis method [20].
| 9,587.2 | 2016-04-05T00:00:00.000 | ["Physics"] |