Enhanced Cryptography by Multiple Chaotic Dynamics

A potential security vulnerability of embedding compression in chaos-based cryptography is studied, and a scheme for improving its security is proposed. This correspondence considers the use of multiple chaotic dynamics, in which the chaotic trajectory is driven by both the plaintext sequence and the initial values of a chaotic map. The chaotic trajectory used for encryption is therefore never reused for different plaintexts, which makes the scheme naturally resistant to chosen-plaintext and ciphertext-only attacks. Its security is justified by the key space, the key sensitivity, and randomness tests of the generated number sequences. The results show that the security of the proposed scheme is stronger than that of the latest algorithm, especially in resisting chosen-plaintext attacks, while its performance is not sacrificed.

Introduction

Data compression and encryption are becoming more and more important in multimedia communication. In order to improve both the performance and the security of multimedia applications, it is worthwhile to combine compression and encryption in a unified process [1-4]. In comparison with the classical separate compression-encryption schemes, the unified scheme is more secure and effective [5]. Meanwhile, in the AES system, zero padding can degrade coding efficiency because of the block nature of AES. Another classical approach to simultaneous compression and encryption is to combine a traditional entropy coder with a stream cipher, for example RC4. Unfortunately, inappropriate integration with the initialization vector of RC4 leads to severe security vulnerability [5]. In order to improve performance, two distinct research directions have been studied recently. One is embedding key-based confusion and diffusion characteristics in existing compression algorithms such as entropy coding in multimedia systems [3, 16, 17].

Figure 1: Structure of the proposed scheme.

First, plaintext blocks with high probability are encrypted by searching in a lookup table; this step is named the block cipher, or search model. After that, all the ciphertext, as well as the plaintext symbols with low probability, is masked by a pseudorandom bitstream as in a stream cipher; this step is named the mask model.

Here, a vulnerability of the mask model is presented. The random number sequence used in the mask model is generated by a chaotic trajectory, as given in (2.1). The initial value x0 and the control coefficient b can be considered the pseudorandom number seed, and the pseudorandom number sequence m is the real key of this model. In a chosen-plaintext attack, the attacker requests the ciphertext of P1 = {s1, s1, s1, ...}. The lookup table then contains only the single symbol s1, so the block cipher needs exactly one iteration per block, and its output is composed entirely of the number 1.
Now the attacker knows the output of the block cipher and the ciphertext exactly, and can therefore obtain the mask sequence without any knowledge of the pseudorandom number seed. Even though the scheme proposed in [2] is a hybrid cryptosystem, once the mask sequence has been obtained the attacker can recover part of the ciphertext. Furthermore, the cryptosystem then degenerates into a single block-cipher operation, which has already been found vulnerable [7].

Basic Principle

In the proposed scheme, the time series used in the cryptosystem is generated by two chaotic systems, as shown in Figure 1. The time series could also be a fractal time series, which is random as reported in [18-20]; reference [21] discusses random data generation of the type discussed in [22]. For the sake of simplicity, only chaos-based time series are studied in this paper. The first system is an arbitrary chaotic map, stated in (3.1); the second, given by (3.2), is also a chaotic map whose trajectory is different from that of the first one. There are two constraints. First, if the same type of chaotic map is adopted for both, the control coefficients must be unequal. Second, the phase spaces of the two chaotic maps must cover the same range.

Here the searching model, that is, the first process of the proposed scheme, is presented. The initial condition is arbitrarily assigned as X0. First, the sender performs k1 iterations of (3.1) until the chaotic state X_k1 falls in the phase-space partition mapped to the first plaintext block. After the first symbol has been encrypted, X_k1 is selected as the initial state of the second chaotic map and used for encrypting the second plaintext block: the sender performs k2 iterations of (3.2) until Y_k2 falls in the phase-space partition of the second plaintext block. Then Y_k2 is selected as the initial state of (3.1) in the second round, and the third plaintext block is encrypted with (3.1). These operations repeat until all plaintext blocks have been encrypted. The ciphertext is the collection of the numbers of iterations {k1, k2, k3, ..., km} obtained in each round. Therefore, each plaintext block is encrypted from a new initial state that is related to the previous plaintext block.

Chaotic Maps for the Proposed Scheme

To illustrate the proposed scheme, the logistic map, a typical chaotic system widely used in cryptosystems [1, 2, 6], is used as the chaotic map, as shown in (3.3) and (3.4). The control coefficients b0 and b1 are unequal, and both lie in the range (3.9, 4) so as to avoid the nonchaotic regions, according to [23]. To examine the properties of the multiple chaotic dynamics, the largest Lyapunov exponent of the discrete time series is computed by the method described in [24]. The parameters b0 and b1 of the chaotic maps are arbitrarily selected from the range (3.6, 4), and the iteration counts of the two chaotic maps are randomly selected. The largest Lyapunov exponent was tested thousands of times, and all of the results are greater than 0. This means that the system is chaotic in all of these situations, which assures chaotic behaviour during the encryption and decryption processes.
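As an illustration of the searching model and the chaos check just described, the following minimal Python sketch (not the authors' C implementation) encrypts each symbol by counting logistic-map iterations and hands the final state to the other map for the next block. For simplicity it maps each of 256 symbols to a single equal-width partition, whereas the scheme maps more probable symbols to several partitions and caps the search at 15 iterations; the parameter values are the ones quoted later in the simulation section. It also includes the standard analytic estimate of the logistic map's largest Lyapunov exponent (the mean of ln|b(1 - 2x)| along the trajectory) as a quick chaos check.

```python
import math
import random

def logistic(x, b):
    """One iteration of the logistic map x -> b*x*(1-x)."""
    return b * x * (1.0 - x)

def partition_of(x, num_symbols=256):
    """Index of the equal-width phase-space partition containing x in (0, 1)."""
    return min(int(x * num_symbols), num_symbols - 1)

def encrypt_search(plaintext, x0, b0, b1, num_symbols=256):
    """Searching model with two alternating logistic maps (sketch only).

    Each block is encrypted by iterating the current map until the state lands
    in the partition mapped to the plaintext symbol; the iteration count k is
    the ciphertext, and the final state seeds the other map for the next block.
    """
    x = x0
    params = (b0, b1)
    ciphertext = []
    for j, symbol in enumerate(plaintext):
        b = params[j % 2]          # maps (3.3) and (3.4) are used rotationally
        k = 0
        while partition_of(x, num_symbols) != symbol or k == 0:
            x = logistic(x, b)
            k += 1
        ciphertext.append(k)
    return ciphertext

def largest_lyapunov(x0, b, n=10000):
    """Estimate the largest Lyapunov exponent of the logistic map."""
    x, acc = x0, 0.0
    for _ in range(n):
        x = logistic(x, b)
        acc += math.log(abs(b * (1.0 - 2.0 * x)))
    return acc / n

if __name__ == "__main__":
    msg = [random.randrange(256) for _ in range(8)]
    print(encrypt_search(msg, x0=0.3388, b0=3.999999991, b1=3.999991))
    print(largest_lyapunov(0.3388, 3.999999991))   # positive => chaotic
```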
Encryption Procedures

Similar to [2], the encryption procedure is a hybrid one: more probable symbols are encrypted by searching in the lookup table, while less probable ones are masked by a pseudorandom bitstream as in a stream cipher. In the searching model, the phase space of the logistic map is divided into a number of equal-width partitions, and each partition is associated with a possible plaintext symbol. The greater the probability of occurrence of a symbol, the more phase-space partitions it maps to. A secret chaotic trajectory produced by the multiple chaotic dynamics is used to search for the partition mapped to the plaintext symbol. The number of iterations of the logistic map needed to find each plaintext symbol is the length of the searching trajectory, and this count is taken as the ciphertext [6]. Meanwhile, a pseudorandom sequence is generated from the secret chaotic trajectory and will be used in the mask model. After the searching model has been processed, Huffman coding is performed on all the collected iteration counts. In the last step, the intermediate sequence and the less probable plaintext sequence are masked by the binary mask sequence generated in the searching model.

Without loss of generality, the plaintext is assumed to be a sequence of symbols. The encryption procedure is as follows.

Step 1. Scan the whole plaintext sequence to find the number of occurrences of each plaintext symbol. Then select the top symbols and construct the mapping table according to their probabilities.

Step 2. Encrypt each of the more probable plaintext symbols sequentially according to the method in [2]. The difference is that the two chaotic maps are used rotationally: for instance, if (3.3) is used for encrypting the jth plaintext block, the end state of (3.3) is adopted as the initial state of (3.4), and the (j+1)th block is encrypted with (3.4). In each iteration, eight masking bits are extracted from the least significant byte of the chaotic trajectory and appended to form a binary mask sequence for later use in the mask model.

Step 3. After all the plaintext blocks have been processed, a Huffman tree is built for all the collected iteration counts, including zero. Once the Huffman tree is built, each iteration count and the special symbol are replaced by the corresponding variable-length Huffman code to form the intermediate sequence r.

Step 4. The intermediate sequence r is masked by the binary mask sequence; this stream cipher is named the mask model. This mask model differs from the original method proposed in [2], as shown in (3.5). The ciphertext of block j is given by

c_j = (m_{j-1} + r_j) mod 2^32,   (3.5)

where r_j, m_{j-1}, and c_j are the intermediate sequence, the mask sequence, and the ciphertext sequence, respectively, packaged in 32-bit blocks. The initial value m_{-1} of the pseudorandom number sequence can be considered a cipher key of the proposed scheme.

Decryption Procedures

Before decryption, the initial values of the chaotic maps, which are the secret key, must be delivered to the receiver secretly. The key includes the control parameters b0, b1, m_{-1} and the initial value x0 of the chaotic maps. Moreover, the plaintext probability information must also be available to the receiver for reconstructing the symbol mapping table. With this information, the receiver is able to reproduce the secret chaotic trajectory used in encryption.
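Before walking through the decryption steps, here is a minimal sketch of the mask step (3.5) and its inverse. The combining operation is assumed to be addition modulo 2^32 (the excerpt's notation is ambiguous on this point); m_{-1} is the 32-bit initial value that forms part of the key.

```python
MOD = 2 ** 32  # intermediate, mask, and ciphertext words are packaged in 32-bit blocks

def mask_block(r_j, m_prev):
    """Mask model (3.5): c_j = (m_{j-1} + r_j) mod 2^32 (addition assumed here)."""
    return (m_prev + r_j) % MOD

def unmask_block(c_j, m_prev):
    """Inverse of (3.5), used by the receiver before decoding the Huffman stream."""
    return (c_j - m_prev) % MOD

# m_{-1} acts as part of the key; each subsequent m_j is derived from the chaotic
# trajectory produced while searching for block j, so the keystream depends on
# the plaintext and is never reused for a different message.
```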
The decryption process is similar to the encryption one. The receiver regenerates the chaotic sequence from the secret key and then looks up the plaintext symbol in the table. The process for decrypting the jth block is as follows.

Step 1. Unmask the intermediate sequence r_j according to m_{j-1} in (3.5).

Step 2. According to the intermediate sequence r_j, iterate the chaotic map using the shared secret parameters and initial conditions to regenerate the chaotic trajectory used in encryption. Decode the variable-length Huffman code using the shared Huffman tree to find the required number of iterations. If the number is zero, the block was encrypted only in mask mode, and the block is copied directly to the output. Otherwise, iterate the chaotic map the given nonzero number of times and determine the final partition visited by the chaotic trajectory. Then extract the mask bits from the binary representation of each chaotic map output to form the binary mask sequence m_j. Go to Step 1 and unmask the intermediate sequence r_{j+1}.

Step 3. Perform Steps 1 and 2 repeatedly until all ciphertext blocks are decrypted.

One-Time Pad Attacks

The security of the original Baptista-type cryptosystem is analyzed in [7, 8], and some effective attacks were suggested there. The major problem of that cryptosystem is that the search trajectory is determined solely by the secret key: the same trajectory is used for all plaintext sequences unless the secret key is changed. As a result, a cryptanalyst can easily launch a chosen-plaintext attack to recover the whole chaotic trajectory; details of the attack can be found in [7, 8]. Therefore, some researchers insist that Baptista-type cryptosystems cannot meet even the most basic security requirements. The fatal vulnerability of Baptista's scheme is the one-time pad attack, which can be considered a kind of chosen-plaintext attack. The plaintext block corresponding to each state of the chaotic trajectory is obtained by way of one-time pad attacks. As shown in Figure 2, the chaotic trajectory U is generated from the key-based chaotic map, and the corresponding plaintext sequence V is obtained by one-time pad attacks [7]. Since U remains unchanged for different plaintext sequences, the attacker requests the ciphertext of a plaintext such as P1 = {s1, s1, s1, ...}. Examining the ciphertext, the attacker learns that the values {v5, v8, v10, ...} equal s1, and thus obtains partial information about V. To gain complete knowledge of the symbolic sequence V, the attacker requests the ciphertexts of all kinds of plaintext sequences similar to P1. After obtaining enough of the sequence V, the attacker does not need to know the initial value of the chaotic map, yet knows exactly the plaintext corresponding to a ciphertext.
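The attack just described can be written as a short sketch against a fixed-trajectory Baptista-type scheme. Here `oracle_encrypt` is a hypothetical chosen-plaintext oracle that restarts the trajectory from the fixed key for every query and returns the per-block iteration counts; the cumulative counts then give the absolute positions on the reused trajectory U that map to the chosen symbol.

```python
def positions_mapped_to(symbol, oracle_encrypt, length):
    """Chosen-plaintext step of the one-time pad attack on a FIXED-trajectory
    Baptista-type scheme: encrypt a run of one repeated symbol and convert the
    iteration counts into absolute positions on the (reused) trajectory U."""
    counts = oracle_encrypt([symbol] * length)   # attacker-chosen plaintext
    positions, t = [], 0
    for k in counts:
        t += k
        positions.append(t)      # trajectory state at index t maps to `symbol`
    return positions

def recover_V(oracle_encrypt, length, num_symbols=256):
    """Repeat for every symbol (256 chosen plaintexts for 8-bit symbols) to fill
    in the symbol sequence V without ever learning the key x0, b."""
    V = {}
    for s in range(num_symbols):
        for t in positions_mapped_to(s, oracle_encrypt, length):
            V[t] = s
    return V
```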
From the above analysis, it is clear that the goal of the one-time pad attacker is to obtain the corresponding plaintext block sequence V. The variables V1, V2, ... denote partial information about the plaintext block sequence V. The approach is to request the ciphertext of a plaintext like P1 and thereby obtain the partial information V1 of V. After the attacker has requested the ciphertexts of all 256 kinds of plaintext, full knowledge of the symbolic sequence V is gained, as shown in (4.1), where e is the plaintext bit length; e is set to 8 because an ASCII symbol is 8 bits. In the proposed scheme, the encryption procedure depends not only on the initial value but also on the plaintext: each distinct plaintext block leads to a different search trajectory for encryption, so knowledge of the search trajectory for a particular plaintext block is useless for other plaintext blocks. Moreover, because the chaotic maps are iterated sequentially, the confusion introduced by encryption is diffused. These properties make it difficult to attack the proposed cryptosystem with a known- or chosen-plaintext attack. From Section 4.2 we know that the trajectory is different for different plaintexts {P1, P2, P3, ...}; therefore (4.1) is unavailable in the proposed scheme, and the attacker cannot gain the full knowledge described by (4.1).

The Security of Block Model and Mask Model

The block cipher of Embedding Compression in Chaos-Based Cryptography can be considered a variant of the Baptista-type cryptosystem [6]. Here the reasons for its vulnerability are reviewed. Consider the ciphertext sequences C1 = {1, 2, 2, 1, 3, 1, ...} and C2 = {2, 1, 3, 2, 1, 1, ...}; the corresponding trajectories are U1 and U2, as shown in Table 1. For comparison, the original Baptista-type cryptosystem [6] is also presented. It is found that the chaotic trajectory U1 is distinct from U2 in the proposed scheme, whereas the two trajectories remain the same in the Baptista-type cryptosystem.

The scheme presented in [25] is a complex combination of multiple chaotic maps. In the proposed scheme, interaction among the chaotic trajectories exists: after encrypting the current plaintext block, the last state of the chaotic map is selected as the initial state of the next chaotic trajectory used to encrypt the next plaintext block. Therefore, the initial value used for encrypting the current plaintext block is related to the previous ciphertext, and different plaintext sequences lead to distinct initial values for encryption. It is useless to launch a chosen-plaintext attack by regenerating the chaotic trajectory, since each different plaintext is encrypted with a unique chaotic trajectory. Meanwhile, the additional chaotic dynamics results in a more secure cryptosystem, because it increases the confusion in the encryption process and expands the key space.
After the searching model has been performed, a pseudorandom bitstream is generated from the chaotic trajectory, and the less probable symbols are encrypted by masking them with this pseudorandom bitstream. In the proposed scheme, the pseudorandom number sequence is also generated from the chaotic trajectory, which is related to the plaintext sequence; this is expressed in (4.2), where P is the plaintext sequence. In a chosen-plaintext attack, the attacker requests the ciphertext of a chosen plaintext and, in the same way as presented in Section 2, can obtain the mask sequence without any knowledge of the pseudorandom number seed. However, the pseudorandom number sequence relates only to the plaintext P1; for other plaintext sequences, the mask sequences m are totally different. Hence this kind of chosen-plaintext attack is ineffective against the proposed scheme.

Periodicity of Multiple Chaotic Dynamics

The period of the outputs in the chaotic region can be regarded as infinite; however, the actual period is limited by the finite-precision format of a digital computer. The short period of chaotic trajectories generated by finite-precision systems [26] has been an obstacle to employing chaotic dynamics for cryptographic purposes: a short period makes the chaotic trajectory reused frequently in the encryption process, which is equivalent to reusing the key and weakens the security of the cryptosystem. The control parameters b0 and b1 should be chosen carefully to avoid the nonchaotic regions [2, 23], whose Lyapunov exponent is negative. In the proposed scheme, the Lyapunov exponent is chosen as the indicator; it defines the region where the system behaves chaotically [27], as shown in Figure 3. We calculated the period of the logistic map when the control coefficients b0 and b1 lie in the region where the Lyapunov exponent is positive. The results show that all period lengths in the chaotic region are far larger than 10^7 in a double-precision system, which is a reasonable period length for practical cipher applications [26].

Because of the plaintext perturbation and the interaction of the multiple chaotic maps, the period of the multiple chaotic dynamics tends to infinity in theory. Even in the worst case, the period of the multiple chaotic maps is the same as that of a single map.

Simulation Results

To implement the proposed cryptosystem, the control parameters are arbitrarily selected as b0 = 3.999999991 and b1 = 3.999991, and the initial condition x0 is arbitrarily set to 0.3388. The plaintext is encrypted in bytes, and the phase space of the chaotic map is divided into 256 equal-width partitions. The maximum number of iterations for the search mode is 15. The proposed algorithm is implemented in the C programming language on a personal computer with an Intel Core 2 2.00 GHz processor and 2 GB of memory.

Compression Ratio, Encryption and Decryption Speed

To test the compression capability of the proposed scheme, 18 distinct files of different types are used, including text, executable files, geophysical data, and pictures; they are standard files from the Calgary Corpus. These files are encrypted using the proposed scheme and the algorithm suggested in [2], respectively. Only the 16 most probable plaintext symbols are selected, and they are all mapped to one table [2]. The simulation results are listed in Table 2. They show that all files can be compressed effectively using the proposed approach, and the compression ratios of our scheme are the same as those reported in [2].
The encryption and decryption speed of a Baptista-type chaotic cryptosystem depends on the average number of iterations required. The encryption and decryption times on the Calgary Corpus files [28] are given in Table 3. For comparison, the encryption and decryption times using the algorithm in [2] are also listed. The results show that the total processing time of the proposed scheme is almost the same as that of the scheme proposed in [2].

Compared with existing chaotic cryptographic schemes, the proposed scheme does not require any additional computation. Tables 2 and 3 indicate that the multiple chaotic dynamics do not sacrifice performance for the enhanced security. For the classical separate compression-encryption schemes, the speed results are taken directly from [2]. Table 3 shows that both the proposed scheme and the scheme in [2] are faster than the separate compression-encryption scheme.

Key Space and Key Sensitivity

To test the key sensitivity, the files from the Calgary Corpus are encrypted using different sets of secret keys: encryptions with the proposed scheme were performed with only a small change of the parameters. The two resulting ciphertext sequences are then compared bit by bit, and the percentage of changed bits is calculated. For the parameters b0, b1, and x0, the 15th digit after the decimal point is changed by a minimal amount; for m_{-1}, single bits in different positions are flipped from 0 to 1 or from 1 to 0. The measured bit-change percentages are 50.01%, 50.01%, 49.98%, and 50.07% for b0, b1, x0, and m_{-1}, respectively. The results show that the bit-change percentages are close to 50%, indicating that the ciphertext is very sensitive to the key.

The freely chosen key of the proposed scheme consists of the control parameters b0, b1 and the initial value x0, together with the 32-bit initial cipher block value m_{-1}. The key sensitivity test indicates that the 15th decimal digit is effective, that is, the ciphertext is sensitive to a change in that digit. The control parameters b0, b1 and the initial state x0 are real numbers represented in double-precision format, which is equivalent to 52 bits. According to Rule 5 of [23], b0 and b1 should avoid the nonchaotic regions; here, similarly to [2], the range (3.9, 4.0) is adopted as the chaotic region, which is equivalent to 46 bits for each of them. The total key space is then 52 + 46 + 46 + 32 = 176 bits, which is longer than the 130 bits used in [2]. Similar to [12, 13], chaotic maps of higher dimension can be chosen if a larger key space is required. As a result, an attacker faces an even larger key space in a brute-force search attack.

Randomness of the Binary Mask Sequence

The mask model is a stream cipher that masks plaintext symbols with a pseudorandom bitstream [2]. The randomness of the pseudorandom bitstream is confirmed by the statistical test suite recommended by the U.S. National Institute of Standards and Technology (NIST) [29]. 300 sequences of equal bit length are extracted for testing, and they all pass the statistical tests, including frequency, block frequency, cumulative sums, runs, longest run, rank, and fast Fourier transform (FFT). All P-values are larger than 0.01; therefore, the sequences are considered random according to NIST Special Publication 800-22 [29].
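A small sketch, assuming the two ciphertexts are compared as equal-length byte strings, of the bit-change percentage used in the key sensitivity test, with the key-space arithmetic from the text repeated as a comment:

```python
def bit_change_percentage(c1: bytes, c2: bytes) -> float:
    """Percentage of differing bits between two equal-length ciphertexts;
    values near 50% indicate strong sensitivity to a one-digit or one-bit
    change of the key."""
    assert len(c1) == len(c2)
    diff = sum(bin(a ^ b).count("1") for a, b in zip(c1, c2))
    return 100.0 * diff / (8 * len(c1))

# Key-space bookkeeping from the text: x0 contributes 52 bits (double precision),
# b0 and b1 contribute 46 bits each after restriction to the chaotic region
# (3.9, 4.0), and m_{-1} contributes 32 bits: 52 + 46 + 46 + 32 = 176 bits.
```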
Conclusion

In order to strengthen the security of chaos-based cryptography with embedded compression, a general multiple chaotic dynamics is proposed to correlate the chaotic trajectory with the plaintext. As a result, both steps of embedding compression in chaos-based cryptography are enhanced. For the searching model, the security is enhanced to resist the chosen-plaintext attack known as the one-time pad attack. In the mask model, a potential vulnerability is analyzed and repaired. Meanwhile, the key space of the proposed scheme is enlarged to resist brute-force search attacks.

Figure 2: Chaotic trajectory and the corresponding plaintext sequence.
Figure 3: A plot of the Lyapunov exponent computed at increments of 0.0001, where the control coefficient b is in the range (3.6, 4).
Table 1: Chaotic trajectories for various ciphertext sequences.
Table 2: Ciphertext-to-plaintext ratio of Calgary Corpus files.
Table 3: Encryption and decryption time of Calgary Corpus files.
An integrated microspectrometer for localised multiplexing measurements

Introduction

Although most of these systems have demonstrated the ability to accommodate several steps involved in an assay (e.g. separation, purification and concentration) on chip, many of them still rely on conventional instrumentation for detection [8, 9]. The needs for low-cost, sensitive and portable optical microengineered spectroscopic detection systems are yet to be met.

Fluorescence detection is perhaps the most commonly used optical method in biological analysis due to its high sensitivity and specificity, and it remains the method of choice in many optical microsystems [3, 10, 11]. Fluorescence detection requires the efficient separation of fluorescence from excitation light, which is commonly achieved by filtering technology [12]. Although many examples have demonstrated the integration of optical elements for on-chip fluorescence measurement [13-19] (using optical fibers [15, 16], planar waveguides [1, 17] and on-chip filters [18, 19]), these systems are often fabricated for a specific range of wavelengths of a particular fluorophore. For applications that require simultaneous detection of multiple fluorophores (e.g. microflow cytometry), detection is still conducted off-chip using conventional filters and detectors [20].

At the microengineering level, spectroscopic analysis (e.g. fluorescence, Raman, and IR spectroscopy) has been well established for a vast range of applications. Previously, we demonstrated a proof of concept that a monolithic integrated arrayed waveguide grating (AWG) microspectrometer can be used to discriminate different wavelengths in the visible wavelength range [21], thereby enabling multiplexed fluorescence analysis. In this work, we build upon our initial demonstration and show the development of a focusing AWG device for localized microspectroscopic measurements of a single point, enabling new methods for controlling the light pathway and for implementing sample spatial location within the microsystem.

Lenses are key optical elements for focusing light onto a sample and collecting specific signals of interest. Although this can be easily achieved using commercially available objective lenses in conventional bulky instrumentation, the incorporation of on-chip microlenses (e.g. fluid-filled polymer lenses [22], ball lenses [23], and microlens arrays [17]) requires demanding fabrication processes [25, 26]. However, when combined with an optical microsystem, the precise control of samples is restricted by the limitations of the microsystem (e.g. fabrication, detection and physical dimensions).
In this work we purpose-designed lens-aided waveguides, introducing a focusing effect into the AWG microspectrometer. To enable precise control of the samples for localized detection, the integration of the device with other instruments was explored in two formats, namely a flow format with an integrated microfluidic device and a microwell format with immobilized functional beads. The former allows continuous, multiplexed analysis of samples with high sensitivity and spatial resolution (with potential for many applications, for example as a miniaturized microflow cytometer with the unique capability of spectroscopic analysis). The latter forms a novel "bead-AWG" device, allowing the multiplexed detection of a series of associated events with minimal sample handling; all of these strengths make it an appealing platform for use in a remote setting.

Device fabrication and integration

The dense arrayed waveguide grating section was fabricated as described previously [21]. To fabricate the designed lens radius and sample cuvette, e-beam lithography was used to define the curvature of the lens in a UVIII (Shipley) resist layer that was spun on top of the waveguide. This was followed by reactive ion etching (CHF3/Ar in an Oxford Instruments RIE80+ machine) to transfer the lens shape from the resist to the waveguide. These combined e-beam lithography and dry-etching pattern transfer processes developed in-house have a success rate of >90% for features of these dimensions. An integrated AWG-microfluidic device was made by bonding a PDMS microfluidic chip to the AWG device via oxygen plasma treatment (100 mW for 20 s).

To create the "bead-AWG" device, functional beads were immobilized on the sample cuvette as detailed in the protocol, Scheme 1 and Fig. S1 (ESI†). Briefly, a gold disk with a diameter of 40 μm was patterned in the centre of the sample cuvette. The device was then immersed in a 5 mM aqueous solution of cysteamine hydrochloride overnight to generate an NH2-terminated functionalised gold surface. Water-soluble "long-arm" biotin was covalently linked to the gold disk using the ethyl(dimethylaminopropyl)carbodiimide/N-hydroxysuccinimide (EDC/NHS) conjugation method. The resultant biotinylated surface serves as a binding site for the immobilization of streptavidin-coated microbeads and subsequent biotinylated quantum dot (QD) binding events. The streptavidin-coated microspheres with internal fluorophores (Flash Red, excitation 660 nm, emission 690 nm, diameter 0.97 μm) were from Bangs Laboratories. The quantum dots were purchased from Invitrogen (Life Technologies Corporation).

Device characterization and on-chip fluorescence measurements

The spectral range and throughput of each output channel were characterized using a white light source (Anritsu MG922A) and a conventional spectrometer (Triax 320 from Jobin Yvon). Optical images of the device were obtained using a fluorescence microscope. The fluorescence signal from the output channels of the integrated AWG device was recorded using a CCD camera (Fig. S2, ESI†). As a means of verifying the AWG outputs, measurements were simultaneously made using an objective lens placed above the cuvette and coupled to a spectrophotometer (Triax 320, Jobin Yvon) (Fig. S2, ESI†).

Design and characterization of the AWG device with focusing waveguides

The design of the integrated AWG device with lensed waveguides is shown in Fig. 1a.
The AWG chip consists of five essential parts: the input waveguides (C-WG and E-WG), the first slab region, the arrayed waveguides, the second slab region and the output waveguides. These were designed around a center wavelength of 680 nm with a channel-to-channel wavelength spacing of 10 nm for the output channels, as detailed previously [21]. The sample cuvette was located in front of the AWG input waveguide. The straight waveguide (C-WG) on the left is used for characterization, while the curved waveguide (E-WG) is for introducing excitation light for fluorescence measurements.

To introduce the focusing effect, lens curvatures were incorporated into the ends of the input and collection waveguides (I-AWG), as shown in Fig. 1b. The design of the curvature and the spacing of the end facets take into account the refractive indices of the waveguides and the aqueous sample solution according to eqn (1) [27, 28], where f is the focal length, R is the radius of the lens curvature, and n_WG and n_H2O are the refractive indices of the waveguide material and water, respectively (n_WG = 1.478, n_H2O = 1.33). Simulation using the beam propagation method (BPM) was performed to understand the beam shape from both lensed and flat-end waveguides (Fig. 1b and c). A focusing and strong coupling effect was observed for the lensed waveguide (Fig. 1b); in contrast, diverging light paths were seen for the flat-end waveguide (Fig. 1c). Based on the simulation, a two-dimensional lens with a radius of 4 μm was incorporated at the end of each of the three waveguides that are intercepted by the sampling cuvette (Fig. 1b). The three identical lenses share the same focal area, which is located in the center of the cuvette. With lens curvature at the ends of the integrated waveguides, focused excitation and highly localized collection can be realized.

The transmission spectrum of each channel in the lensed AWG device is shown in Fig. 2. The full width at half maximum (FWHM) of each output peak is ~10 nm, consistent with the design. Similar to a conventional diffraction grating, an AWG device generally has more than one diffraction order that can be used for light dispersion. As an illustration, the transmission spectra for two diffraction orders (m = 7 and m = 8) are shown in Fig. 2, where the free spectral range (i.e. the largest wavelength range for a given order that does not overlap with the spectrum of the adjacent order) is 80 nm. This periodicity allows the selection of an effective wavelength range with an order-sorting filter. For example, by inserting a specific order-sorting filter (i.e. a bandpass filter) in front of the detector, the light to be investigated can be restricted to the wavelength range between 560 nm and 640 nm. N.B. by using a cascade design for the AWG, a larger free spectral range can be achieved.

Evaluation of focusing effects in flow

A simple integrated AWG-microfluidic platform was fabricated to evaluate the focusing capabilities of the lensed waveguides (Fig. 3a). To investigate the spatial regions of the cuvette sampled by each of the waveguides, a fluorescent dye solution (10 μM Cy5) was delivered into the microfluidic channel at a flow rate of 6.7 mm s−1. Laser light (632.8 nm) was then introduced into the cuvette area through C-WG, E-WG and the appropriate output channel (i.e. Ch 8) waveguide in turn (Fig. 3b)
(note: Ch 8 is the specific channel that allows 632.8 nm light to traverse the slab regions of the AWG device and exit from the AWG input waveguide). Cy5 molecules in the light path were excited and emitted fluorescence at longer wavelengths (670 nm). This emission was imaged using an upright fluorescence microscope with a Cy5 filter cube, allowing easy visualization of the light path from each of the three waveguides (Fig. 3b (i, ii, and iii)). With the lensed waveguides, the focusing effect can be seen clearly in Fig. 3b. Fig. 3b(iv) is a composite image of the light paths corresponding to the three waveguides, using different colors to label each path. A tight intersection of the three light paths can clearly be seen in the middle of the sample cuvette, illustrating the capability of lens-aided focusing for localised collection. In contrast, for the AWG system without lensing, each of the light paths associated with the waveguides' fields of view is inherently divergent, as shown in Fig. 3c.

This lens-aided capability for confining the beam shape offers obvious advantages by enhancing signal collection: focusing light to a small spot can increase fluorescence emission; likewise, collecting light from a highly localized area can greatly reduce unwanted signals from light scattered by other features or objects nearby. In addition, the AWG device can be integrated with hydrodynamic focusing microfluidics to deliver the objects of interest into the focused region [29]. This, in conjunction with its capability for multiplexed fluorescence measurements, makes it an appealing platform for a broad range of applications such as microflow cytometry and fast diagnostics.

Localized spectroscopic measurements with a "bead-AWG" format

Assays that offer easy operation and fast "yes" or "no" answers are often highly desirable. In this context, bead-based assays can be a promising choice. Beads have a large surface area and can therefore serve as a solid phase for capturing targets and simplifying purification. Bead assays incorporated within microfluidics have shown promise for applications from diagnostic tests [30, 31] to multiplexed bioassays [32], although signal detection still relies on bulky external instrumentation. The combination of bead assays with the AWG platform could therefore provide a miniaturized platform offering both easy operation and multiplexed measurements.

As a proof of concept, streptavidin-coated beads were immobilized on the biotin-functionalized sample cuvette via specific streptavidin-biotin binding (the protocol is detailed in the ESI†). This allowed the whole process (i.e. capturing of targets, purification and detection) to occur at a defined location. Since streptavidin-biotin binding is reversible, the immobilised beads from one assay can be easily removed and replaced with fresh ones for the next assay.

A patterned gold disk was used to immobilize microbeads at a defined location on the sample cuvette (Fig. 4a). An optimized incubation time of 30 minutes was employed to reduce non-specific adsorption of beads. As shown in Fig. 4b,
the beads closest to the E-WG excitation waveguide were illuminated most strongly, and they partially occluded those on more distant parts of the gold disk. This indicates that in an optimized device it might be beneficial to reduce the size of the central sampling area. Nevertheless, the configuration used here still allowed the effective collection of the fluorescence signal from the immobilized beads by the AWG input waveguide, as detailed in the following section.

Localized spectroscopic measurements allowing successive fluorescence detection

Fluorescence detection via filter technologies requires the efficient removal of all wavelengths except the fluorescence of interest. This can be a problem for samples that contain naturally fluorescent substances (e.g. GFP cells or bacterial cells containing carotenoid) or exhibit high levels of autofluorescence. As an optical (de)multiplexer, the AWG device is capable of detecting multiple narrow emissions simultaneously by imaging the channel outputs directly onto a CCD. If a broadband fluorescence measurement is required, the intensities in each of these images can simply be aggregated, with no loss in light collection efficiency. Thus, with the "bead-AWG" format, it is feasible to detect fluorescence signals from a series of successive events or targets through the collection of a series of CCD images from each channel.

As a model system, streptavidin-coated microbeads with Flash Red fluorescent labels (central emission peak at 690 nm), 605 nm biotin-conjugated quantum dots (605-QDs) and 655 nm streptavidin-conjugated quantum dots (655-QDs) were employed. The successive assembly of the 690 nm microbeads onto the gold disk, followed by the 605-QDs and then the 655-QDs, was evaluated on-chip, step by step.

Fig. 5 shows the fluorescence spectra (on the left) collected using the spectrophotometer and the CCD images of selected output channels (on the right). The intensities of each channel in the CCD images are also displayed as histograms for easy comparison and quantification. Note that only the AWG output channels Ch4, Ch5 and Ch1 are shown, since these correspond to the central wavelengths of the three fluorophores employed (i.e. 682 nm is detected in Ch4, 605 nm in Ch5 and 655 nm in Ch1).

It is clear that after the adsorption of the Flash Red fluorescent microbeads, the peak of the fluorescence spectrum was at 690 nm (Fig. 5a, left graph) and, as expected, Ch4 had the brightest spot (Fig. 5a, right graph), with weaker spots in the two other channels due to the broad spectrum of the Flash Red fluorescence emission. After the addition of the 605-QDs, a peak around 605 nm was observed in the fluorescence spectrum (Fig. 5b), and the highest light intensity in the CCD image was found for Ch5. The high-wavelength spectral tail in Fig. 5b (left graph) reveals that the CCD image was a combination of the fluorescence emission from both the quantum dots and the Flash Red fluorescent microbeads. In the same way, the addition of the 655-QDs caused the brightest spot to move to Ch1 (Fig. 5c). The relative fluorescence intensity for each binding step is presented in Table 1. The channel with the highest intensity corresponds well to the emission peak of the most recently added targets.
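The readout logic described above (aggregate the channel intensities for a broadband measurement, or take the brightest channel to identify the latest binding event) can be sketched as follows. The channel-to-wavelength map reflects the three channels discussed in the text, but the intensity values in the example are illustrative assumptions, not measured data.

```python
# Channel-to-wavelength map for the three output channels discussed in the text
# (682 nm -> Ch4, 605 nm -> Ch5, 655 nm -> Ch1).
CHANNEL_WAVELENGTH_NM = {"Ch4": 682, "Ch5": 605, "Ch1": 655}

def latest_binding_event(channel_intensities):
    """Identify the most recently added fluorophore from per-channel CCD
    intensities: the brightest channel corresponds to its emission peak."""
    brightest = max(channel_intensities, key=channel_intensities.get)
    return brightest, CHANNEL_WAVELENGTH_NM[brightest]

def broadband_intensity(channel_intensities):
    """'Broadband' readout: simply aggregate all channel intensities."""
    return sum(channel_intensities.values())

# Example with made-up intensity values (arbitrary units):
print(latest_binding_event({"Ch4": 120.0, "Ch5": 340.0, "Ch1": 95.0}))  # ('Ch5', 605)
```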
Finally, to verify the reversibility of the immobilization process, the sampling cuvette was washed with a biotin solution three times. Having done this, it was found that the fluorescence signal was too weak to be detected either by the Triax spectrophotometer or by the CCD camera, indicating the absence of microbeads and quantum dots. This enables the device to be reused for further measurements.

Taken together, these results conceptually prove the great potential of "additive" fluorescence assays with the "bead-AWG" device. It has no demanding requirements for eliminating non-targeted fluorescence, and it is capable of detecting a series of associated events in a simple readout (e.g. an optical image).

Conclusions

We have developed a portable AWG microspectrometer for localized multiplexed measurements. Using an integrated AWG-microfluidics platform, we demonstrated that the lensing function of the device confined the beam shape for focused illumination and signal collection. This capability can enhance signal collection and give better spatial resolution, and would benefit the analysis of small-volume samples (e.g. cells) in flow. Future integration of the focusing AWG device with advanced microfluidics and lab-on-a-chip platforms will open up great potential for applications that require both sophisticated sample handling and versatile detection capability, for example those requiring simultaneous multiplexed measurements.

In addition, we have developed a "bead-AWG" method capable of localized fluorescence detection of a series of events. This new detection method can detect the fluorescence of interest without the need for its isolation from the background (which would normally be required with current technologies). Furthermore, it is capable of measuring several events simultaneously or successively and reporting these results in a simple image readout. Considering the small footprint of the device, the simple assay procedure and the visually direct readout, it can serve as a promising portable platform for fast analysis in remote settings where resources and specialized skills are limited.

We acknowledge the partial support from the EPSRC (EP/H04986X/1 and EP/J009121/1). We gratefully acknowledge the technical team of the James Watt Nanofabrication Centre (JWNC) at the University of Glasgow for the support in fabricating the devices. JMC acknowledges support from ERC Bio-Phononics (340117) and the receipt of an EPSRC Advanced Fellowship Award (EP/K027611/1).

Fig. 1 Figure showing (a) the configuration of a typical AWG device (C-WG: white light characterisation waveguide, E-WG: excitation waveguide, and I-AWG: AWG input waveguide); (b) the design and simulation of focusing waveguides; and (c) the design and simulation of flat-end waveguides.

Fig. 2 Figure showing the normalized transmission spectral response of the AWG device for two different diffraction orders, m = 7 and m = 8. The 8 output channels are numbered from shorter to longer wavelengths.
Fig. 3 Figure showing (a) a schematic of the integrated AWG-microfluidic platform; (b) fluorescence microscopy images of the optical system with the lensed waveguides; and (c) fluorescence microscopy images of the optical system with the flat-end waveguides. For both systems, a Cy5 fluorescent dye solution was delivered into the microfluidic channel and laser excitation light (632.8 nm laser) was introduced into different integrated waveguides: (i) C-WG; (ii) E-WG; and (iii) the AWG input waveguide. By filtering out the excitation light, the fluorescence microscopy images of the cuvette area were acquired; (iv) is their composite image with a different color for each light path. The arrows in (i) to (iii) indicate the direction of laser excitation. In (iv), the intersection area of the three light paths is highlighted by a pink dashed circle.

Fig. 5 The fluorescence spectra and the corresponding AWG and CCD detection results of the flow (successive) assay. The figure comprises (a) a micrograph of the Flash Red fluorescent microbeads; a 650 nm long-pass filter in the light path was used to totally eliminate the 633 nm excitation signal (n.b. the detected light corresponds to the AWG diffraction order m = 7); (b) as above, after addition of 605 nm biotin-conjugated quantum dots, with 532 nm laser excitation and a 550 nm long-pass filter (selected diffraction orders: m = 7 and m = 8); and (c) as for (b), after addition of 655 nm streptavidin-conjugated quantum dots. On the left are the pre-acquired fluorescence spectra, while on the right are the CCD images of the specific output channels and their histograms of light intensity maxima. The error bars are standard deviations from at least three independent repeated experiments.

Table 1 Relative fluorescence intensity for each binding step.^a
^a A: Flash Red fluorescent microbeads. B: after adding biotin-conjugated quantum dots (605 nm). C: after adding streptavidin-conjugated quantum dots (655 nm).
Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

The harmony searching (HS) algorithm is a kind of optimization search algorithm currently applied to many practical problems. The HS algorithm constantly revises the variables in the harmony memory and the probabilities of different values, and uses them to complete the iterative convergence toward the optimal result. Accordingly, this study proposes a modified algorithm to improve the efficiency of the HS algorithm. First, a rough set algorithm is employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value is obtained using the improved HS algorithm. This converged optimal value is employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method.

Introduction

With the development of image processing technology, medical imaging technology has significantly improved, and a wide variety of medical images are currently being produced. Currently available imaging approaches include computed tomography, magnetic resonance imaging (MRI), and ultrasound. These techniques are extensively used in medical diagnosis, preoperative planning, treatment, and post-treatment monitoring. MRI is commonly used in actual clinical diagnosis. Compared with other technologies, MRI does not expose the human body to radiation, and at the same time it attains high-resolution imaging of human soft tissue, which can be acquired in any imaging plane [1-4]. Although MRI technology is extensively used in medicine, MRI data and images can be degraded, for both objective and subjective reasons, by data transmission, the environment, and the instruments themselves, which produce grayscale unevenness and a bias (offset) field effect; limited resolution produces similar noise effects. Therefore, improving MRI technology is important to enhance the analysis.

Under MRI, the skull appears relatively bright white, and the range of gray values in the skull and the white matter usually overlaps. The skull bone and muscle exhibit gray values similar to those of brain tissue, so in a segmentation containing white and gray matter the skull is resolved together with the white matter. Therefore, accurate segmentation of MRI images is important to eliminate this interference. Region-growing methods are suitable for achieving clear segmentation when the target boundaries are clear; if the target is unclear, the image cannot be effectively extracted. Dynamic contour models generate enhanced segmentation results but are disadvantageous because of their long computing time. Meanwhile, deformable model methods are divided into two categories, namely the parametric deformable model and the level-set deformable model; these methods rely on iterative calculation, which takes a long time, and the iteration points must first be determined manually. Given the involvement of personal and subjective factors, the segmentation attained by such methods is unstable. Mathematical morphology is effective for removing the skull from the image, but a suitable threshold is difficult to determine with this technique. Falcao et al. [5] proposed the use of a live-wire segmentation algorithm.
The algorithm can provide the user with effective control of the segmentation process, allowing the user to intervene in the segmentation results. Another method utilizes the artificial neural network (ANN), which is composed of many processing units (nodes). The ANN can simulate the learning process of the biological, massively parallel network of the human brain. After training under ANN theory, results are obtained quickly from the input data, so the speed of image segmentation is effectively improved. The neural network algorithm does not require prior knowledge of the probability distribution of image gray values; consequently, segmentation results stay close to the original image [6]. The neural network method shows unique advantages in solving a series of complex image segmentation problems; however, several issues arise. First, the energy function can fall into a local minimum during minimization. Second, the convergence of the neural network depends on the data; thus, suitable values for the network inputs are needed.

The fuzzy clustering method is more extensively applied to image segmentation. The fuzzy clustering algorithm has the following advantages: it avoids the issue of threshold setting, does not entail human manipulation, and is particularly suitable for fuzzy and uncertain images. In this study, the fuzzy c-means clustering algorithm was selected to segment MRI brain images. Researchers have found that the effect of the initial values of the cluster centers on the fuzzy clustering algorithm is relatively large. Its objective function is nonconvex and has several local minima; thus, with a poor initial value, FCM will fall into a local minimum. This study utilized a rough set to compute the initial values of the FCM.

The harmony searching (HS) algorithm was developed by Korean scholars: Geem et al. [7, 8] proposed this kind of intelligent optimization algorithm in 2001. The algorithm mimics the process of musical improvisation; different musical tones are applied to a harmony vector to search for a harmony randomly, and the process then attains an optimal harmony. Jang et al. [9] used a Nelder-Mead simplex HS algorithm, whereas Mahdavi et al. [10] adopted the adaptive HS algorithm (IHS). Omran and Mahdavi compared the performance, parameters, and noise effects of the original HS algorithm [11], IHS, and the global-best HS algorithm. H.-Q. Li and L. Li [12] employed the genetic algorithm and the HS algorithm to explore the three test functions Rastrigin, Griewank, and Sphere. Liang and coworkers [13, 14] adjusted certain parameters to improve the HS algorithm and used a hybrid GA-HS algorithm to solve the problem of locating the critical slip surface of slopes. Cheng et al. [15] adopted the HS algorithm, along with several other heuristic optimization algorithms, for earth slope stability analysis. Dong et al. [16] proposed an HS K-means clustering algorithm for Web text categorization. Bezdek [17] utilized an adaptive adjustment of parameters in the improved HS algorithm to solve anomaly detection problems in digital images of biological tissue. In recent years, the HS algorithm has been adopted in several applications. However, the algorithm exhibits several disadvantages: it operates with weak robustness, considerable randomness, a lack of specific search direction, and slow convergence speed, and it easily falls into a local optimal solution.
This problem can be attributed to the search mechanism of the HS algorithm. This study proposes an improved HS algorithm for MRI brain image segmentation to overcome the aforementioned disadvantages. We use a rough set together with the memory bank of the HS algorithm, employing the rough-set upper and lower boundaries to correct the "optimal" and "worst" harmonies of the HS algorithm. By doing so, we prevent the HS algorithm from converging to a local optimum. The optimal solutions obtained by the improved HS algorithm are used as the initial values for the fuzzy c-means clustering algorithm, which overcomes the random determination of the initial values in fuzzy clustering. Experimental results show that the proposed algorithm achieves good convergence and that the segmentation effect is ideal for MRI brain images. Besides ANN and fuzzy clustering, ensemble learning [18, 19], feature ranking [20], and sample selection [21] have also been employed in biomedical research.

Harmony Searching Algorithm

The HS algorithm was proposed from the study of musical performance. Each musician produces individual tones, which together form a vector of values. If the music produced is pleasant, the tone combination is recorded, and the instruments are employed to generate a better harmony in the subsequent attempt. The musical harmony is analogous to the optimal solution vector, whereas the players' improvisations correspond to the local and global search procedures of optimization techniques. The HS algorithm uses a random search based on selection probabilities and pitch adjustment, without requiring gradient information about the harmony. Compared with earlier heuristic optimization algorithms, the HS algorithm is conceptually straightforward, uses fewer mathematical expressions and only a few parameters for the random search, and can be more easily adapted to various engineering problems. These ideas can be formulated as a solution vector x = (x_1, x_2, ..., x_n) with an evaluation function f(x). The HS algorithm is mainly divided into the following steps.

Step 1 (parameter initialization). The HS algorithm includes a series of important parameters, such as the number of iterations; the harmony memory database HM; the pitch adjusting rate limits PAR_max and PAR_min; the fine-tuning bandwidth BW; the harmony memory size HMS; the dimension n of the optimization problem; and the upper and lower boundaries U_i and L_i of each variable.

Step 2 (harmony memory initialization). A harmony database is adopted to store HMS random harmony vectors. Each component of a random harmony vector is drawn between the lower and upper boundaries L_i and U_i:

x_j,i = L_i + rand(0, 1) × (U_i − L_i),  i = 1, ..., n,  j = 1, ..., HMS.   (1)

The HM matrix is then

HM = [x_1; x_2; ...; x_HMS].   (2)

Step 3 (new harmony generation). According to the change in the objective function value, the harmony memory considering probability HMCR is set adaptively, and the pitch adjusting rate PAR is adjusted dynamically between the preset maximum and minimum values. After parameter adjustment, a new harmony solution vector is created by learning from the differences among harmonies in memory, pitch adjustment, and a random mutation process.

Step 4 (memory bank updating). The new harmony is compared with the worst harmony in the database; if the new harmony is better, it replaces the worst harmony in the database.

Step 5 (termination conditions).
Whether the current number of iterations has reached the maximum number of iterations is checked. Once the maximum number of iterations is reached, the iteration cycle of Steps 3 to 5 is terminated.

Harmony Searching Algorithm Improvement

Compared with other optimization algorithms, the HS algorithm is superior for the following reasons. (1) The HS algorithm requires minimal mathematical machinery and does not entail special variable initialization. (2) The entire search process of the HS algorithm follows a largely random pattern without considerable manual intervention. (3) The HS algorithm considers all the available harmony memory information to create a new harmony vector. Given these advantages, the HS algorithm has attracted the attention of many scholars since 2001. However, the randomness of the algorithm results in low precision, so the main goal here is to improve the accuracy of the optimization. The HS algorithm is a strongly random heuristic algorithm; it has a simple structure, is easy to operate, and involves only a few parameters. However, it is sensitive to its parameters and converges slowly, which calls for further research on enhancements. The HS algorithm uses a few paramount parameters that directly affect its behavior, including the pitch adjusting rate limits PAR_max and PAR_min and the fine-tuning bandwidth BW. In this study, the accuracy of the HS algorithm is improved to prevent premature convergence to a local optimum. In this regard, the following enhancements were applied.

Construction of a New Harmony HM Database. In the original harmony algorithm, the harmony memory is initialized randomly, which gives the algorithm a relatively large stochastic component and reduces its accuracy. In this study, a rough set is employed on the upper and lower boundaries to establish a new harmony memory HM. Rough set theory is adopted to reduce the randomness of the harmony memory database and improve its accuracy.

Step 1. For 1 ≤ i ≤ c, where c_i is a clustering center, establish the initial mean values.

Step 2. For the data points x_j, 1 ≤ j ≤ n, determine the upper and lower boundaries associated with each clustering center c_i; the distance between a data point x_j and a center c_i is denoted d(x_j, c_i).

Step 3. If d(x_j, c_i) attains the minimum value, then x_j must be close to c_i. If the difference between the distances is less than a given threshold, then x_j is assigned to the upper boundary set; otherwise, x_j is assigned to the lower boundary set.

Step 4. A lower-boundary matrix is established accordingly.

According to the previously presented steps, the candidate data are preliminarily screened to establish the harmony memory. However, more accurate data are required by the algorithm, so the k-nearest neighbor (KNN) algorithm is employed to obtain an appropriate harmony memory matrix; the clustering centers are given by prior knowledge. The KNN method [22] was originally proposed in 1968 by Cover and Hart. KNN is a theoretically mature classification algorithm. Its core idea is simple: if the samples most similar to a given sample in feature space (its nearest neighbors) mostly belong to one category, then the sample likely belongs to that category as well. The KNN decision about the category is based solely on the nearest sample or the several nearest samples of the sample to be classified.
The traditional KNN algorithm has been referred to as an example-based learning classification algorithm. By comparing each training sample, users find the text to be classified with the most similar text. Finally, the text that contains the greatest number of similar categories is selected and classified as category text. The related mathematical expression is as follows: where is the feature vector, sim( , ) corresponds to similarity, and ( , ) denotes the classification properties. If belongs to , then the value of the function is 1; otherwise, the value is 0. Herein, we used this kind of thinking process for classification. In distributing the matrix sample items across the class space, we applied Euclidean distance as a distribution rule as follows: If = { 1 , 2 , . . . , }, then each area involves a clustering center. The aforementioned methods were adopted to establish a suitable search memory database HM as follows: Step 5. When all the optional data maximum and minimum values were less than the threshold , the loop was terminated. Otherwise, Steps 2 to 4 were repeated to establish the appropriate database matrix of harmony. Herein, we considered up = 1 − low , 0.5 < low < 1. Matrix HM was established by using the new harmony matrix of rough set theory. By using rough set theory to establish a harmony matrix principle instead of a random matrix, we avoided the poor robustness and randomness of the HS algorithm. Probability PAR Adjustment. A study on the HS algorithm revealed that probability PAR tuning and volume BW are set randomly or by experience. In such case, no change in the convergence process is achieved. In fact, the effect of these two parameters on the convergence of the algorithm is relatively large, particularly in the latter part of the run. The original HS algorithm is not concerned with this aspect; it is not conducive for a fast algorithm that converges to the global optimum. In this study, the PAR and BW parameters in the original HS algorithm were improved to avoid falling into the local optimum. In the HS algorithm, adjusting the probability PAR is also an important component. In the literature [10], the value of a small PAR has been shown to enhance the local search ability of the algorithm. By contrast, the value of a larger PAR is beneficial for adjusting the search area. The expression is shown as follows: where is the iteration number and is the current number of iterations. In this study, the global search algorithm was improved by introducing a feedback mechanism and moving a step length. The number of iterations was also updated. To update the probability of the harmony memory database and step length, we adopted the following expression: The times moving steps were expressed as follows: BioMed Research International 5 By improving the HS algorithm using dynamic tone control, adjustable probability PAR values and bandwidth BW were attained, overcoming the shortcomings in probability PAR value and bandwidth generated by the fixed tone control in the basic HS algorithm. Compared with other algorithms, whether on test function or vector search solutions, the enhanced HS algorithm exhibited a better performance. Termination Conditions. At the maximum or minimum harmony database values less than the threshold , the loop was terminated. Otherwise, the original HS algorithm was repeated from Steps 2 to 4. Fuzzy Clustering Segmentation The FCM clustering algorithm was proposed using fuzzy set theory. The FCM uses fuzzy set theory for classification. 
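Returning briefly to the parameter adjustment described above before the fuzzy clustering section continues: the dynamic PAR and BW expressions are garbled in this extraction, so the sketch below uses the schedules commonly adopted for improved harmony search, in which PAR grows linearly with the iteration number (small early, for local search; larger later, to widen the search area) and the bandwidth/step length decays exponentially. These functional forms are an assumption on my part; the paper's exact expressions may differ.

```python
import numpy as np

def par_schedule(t, NI, PAR_min=0.35, PAR_max=0.99):
    """Pitch adjusting rate grows linearly from PAR_min to PAR_max over NI iterations."""
    return PAR_min + (PAR_max - PAR_min) * t / NI

def bw_schedule(t, NI, BW_min=1e-4, BW_max=1.0):
    """Bandwidth (moving step length) decays exponentially so late iterations
    make progressively finer pitch adjustments."""
    return BW_max * np.exp(np.log(BW_min / BW_max) * t / NI)

# Inside the improvisation loop, the fixed PAR and BW of the basic algorithm are
# simply replaced by the values for the current iteration t:
#   PAR = par_schedule(t, NI); BW = bw_schedule(t, NI)
for t in (0, 500, 1000):
    print(t, round(par_schedule(t, 1000), 3), round(bw_schedule(t, 1000), 5))
```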
Data under a certain degree of categorization is divided into various types, and cluster centers are calculated in accordance with all the updated data objects of each category. This ambiguity makes the classification process of the FCM algorithm better reflect the actual data distribution, particularly for the treatment of overlap between all categories. FCM clustering image segmentation treats pixels in an image as a cluster sample and the entire diagram as a sample set; each pixel feature vector is extracted from the image and regarded as the sample; then the pixels in the feature space are clustered. In essence, the pixels with similar characteristics are grouped in an aggregate class, whereas the pixels with dissimilar features are distributed into different classes. Finally, each pixel is completely tagged to image segmentation. In the fuzzy means clustering algorithm (FCM), the initial value setting is a more important direct effect of segmentation speed, accuracy, and effectiveness. Before starting, the cluster number must be given first. However, in the absence of human intervention and prior knowledge of the image, such as in an automated system, determining the cluster number is a difficult task. Therefore, values based on image segmentation problems are difficult to determine under fuzzy clustering. In traditional FCM, the initial value is random. Thus, the randomness of the algorithm is high and a local optimal solution is attained. In this study, the initial value is regarded as the number of optimal solutions obtained by the HS algorithm; the algorithm can achieve a favorable result by avoiding the local optimal solution. The MRI brain image segmentation effect attained by the improved algorithm is better than that achieved through the traditional FCM. A previous study [17] promoted the objective function of the FCM clustering algorithm; the related expression is as follows: where = [ ] * ∈ ℎ , is the fuzzy index, and is the distance between the clustering center and clustering objects. In this study, the Euclidean metric distance was adopted to compute the gray difference between any point and the cluster center. The Euclidean metric distance can be calculated with minimum steps. The minimum value refers to the direction of clustering in ∑ =1 = 1 under the condition of the constraint ( , ). By using the Lagrangian approximation solution, the degree of membership and cluster center under the extreme value are calculated as follows: Equations (9) and (11) were utilized by continuous iterative optimized clustering. Each iteration was adopted to calculate the membership degree matrix and cluster center until convergence was reached. The detailed steps for FCM calculation are as follows: (1) The optimal value of was obtained using the improved HS algorithm and rough set theory as the initial value for the FCM algorithm. (3) On the basis of (10), the membership degree matrix ( ) = [ ( ) ] * was updated, where denotes the iteration number. (5) The number of iterations or error parameter when < or | − | > was determined. Then, Steps 3 to 5 were repeated until the loop was terminated. Experiment In a simulation experiment, the improved harmonic search algorithm and the original harmony algorithm were used to obtain the optimal, worst, and average values for MRI brain images 1-4 (MRI1-4). In the data index, all values of the improved HS algorithm were superior to those of the original HS algorithm. The improved HS algorithm also obtained a better optimal solution. 
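Before continuing with the experimental results, the FCM updates described above (Equations (9)-(11) in the original, which are garbled in this extraction) can be written out explicitly: memberships follow the standard inverse-distance rule with fuzzifier m, and cluster centres are membership-weighted means. The sketch below is a generic FCM implementation under those standard formulas, seeded with externally supplied initial centres in the spirit of the proposed method; the data and centre values are illustrative assumptions.

```python
import numpy as np

def fcm(X, centres, m=2.0, max_iter=100, eps=1e-5):
    """Standard fuzzy c-means, starting from supplied initial centres
    (e.g. the optimal harmony returned by the improved HS algorithm)."""
    C = np.asarray(centres, float)            # (c, d) initial cluster centres
    X = np.asarray(X, float)                  # (n, d) samples (e.g. pixel grey levels)
    for _ in range(max_iter):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12  # (n, c)
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        # Centre update: membership-weighted mean of the samples (weights u^m)
        W = U ** m
        C_new = (W.T @ X) / W.sum(axis=0)[:, None]
        if np.linalg.norm(C_new - C) < eps:   # terminate when centres stop moving
            C = C_new
            break
        C = C_new
    return U, C

# Toy usage: grey-level pixels, c = 3 clusters, HS-style initial centres.
pixels = np.random.default_rng(1).integers(0, 256, (5000, 1))
U, C = fcm(pixels, centres=[[40.0], [120.0], [200.0]])
labels = U.argmax(axis=1)                      # hard segmentation labels
print(np.sort(C.ravel()), np.bincount(labels))
```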
The different values were computed using Euclidean distance. The quantitative units were expressed to 10 5 . In the experimental data (Table 1), a smaller optimal value indicates a nearer distance to the clustering center and a more accurate selection of the cluster center. As the average value approaches the optimal value, the more optimal condition of the cluster center is achieved. More precise cluster centers attain better segmentation effects. The coefficient segmentation function pc and entropy segmentation function pe were utilized to qualitative analyze the experimental results [23,24]. The related expressions are as follows: In the simulation experiments, the improved search algorithm and fuzzy clustering segmentation were compared with the original fuzzy clustering (FCM) segmentation algorithm. The experimental results are shown in Table 2. The value of the coefficient segmentation function pc of the improved algorithm was greater than that of the FCM algorithm (Table 2). Conversely, pe values were lower in the improved algorithm than in the FCM algorithm. For image segmentation, a high pc or a low pe indicates perfect segmentation effects. Consequently, the segmentation effect of the improved algorithm is better than that of the FCM algorithm. The data of the FCM algorithm shown in Table 2 reveal that the values of pc and pe are closed. This result can be explained by the fuzzy clustering algorithm in images MRI1 and MRI2, which involved 20 and 23 iteration times into the local optimal solution, respectively. Moreover, in 21 and 23 iteration times, MRI3 and MRI4 fell into the local optimal solution. Thus, the existence of the local optimal solution rendered the FCM algorithm not ideal for MRI image segmentation. By using the improved algorithm, MRI1-4 brain image segmentation obtained different initial values of . The differences in initial value also changed the partitioning membership. The final segmentation results for MRI1 and MRI2 are shown in Figure 2. When the improved algorithm was used to determine the partition = 3, the original FCM algorithm was set to = 3. Meanwhile, the final segmentation results for MRI3 and MRI4 are shown in Figure 3. When the improved algorithm was used to determine the partition = 4, the original FCM algorithm was also set to = 4. Figure 1 displays the experimental data for images MRI1-4. Meanwhile, Figure 2 shows the segmentation results for images MRI1 and MRI2. The membership is associated with the initial value = 3. The first and second rows display the segmentation effects. The first row shows the experimental results for the improved HS algorithm and FCM. The second row shows the experimental results obtained using the original FCM algorithm. The segmentation results reveal that the fuzzy clustering method generated an oversegmentation phenomenon. For the actual MRI brain image segmentation effect, the algorithm proposed in this study performed better than the original FCM algorithm. In Figure 3, the segmentation effect was affected by membership; the membership degree was associated with the initial = 4. The first row shows the experimental results of the improved HS algorithm and FCM. The second row shows the experimental results obtained using the original FCM algorithm. The segmentation results reveal that the proposed MRI brain image segmentation effect obtained using the improved algorithm is better than that of the fuzzy clustering algorithm. 
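The coefficient segmentation function pc and entropy segmentation function pe used above are not rendered in this extraction; they are presumably the standard partition coefficient and partition entropy validity indices, which is what the short sketch below computes from an FCM membership matrix. A higher pc and a lower pe indicate a crisper, better-defined partition, matching how the results are interpreted above.

```python
import numpy as np

def partition_coefficient(U):
    """pc = (1/n) * sum over samples and clusters of u^2; 1/c <= pc <= 1, larger is better."""
    n = U.shape[0]
    return float(np.sum(U ** 2) / n)

def partition_entropy(U, eps=1e-12):
    """pe = -(1/n) * sum over samples and clusters of u * log(u); smaller is better."""
    n = U.shape[0]
    return float(-np.sum(U * np.log(U + eps)) / n)

# U is the (n samples x c clusters) membership matrix from the FCM sketch above.
U = np.array([[0.90, 0.05, 0.05],
              [0.20, 0.70, 0.10],
              [0.05, 0.05, 0.90]])
print(partition_coefficient(U), partition_entropy(U))
```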
In the fuzzy clustering algorithm, the uncertainty of the randomly chosen initial values drove the algorithm into a local optimum, which degraded the segmentation. Conclusion In this study, MRI brain image segmentation was achieved using the HS algorithm combined with the fuzzy clustering algorithm. The HS algorithm is widely used, but because of its drawbacks it easily falls into local optima. This study therefore proposed an improved HS algorithm for MRI brain segmentation: rough set theory was adopted to construct a better harmony memory database and to adapt the key probability parameters, promoting convergence of the harmony search. Brain images were then segmented using the fuzzy clustering algorithm. Because the initial value of the fuzzy clustering algorithm is normally random, which affects the segmentation, the optimal harmony obtained by the improved HS algorithm was used as the initial value instead. This removes the uncertainty in the initial value and prevents the algorithm from falling into a local optimum. The simulation experiments showed that the proposed method produces better segmentation effects than the original fuzzy clustering algorithm.
5,649
2016-06-15T00:00:00.000
[ "Computer Science" ]
Hints of dark photon dark matter from observations and hydrodynamical simulations of the low-redshift Lyman- α forest Recent work has suggested that an additional < ∼ 6 . 9 eV per baryon of heating in the intergalactic medium is needed to reconcile hydrodynamical simulations with Lyman- α forest absorption line widths at redshift z (cid:39) 0 . 1. Resonant conversion of dark photon dark matter into low frequency photons is a viable source of such heating. We perform the first hydrodynamical simulations including dark photon heating and show that dark photons with mass m A (cid:48) ∼ 8 × 10 − 14 eV c − 2 and kinetic mixing (cid:15) ∼ 5 × 10 − 15 can alleviate the heating excess. A prediction of this model is a non-standard thermal history for underdense gas at z > ∼ 3. In this Supplemental Material, we show results from a wider range of our hydrodynamical simulations that use different values of the A (cid:48) mass and kinetic mixing. In contrast to Fig. 2 in the main text, rather than show the best fit model, we show results from the individual simulations that span a range of A (cid:48) masses and kinetic mixings. As already discussed in the main text, this demonstrates the effectiveness of the low-redshift Ly- α forest data for obtaining best fit parameters for A (cid:48) , rather than just individuating a preferred region of the parameter space. Introduction.-The Lyman-α forest, a series of absorption features that arise from the distribution of intergalactic gas, is a powerful tool for investigating the properties of dark matter (DM). The absorption is typically observed in the spectra of distant, luminous quasars, where resonant scattering along the line of sight occurs as photons redshift into the rest frame Ly-α (n = 1 → 2) transition of intervening neutral hydrogen [1]. The 1D power spectrum of the Ly-α forest transmitted flux is an excellent tracer of the underlying DM distribution on scales ∼ 0.5-50 comoving Mpc, and has been routinely used to place tight constraints on warm dark matter [2][3][4][5][6], fuzzy DM [7][8][9][10], as well as primordial black hole (PBH) DM [11]. The physics behind Ly-α forest temperature measurements is straightforward. The hydrogen atoms in the IGM will in general not be at rest, but will undergo thermal motion described by a Maxwellian velocity distribution, leading to a line width ∆ν = ν α (b 2 th + b 2 nth ) 1/2 /c, where ν α is the resonance line frequency, b th = (2k B T /m H ) 1/2 is the Doppler parameter due to thermal motion, m H is the hydrogen atom mass and k B is Boltzmann's constant. Here, b nth accounts for any additional, non-thermal line broadening, including, smoothing of the absorbing structures by gas pressure, smallscale turbulence, peculiar motion or expansion with the Hubble flow. b nth is in general non-zero for all but the narrowest Ly-α lines. Hence, given a forward model for b nth , typically provided by cosmological hydrodynamical simulations, a determination of the Ly-α spectral widtheither by directly measuring Doppler parameters or using another statistic that is sensitive to the thermal broadening kernel -allows a measurement of the IGM temperature [33][34][35][36][37][38][39][40][41]. Thus far, all the calorimetric constraints on new physics from the Ly-α forest have relied on observations at redshifts z > ∼ 2. In this Letter, we pioneer the use of low-redshift (z 0.1) Ly-α forest observations for this purpose. 
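As a quick numerical illustration of the line-width relation quoted above (Δν = ν_α (b_th² + b_nth²)^{1/2}/c with b_th = (2 k_B T/m_H)^{1/2}), the snippet below converts an IGM temperature into a thermal Doppler parameter and a total Ly-α line width. The temperature and non-thermal broadening values are arbitrary examples, not measurements from the paper.

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant [J/K]
M_H = 1.6735575e-27     # hydrogen atom mass [kg]
C   = 2.99792458e8      # speed of light [m/s]
NU_ALPHA = 2.466e15     # rest-frame Ly-alpha frequency [Hz]

def b_thermal(T):
    """Thermal Doppler parameter b_th = sqrt(2 k_B T / m_H), returned in km/s."""
    return np.sqrt(2.0 * K_B * T / M_H) / 1e3

def line_width(T, b_nth=10.0):
    """Total line width Delta_nu = nu_alpha * sqrt(b_th^2 + b_nth^2) / c [Hz],
    with b_nth an assumed non-thermal broadening in km/s."""
    b_tot = np.hypot(b_thermal(T), b_nth) * 1e3   # back to m/s
    return NU_ALPHA * b_tot / C

for T in (5e3, 1e4, 2e4):                         # example IGM temperatures [K]
    print(f"T = {T:8.0f} K  b_th = {b_thermal(T):5.1f} km/s  dnu = {line_width(T):.3e} Hz")
```

At T = 10^4 K this gives b_th of roughly 13 km/s, the familiar thermal scale against which the broader observed COS line widths are compared.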
Three independent studies [42][43][44] have now highlighted a discrepancy between the widths of Lyα forest absorption lines measured from Hubble Space Telescope/Cosmic Origins Spectrograph (COS) data at z 0.1 [45,46] and the predictions from detailed cosmological hydrodynamical simulations. The simulated line widths are always too narrow compared to the observations. Appealing to enhanced photoelectric heating alone is not a viable solution, as the integrated UV background would require an unphysically hard spectrum [47]. This implies there is a non-canonical heating process in the IGM neglected in the simulations, such that an additional < ∼ 6.9 eV per baryon is deposited into typical Lyα forest absorbers by z = 0.1, and/or the non-thermal line broadening has been underestimated [47]. Additional turbulence below the simulation resolution limit is possible [44,48,49], although current models have line widths that become narrower, rather than broader, with increasing resolution. Here, we instead explore the role of additional heating, and suggest that ultralight dark photon DM with a small mixing with the Standard Model (SM) photon provides an intriguing solution to the line width discrepancy. Dark photons can undergo resonant conversions into SM photons, which are subsequently absorbed by the IGM. The condition for resonant conversion is set by the dark photon mass and the local electron density, allowing dark photons to naturally explain the low-redshift Ly-α forest data without altering the established agreement with IGM temperature measurements at z > 2. Dark photon dark matter.-The dark photon A is a minimal and well-motivated extension of the Standard Model (SM) which kinetically mixes with the ordinary photon, γ [50][51][52][53][54][55][56][57]. Ultralight dark photons are also an attractive cold DM candidate, with several earlyuniverse production mechanisms that have been studied extensively in the literature that are capable of producing A non-relativistically [58][59][60][61][62][63][64][65][66][67][68][69]; its perturbations are therefore expected to be well-described by the standard ΛCDM matter power spectrum. The photon-dark photon Lagrangian reads where is the dimensionless kinetic mixing parameter, and m A is the A mass. F and F are the field strength tensors for γ and A respectively. The presence of kinetic mixing, and the resulting mismatch between the interaction and propagation eigenstates, induce oscillations of dark photons into photons, A → γ, and vice-versa. In the presence of a plasma, ordinary photons acquire an effective mass, m γ , given primarily by the plasma frequency of the medium [31,70,71]. At every point in space x and redshift z, the effective plasma mass is given by where n e is the free-electron number density. At points in space where m 2 γ (z, x) = m 2 A , the probability of conversion is resonantly enhanced [30,31,70,71]. This process can be understood as a two-level quantum system with an energy difference that is initially well-separated, but with one state having its energy evolve with time. Whenever the energy difference passes through zero, a nonadiabatic transition between the two states occurs, with transition probability described by the Landau-Zener formula [31,70,72,73]. If A constitutes the DM, the probability of conversion of A into photons is [29][30][31] where t res is the time at which the resonant condition is met. Here, h P ≡ 2πh is Planck's constant. 
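The resonance condition m_γ(z, x) = m_A' can be made concrete with a short estimate: the effective photon mass is ħ times the plasma frequency, ω_p = (n_e e²/ε₀ m_e)^{1/2}, so it tracks the free-electron density, which for gas at fixed overdensity scales as (1+z)³. The sketch below evaluates m_γ and the resonance redshift assuming a fully ionized H+He IGM and the cosmological parameters quoted later for the simulations. It is a rough cross-check rather than the simulation implementation, but it reproduces the resonance redshifts quoted in the text (z_res near 1.9 at the mean density for m_A' of about 8.4 x 10^-14 eV c^-2, and z_res of about 0.28 at Δ_b = 10 for m_A' = 8 x 10^-14 eV c^-2).

```python
import numpy as np

# Physical constants (SI)
E_CHARGE = 1.602176634e-19    # C
EPS0     = 8.8541878128e-12   # F/m
M_E      = 9.1093837015e-31   # kg
M_P      = 1.67262192e-27     # kg (proton mass, ~ mass per baryon)
HBAR     = 1.054571817e-34    # J s
J_PER_EV = 1.602176634e-19

# Assumed cosmology (values as quoted for the simulations): Omega_b h^2 ~ 0.0222, Y_p = 0.24
OMEGA_B_H2  = 0.0482 * 0.678**2
Y_P         = 0.24
RHO_CRIT_H2 = 1.8788e-26      # critical density [kg/m^3] per h^2

def n_e(z, delta_b=1.0):
    """Free-electron density [m^-3] at overdensity delta_b, fully ionized H+He IGM."""
    rho_b = OMEGA_B_H2 * RHO_CRIT_H2 * (1.0 + z) ** 3 * delta_b
    return rho_b * (1.0 - Y_P / 2.0) / M_P

def m_gamma_eV(z, delta_b=1.0):
    """Effective photon (plasma) mass hbar * omega_p, in eV."""
    omega_p = np.sqrt(n_e(z, delta_b) * E_CHARGE**2 / (EPS0 * M_E))
    return HBAR * omega_p / J_PER_EV

def z_resonance(m_Ap_eV, delta_b=1.0):
    """Redshift at which m_gamma(z) = m_A' for gas at fixed overdensity.
    m_gamma scales as (1+z)^(3/2), so invert analytically from the z = 0 value."""
    return (m_Ap_eV / m_gamma_eV(0.0, delta_b)) ** (2.0 / 3.0) - 1.0

print(f"m_gamma at z = 2, mean density : {m_gamma_eV(2.0):.2e} eV")
print(f"z_res for m_A' = 8.4e-14 eV, Delta_b = 1 : {z_resonance(8.4e-14):.2f}")
print(f"z_res for m_A' = 8.0e-14 eV, Delta_b = 10: {z_resonance(8.0e-14, 10.0):.2f}")
```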
For m A between 10 −15 -10 −12 eV c −2 , A DM converts into low-frequency photons with frequency ν = m A c 2 /h P , which rapidly undergo free-free absorption in the ionized IGM. The mean free path, λ ff , is given approximately by [74,75] where α 1/137 is the fine-structure constant, σ T is the Thomson cross section, T is the temperature of the IGM, and g ff (ν, T ) is the Gaunt factor, which only has a mild dependence on ν and T . Numerically, taking g ff = 15.7 [76], we find the local overdensity of baryons at which A conversion occurs. We assume that overdensities in both free electrons and baryons are equal, which is true for the IGM after reionization is complete. Since λ ff is much smaller than the typical size of a Ly-α forest absorber (∼ 100 kpc), we can safely assume that the absorption of these photons occurs locally in our simulations. Since A → γ conversions only occur when the resonance condition is met, at each point in time, heating only takes place in regions of specific ∆ b , such that m 2 γ (z, x) = m 2 A is satisfied. Moreover, the probability of conversion is governed by the rate at which ∆ b is evolving in each of those regions. In regions where conversions are happening, the energy deposited per baryon E A →γ due to such a resonance is simply where n b is the local number density of baryons, and ρ A is the mass density of A DM. We can estimate E A →γ by assuming that the number density of baryons everywhere evolves only through adiabatic expansion, i.e. n e ∝ (1 + z) 3 is the Hubble parameter, and the probability of conversion simplifies to P A →γ = π 2 m A c 2 /(3H(z res )h), with z res indicating the redshift at which the resonance condition is met. For m A ∼ 8 × 10 −14 eV c −2 , we expect baryons at the mean cosmological density to experience resonant conversion at z ∼ 2. Further assuming matter domination and ∆ dm = ∆ b , we obtain approximately where −14 ≡ /10 −14 , and m −13 ≡ m A /(10 −13 eV c −2 ). We see that even a mixing as small as −14 ∼ 1 leads to the absorption of enough photons to heat the IGM by several eV per baryon [29][30][31][32]. The resonant nature of A heating makes it an attractive model for reconciling low and high redshift Ly-α forest data. The smaller the value of m A , the later the resonance condition is met at fixed ∆ b . For a sufficiently light A , most of the resonant conversions occur at z < ∼ 2, broadening the absorption line widths at low redshifts, without depositing a significant amount of heat at z > 2. Furthermore, the resonance condition is set by during matter domination, giving approximately the density dependence required by observations [47]. Simulations.-To fully test the viability of heating from A dark matter, we now turn to implementing this model in Ly-α forest simulations for the first time. Cosmological hydrodynamical simulations of the Ly-α forest were performed with a version of P-Gadget-3 [77], modified for the Sherwood simulation project [14]. Following [47], we use a simulation box size of L = 60h −1 cMpc with 2 × 768 3 gas and dark matter particles, giving a gas (dark matter) particle mass of M gas = 6.38 The simulations were started at z = 99, with initial conditions generated on a regular grid using a ΛCDM transfer function generated by CAMB [78]. The cosmological parameters we use are Ω m = 0.308, Ω Λ = 0.692, h = 0.678, Ω b = 0.0482, σ 8 = 0.829 and n = 0.961 [79], with a primordial helium fraction by mass of Y p = 0.24. 
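Putting the expressions above together gives a back-of-the-envelope estimate of the deposited energy: E_{A'→γ} ≈ P_{A'→γ} ρ_{A'} c²/n_b with P_{A'→γ} ≈ π ε² m_{A'} c²/(3 ħ H(z_res)), which is the form the garbled probability expression above appears to reduce to when n_e evolves only through expansion. The snippet below evaluates this for parameters close to the best-fit values reported later in the paper; it is an order-of-magnitude cross-check, not the particle-by-particle heat injection used in the simulations.

```python
import numpy as np

# Constants and assumed cosmology (values quoted for the simulations above)
HBAR  = 1.054571817e-34          # J s
EV    = 1.602176634e-19          # J per eV
MPC   = 3.0857e22                # m
M_NUC = 938.9e6                  # ~nucleon rest energy [eV], i.e. rho_b c^2 per baryon
OMEGA_M, OMEGA_L, OMEGA_B = 0.308, 0.692, 0.0482
H0 = 0.678 * 100 * 1e3 / MPC     # Hubble constant [1/s]

def hubble(z):
    """H(z) for flat LCDM [1/s]."""
    return H0 * np.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def conversion_probability(m_Ap_eV, eps, z_res):
    """Landau-Zener probability P ~ pi * eps^2 * m_A' c^2 / (3 hbar H(z_res)),
    valid when n_e evolves only through cosmic expansion."""
    return np.pi * eps**2 * (m_Ap_eV * EV) / (3.0 * HBAR * hubble(z_res))

def heat_per_baryon_eV(m_Ap_eV, eps, z_res):
    """E ~ P * rho_A' c^2 / n_b, with rho_A'/rho_b = (Omega_m - Omega_b)/Omega_b."""
    dm_energy_per_baryon = (OMEGA_M - OMEGA_B) / OMEGA_B * M_NUC
    return conversion_probability(m_Ap_eV, eps, z_res) * dm_energy_per_baryon

# Best-fit parameters quoted below (weak z = 2 prior); z_res ~ 0.3 for Delta_b ~ 10 gas
# at this mass (cf. z_res = 0.28 quoted for m_13 = 0.8).
print(f"{heat_per_baryon_eV(8.4e-14, 4.6e-15, 0.33):.1f} eV per baryon")
# -> roughly 5-6 eV, consistent with the ~5.3 eV quoted for Delta_b = 10 gas by z = 0.1.
```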
All gas particles with overdensity ∆ b > 10 3 and temperature T < 10 5 K are converted into collisionless star particles, and photoionization and heating by a spatially uniform UV background is included [13]. Mock Ly-α forest spectra were extracted from the simulations and processed to resemble COS observational data following the approach described by [47], where further details and tests of our numerical methodology can be found. In contrast to [47], however, in this work we also implement dark photon heating in our hydrodynamical simulations using Eq. (3). It will be numerically convenient to assume the baryons closely trace the dark matter in the IGM, and indeed, on scales exceeding ∼ 100 kpc (the pressure smoothing scale in the IGM), this is a good approximation. Defining the overdensity of a given cosmological species as ∆ i = ρ i / ρ i , we thus assume ∆ dm = z) is determined for each gas particle at position x and redshift z in our simulations. For each gas particle, we then set For a fixed value of m A , we may therefore determine where and when a resonant conversion happens for each gas particle (i.e. when the condition m 2 γ ( x, z res ( x)) = m 2 is met). The converted energy per baryon E A →γ is then calculated using Eq. (3), and directly injected into the gas particles. Results.-We first demonstrate the effect that dark photons have on the IGM temperature by considering the redshift evolution of gas parcels at fixed overdensity, ∆ b , heated by both the UV background and E A →γ . We adapt the non-equilibrium ionization and heating calculations performed by [47] for this purpose. In the upper panels of Fig. 1 we show the thermal history of gas at the mean density, ∆ b = 1. The solid gray curves correspond to UV heating only (i.e. no A ), following the synthesis model presented in [13]. The data points are IGM temperature measurements at the mean density derived from the Ly-α forest [38,41]. All other curves in Fig. 1 include A heating. These exhibit a sharp rise in the gas temperature when the resonance condition is met. In the top left panel, we have fixed the A mass to m −13 = 0.8 and varied the kinetic mixing parameter, . In this case, the timing of the energy injection does not change, but the amplitude of the temperature peak increases with . The top right panel instead shows the results for a fixed Contours show ∆χ 2 = χ 2 − χ 2 min = 1 and 4, corresponding to the projection of the 68% and 95% intervals for the individual parameters m A and . The dashed red curves show the results for a tight prior on the z = 2 IGM temperature at mean density from [41]. The solid blue curves show the effect of a weaker prior, where the 1σ uncertainty from [41] has been increased by a factor of four for consistency with the independent temperature measurement from [38]. Lower panels: The corresponding best-fit models compared to the COS observational data. The solid gray curve shows the UV heating model with no dark photon heating [13]. kinetic mixing, −14 = 0.5, for different A masses. In this case, smaller A masses result in later injection of heat into the gas parcel. From this, we may already conclude that A masses with m −13 > ∼ 0.9 and −14 = 0.5 are excluded by the data from [38,41] at 2 < z < 4. This is consistent with the bounds derived analytically in [30] using earlier measurements of the mean density IGM temperature (see their Fig. 9, as well as [29,32]). The lower panel of Fig. 
1 shows the gas thermal evolution for a fixed pair of parameters [m −13 , −14 ] = [0.8, 0.5], but now for varying gas overdensities. For the adopted value of the A mass, m −13 = 0.8, the resonance at the mean background density occurs at z res = 1.8 (black solid curve). Later energy injection occurs for overdensities (log 10 ∆ b > 0), while earlier energy injection occurs for underdensities (log 10 ∆ b < 0). This density dependence allows low-redshift Ly-α forest observa-tions to place a bound on A heating that is complementary to constraints obtained at z > 2. The Ly-α forest at z = 0.1 is sensitive to gas at overdensities of ∆ b 10 [47], whereas at z > 2, it probes gas close to the mean density [36]. Hence, if appealing to A heating as a possible resolution to the COS line width discrepancy, we require m −13 > 0.6 for resonant conversion to occur in gas with ∆ b = 10 by z = 0.1; smaller masses would inject the energy too late. Coupled with the upper bound from data at 2 < z < 4, this implies 0.6 < ∼ m −13 < ∼ 0.9 for all kinetic mixing parameters that heat the IGM by more than a few thousand degrees. Note also that heating of the underdense IGM is expected at z > 2, even if the mean density IGM temperature constraints at 2 < z < 4 are fully satisfied. Intriguingly, there are indeed hints from the distribution of the Ly-α forest transmitted flux that underdense gas in the IGM at z = 3 is hotter than expected in canonical UV photo-heating models [80,81]. We intend to explore this further in future work. We now turn to obtaining a best-fit dark photon dark matter model from the low-redshift Ly-α forest assuming maximal A heating, i.e. with no other sources of noncanonical heating. Using the recipe outlined in the pre- Voigt profiles were fit to the mock Ly-α spectra, giving the Ly-α line column densities, N HI , and Doppler parameters, b. The b-distribution and column density distribution function (CDDF) were then constructed for each simulation following [47], and the model grid was used to perform a χ 2 minimisation on the covariance matrix derived from the COS observations. The resulting best fit A parameters are presented in Fig. 2. The ∆χ 2 = χ 2 − χ 2 min contours corresponding to the 68% and 95% intervals for individual parameters are displayed in the upper panel, while the lower panels show the best-fit models (corresponding to the crosses in the upper panel). We assume two different priors on the temperature of the IGM at z = 2: a tight prior of T 0,z=2 = 9500 ± 1393 K based on [41] (red dashed contours), and a weaker prior where the 1σ uncertainty on T 0,z=2 has been increased by a factor of four for consistency with the independent measurement of T 0,z=2 = 13721 +1694 −2152 K from [38] (blue solid contours). As expected from Fig. 1, a weaker prior has the effect of increasing the best-fit A mass. For the weak (tight) z = 2 prior, the best-fit model has m −13 = 0.84±0.06 (m −13 = 0.80 ± 0.04) and −14 = 0.46 +0.05 −0.04 ( −14 = 0.46 +0.06 −0.05 ) (1σ) for χ 2 min /ν = 1.13 (χ 2 min /ν = 1.37) and ν = 16 degrees of freedom, with a p-value of p = 0.32 (p = 0.14). For comparison, a model with UV photon heating only [13], shown by the gray curves in the lower panels of Fig. 2, has χ 2 min /ν = 3.50 and p = 2.5 × 10 −6 assuming the weak z = 2 thermal prior. The addition of A heating considerably improves the fit and leads to very good agreement with the COS data. 
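The goodness-of-fit figures quoted above can be checked directly from the reduced χ² values and ν = 16 degrees of freedom; the short snippet below uses scipy's χ² survival function and recovers p-values close to those quoted in the text. Small differences reflect rounding of the reduced χ² figures.

```python
from scipy.stats import chi2

nu = 16                                  # degrees of freedom quoted in the text
for label, red_chi2 in [("weak prior,  A' heating", 1.13),
                        ("tight prior, A' heating", 1.37),
                        ("UV heating only",         3.50)]:
    p = chi2.sf(red_chi2 * nu, df=nu)    # survival function = 1 - CDF
    print(f"{label:26s} chi2/nu = {red_chi2:4.2f}  p = {p:.2g}")
```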
For the weak z = 2 prior, the bestfit A parameters deposit an extra 5.3 eV per baryon into gas with ∆ b = 10 by z = 0.1, consistent with the limit of < ∼ 6.9 eV per baryon obtained by [47]. Conclusions.-In this Letter we have pioneered the use of the Ly-α forest at redshift z 0.1 as a calorimeter for investigating properties of the dark sector. Specifically, we studied a model of ultralight dark photons, A , that can naturally alleviate the tension [42][43][44]47] between the (too narrow) Ly-α absorption line widths predicted in hydrodynamical simulations compared to observational data at z = 0.1. Assuming a maximal contribution from A heating and a thermal prior of T 0 = 9500 ± 5572 K at z = 2 [38,41], our best-fit model has A mass m A = 8.4 ± 0.6 × 10 −14 eV c −2 and kinetic mixing parameter = 4.6 +0.5 −0.4 × 10 −15 (1σ). Although astrophysical sources, such as turbulent broadening or other non-canonical heating processes may also explain the line width discrepancy, our study is a first clear indication that DM energy injection can be a compelling alternative. We also highlight that our best-fit A parameters will have testable consequences for the temperature of the underdense IGM at z = 3, where there are already hints of missing heating in Ly-α forest simulations [80,81]. Finally, we note that dark photons in our mass range of interest can be produced around spinning black holes (BHs) through the superradiance instability. The superradiant cloud can affect the spin-mass distribution of BHs or produce gravitational waves, giving a promising way to look for A with m A ∼ 8 × 10 −14 eV c −2 . At the moment, BH measurements appear to be in tension with this A mass [82][83][84], but are currently subject to significant uncertainties [84,85] and model dependence [82,[86][87][88]. Sharpening our understanding of the superradiance phenomenon, as well as improving the experimental searches, will be pivotal in testing our model. Acknowledgments In this Supplemental Material, we show results from a wider range of our hydrodynamical simulations that use different values of the A mass and kinetic mixing. In contrast to Fig. 2 in the main text, rather than show the best fit model, we show results from the individual simulations that span a range of A masses and kinetic mixings. As already discussed in the main text, this demonstrates the effectiveness of the low-redshift Ly-α forest data for obtaining best fit parameters for A , rather than just individuating a preferred region of the parameter space. [42]. Solid gray curves show a simulation with UV photon heating only [13]. Note that the dark photon heating impacts both the Ly-α line widths and the shape of the CDDF, making a joint fit particularly powerful. Fig. S1 shows the effects of dark photon energy injection on the z = 0.1 Ly-α forest b-parameter distribution (left panel) and column density distribution function (CDDF, right panel) predicted by the cosmological hydrodynamical simulations. Note that, following [42,47], the neutral hydrogen densities in the models have been linearly rescaled to approximately match the amplitude of the CDDF at log 10 (N HI /cm −2 ) ∼ 13.5 (corresponding to a baryon overdensity ∆ b ∼ 10). The only difference here is we consider Doppler parameters in the range 20 km s −1 ≤ b ≤ 80 km s −1 (cf. 20 km s −1 ≤ b ≤ 90 km s −1 in [47]); the largest Doppler parameters are highly suprathermal and are insensitive to A heating. From the b-parameter distribution in the left panel of Fig. 
S1, it is clear that A with mass m −13 = 0.6 does not give substantially better agreement with the data compared to the case without A heating (gray line); both models overshoot the data for b < 25 km s −1 . In this particular case, the A heating occurs too late to affect the typical gas densities probed by the Ly-α forest at z = 0.1 (for m −13 = 0.6 the resonance redshift is z res = 0.06 for ∆ b = 10). Notice also that some of the A parameters excluded by the b-parameter distribution provide a reasonable fit to CDDF, while, on the other hand, some parameters that yield a reasonable fit to b-parameter distribution data are inconsistent with the CDDF data (see for example the orange dot-dashed curve in Fig. S1). In the particular case of m −13 = 0.8 (corresponding to z res = 0.28 for ∆ b = 10), gas with log 10 (N HI /cm −2 ) ∼ 13.5 (∆ b ∼ 10) is hotter in the models with larger kinetic mixing. However, gas at ∆ b > ∼ 16 (i.e. densities where z res < 0.1 for m −13 = 0.8) corresponding to the higher column density Ly-α absorbers) is not heated because the resonance threshold has not been crossed. This has the effect of flattening the gradient of the CDDF; if the lower density gas is hotter, it is also more ionized (the case-A recombination rate for hydrogen, α A ∝ T −0.72 , where T is the gas temperature). This results in relatively more of the stronger lines with log 10 (N HI /cm −2 ) > 13.8 as the kinetic mixing is increased. This demonstrates the interplay between these two observables which, when combined, provide a more powerful test of A heating than either measurement alone.
5,579
2022-01-01T00:00:00.000
[ "Physics", "Education" ]
Housework or vigilance? Bilbies alter their burrowing activity under threat of predation by feral cats Abstract Behavioral adjustments to predation risk not only impose costs on prey species themselves but can also have cascading impacts on whole ecosystems. The greater bilby (Macrotis lagotis) is an important ecosystem engineer, modifying the physical environment through their digging activity, and supporting a diverse range of sympatric species that use its burrows for refuge and food resources. The bilby has experienced a severe decline over the last 200 years, and the species is now restricted to ~20% of its former distribution. Introduced predators, such as the feral cat (Felis catus), have contributed to this decline. We used camera traps to monitor bilby burrows at four sites in Western Australia, where bilbies were exposed to varying levels of cat predation threat. We investigated the impact of feral cats on bilby behavior at burrows, particularly during highly vulnerable periods when they dig and clear away soil or debris from the burrow entrance as they perform burrow maintenance. There was little evidence that bilbies avoided burrows that were visited by a feral cat; however, bilbies reduced the time spent performing burrow maintenance in the days following a cat visit (P = 0.010). We found the risk posed to bilbies varied over time, with twice the cat activity around full moon compared with dark nights. Given bilby burrows are an important resource in Australian ecosystems, predation by feral cats and the indirect impact of cats on bilby behavior may have substantial ecosystem function implications. INTRODUCTION The relationship between predators and prey is an integral component of any ecosystem.Prey that fails to escape from a lethal predator will die, but even nonlethal injuries or nonconsumptive predator effects can result in reduced fitness (Peacor and Werner 2001;Hammill et al. 2015).Predator-prey interactions are, therefore, a major force driving evolutionary change in behavioral responses to either avoid or escape a predator (Cooper and Blumstein 2015).However, it is not only direct impacts of predators that can shape ecosystems.Even perceived predation risk influences prey species' behavioral responses, including their willingness to undertake social activities, group size, and structure (e.g., Hunter and Skinner 1998), where and when they feed (e.g., Hochman and Kotler 2006), and when to resume feeding after disturbance (e.g., Lima and Bednekoff 1999).The perception of predation therefore has cascading impacts down food chains, influencing whole landscapes (Laundré et al. 2010). Generally, moving prey are more likely to be seen by predators, and therefore increased predation risk can often result in reduced general activity levels (Lima and Bednekoff 1999).Some specific behaviors can also increase vulnerability.First, this may be because the behavior itself takes away time that can otherwise be spent being vigilant.For example, feeding can increase vulnerability, with animals needing to find an optimal balance between feeding and vigilance (Dall et al. 2001;Hochman and Kotler 2006), trading off their own food requirements with their safety risk.Social behavior such as allogrooming is also often balanced with predation risk (e.g., Blanchard et al. 2017).Second, some behavior further compromises the animal's ability to perceive danger.For example, Rattenborg et al. 
(1999) found that a higher proportion of mallard ducks (Anas platyrhynchos) sleeping on the edge of a group (a position in the group that has a higher predation risk) used uni-hemispheric sleep, allowing them to remain vigilant, and oriented the open eye away from the group's center compared to the birds sleeping in the center of the group.Digging and burrow maintenance are similarly likely to compromise an animal's ability to perceive danger as the animal has its head in the burrow (blocking vision and hearing) while the act of digging itself creates noise that would attract attention as well as mask the sounds of an approaching predator (Rabin et al. 2006;Chan et al. 2010).Carrying out these specific behaviors may increase the vulnerability for prey species and therefore their predation risk, resulting in altered activity budgets in the presence of a threat. As the risk of being preyed upon may vary greatly on a seasonal, daily, or even a minute-by-minute basis, perception of predation risk is likely to be influenced by temporal patterns in predator activity.Environmental factors such as lunar illumination influence nocturnal prey and predators' activity patterns (Prugh and Golden 2014), and by extension how prey respond to predation risk (Griffin et al. 2005;Navarro-Castilla and Barja 2014).The predation risk allocation hypothesis (Lima and Bednekoff 1999) predicts that if predators were more successful with increasing lunar illumination, prey species would become lunar phobic and shift activity to less bright lunar phases where possible (Palmer et al. 2017).Another possibility is that prey may shift away from behavior that increases their vulnerability during risky time periods, decreasing their exposure by altering their activity budget. Our study investigates the impacts of feral cats (Felis catus) on the burrowing behavior of the greater bilby (Macrotis lagotis).The bilby is a semi-fossorial, nocturnal mammal.The construction and maintenance of burrows by bilbies, during which they dig and clear away soil or debris from the burrow entrance, contributes to soil processes by increasing the heterogeneity of soil structure and increasing water infiltration, and such digging is important in shaping the ecology of Australian ecosystems (Fleming et al. 2014;Mallen-Cooper et al. 2019).Furthermore, their burrows provide shelter, foraging, and hunting opportunities for a variety of species, including birds, mammals, reptiles, and invertebrates (Hofstede and Dziminski 2017;Dawson et al. 2019).Bilbies therefore play an important role as ecosystem engineers (Jones et al. 1994).However, the bilby has experienced a severe decline in distribution and abundance due, in part, to predation from introduced predators such as the feral cat (Burbidge and McKenzie 1989;Woinarski et al. 2015).Feral cats target their hunting areas around areas of greater prey activity, and bilby burrows are foci for their attention (Moseby and McGregor 2022).Furthermore, many reintroductions of bilby populations are successful only in predator-free sanctuaries (Moseby and O'Donnell 2003;Berris et al. 2020), and in a translocated population exposed to cats, Ross et al. (2019) reported that all known fate bilby mortalities were consistent with cat predation. 
We used burrow maintenance activity as a measure of bilbies' perceived predation risk to feral cats.At a bilby population where feral cats were present, we used camera traps to compare bilby behavior at their burrows to quantify whether bilbies altered their (1) burrow use or burrow maintenance behavior after a visit by a feral cat, and (2) whether there was greater predation risk (likelihood of cat visitation to burrow) with moon phase.Comparing across four sites that had different levels of exposure to cats, we asked (3) whether bilby behavior differed by site in terms of the amount of time performing burrow maintenance between sites, or whether burrow maintenance was performed at different times of the night.Finally, we also asked a methodological question at one of the sites, (4) comparing whether scoring bilby behavior from photos was similar to scoring behavior from videos. Study sites This study was conducted at four sites across Western Australia (Table 1).Feral cats were present at the study site in the West Kimberley region with an overall average of 1 cat detection on camera per 100 trap nights over the monitored period.The other three sites were free from terrestrial mammalian predators.Mt Gibson Wildlife Sanctuary, located ~375 km north-east of Perth, was the largest fenced population studied, with a 7832-ha sanctuary surrounded by a feral predator-proof fence.Barna Mia Native Animal Sanctuary, located ~150 km south-east of Perth, has bilbies housed within two 4-ha feral predator-proof enclosures.The captive population at Kanyana Wildlife Rehabilitation Centre, located approximately 30 km east of Perth, housed each bilby in a small enclosure (~15 m 2 ) on sandy substrate (with a wire mesh floor) with an artificial "burrow" that consists of a wooden "nest" box with a PVC pipe leading out to the entrance. Camera traps At each site, the area was exhaustively searched for burrows in a systematic manner, with cameras placed on as many burrows as could be found, while eight enclosures were chosen at Kanyana based on the ease with which the opening of the artificial burrow could be viewed by the camera.Cameras were mounted on a metal stake (or the enclosure mesh for captive animals) at a height of 0.5 m, at a distance of 0.5-1.5 m from the burrow opening (distance varying depending on the surrounding vegetation).Cameras were aimed oblique to the burrow entrance to ensure animals would pass across the field of detection (rather than move toward it), maximizing detection probability (Meek et al. 2014)."Still" camera traps were set to passive infra-red (PIR) trigger, five rapid-fire images, no quiet period, high sensitivity, with an infrared flash.At Mt Gibson, each bilby burrow had a pair of cameras: one "still" camera trap and a second "video" camera trap set to PIR trigger, high sensitivity, and programmed to take a 15-s video per trigger. Image and video analysis Photos and videos of animals were identified to species or genus level and tagged in the metadata using the application "digiKam" (digikam team 2001-2021).The metadata from the images was extracted using the package "camtrapR" (Niedballa et al. 2016).We counted up the total number of species observed interacting with bilby burrows ("burrow commensals" as defined by Dawson et al. 
2019).Any photos of a cat captured on the burrow cameras was defined as a cat visit, and included 6 visits during daytime and 56 visits during night time.No attempt was made to identify individual animals, but all bilby images in this study were further categorized based on the action of the animal as follows: (1) "Maintenance," where the bilby was actively digging soil away from the burrow entrance or clearing away debris such as branches (Figure 1a); (2) "Vigilance," where the bilby interrupted their activity to stand immobile, bi-pedal or on all fours with their head and ears erect (Figure 1b); or (3) "Interacting," included all other behaviors such as entering or exiting the burrow, passing by the burrow, and inspection of the burrow (Figure 1c,d). We used the proportion of images for each burrow for each night to quantify the proportion of time spent exhibiting a particular behavior.Videos were scored for these same categories (as above) as state events using "Behavioral Observation Research Interactive Software" (BORIS) (Friard et al. 2016).The time budget function in BORIS was used to calculate the proportion of time spent on a particular behavior. Statistical analyses All data analyses were conducted in R (R Core Team 2021).The first two experimental questions, comparing (1) bilby burrow use or burrow maintenance behavior after a visit by a feral cat, and (2) whether there was greater predation risk (likelihood of cat visitations at burrows) with moon phase, included only "active" burrows (24 burrows where both bilbies and cats were recorded at least once during the monitored period) across five populations in the West Kimberley (the only study site with cat activity).We compared the 10 days before and the 10 days after a feral cat visit to maximize our ability to detect a response to the cat visit.This period was not based on any precedent, but rather on a post-hoc assessment of the frequency of feral cat detections within the dataset.The monitoring period was truncated if the camera had been deployed less than 10 days before a cat visit, or was removed within 10 days of recording a cat.We considered cat visits to be independent of each other if they were separated by more than 21 days from other cat visits, but for instances where consecutive cat visits to a burrow were within a shorter time frame, only data for the first cat visit was used and the "after" period was truncated.The average duration of monitoring was therefore 8.59 ± 2.74 days "before" a cat visit and 9.21 ± 2.40 days "after" a cat visit.Differences in monitoring duration were accounted for by calculating the detection rate (number of events divided by the number of days that the camera was active, i.e., "trap nights"). Do bilbies alter their burrow use or burrow maintenance behavior after a visit by a feral cat? 
To test for the effect of cat visits on bilby burrow use, we fitted Generalized Linear Models (GLMs) using two dependent variables: (1) bilby presence (the number of days a bilby was present as a ratio of the number of days of monitoring for that burrow), and (2) bilby activity (camera detection rate).The predictor variable was cat exposure (before or after a cat visit), where each row of the dataset was of an independent cat visit, that was scored "0" if before the cat visit, or "1" if after the cat visit.We evaluated model fit using the quartile-quartile plot function in the "DHARMa" package (Hartig 2021), which indicated an overdispersion of residuals due to the high proportion of zeroes.Given that our data consisted of continuous values and were highly zero-inflated, the GLMs were fitted with a Tweedie link using the "tweedie" package (Dunn and Smyth 2008) in R, where the variable power was specified to maximize model fit. To test for the impact of cat exposure on bilby burrow maintenance behavior, we fitted a GLM with a Tweedie link to the proportion of bilby camera trap photos classified as burrow maintenance as the dependent variable, with cat exposure (before or after a cat visit), day of monitoring (10 days before and 10 days after a cat visit), and the interaction term of cat exposure and day as predictor variables.The tweedie package does not allow for the inclusion of random factors (to address potential pseudoreplication due to multiple data points collected from the same burrows); however, the data had minimal risk of pseudoreplication, with two-thirds of the burrows analyzed only having a single cat visit that met our criteria for inclusion, while only 7 out of 22 burrows analyzed had two cat visits that met our criteria for inclusion.The proportion of images where bilbies were performing burrow maintenance (expressed as a proportion of the total of bilby detections) was calculated for each day for the 10 days before and the 10 days after each independent cat visit.Relationships were plotted using the "ggeffects" package (Lüdecke 2018). Do bilbies face greater predation risk with moon phase? Cat activity (independent detections per 100 trap nights) was calculated on a given night that the camera was open across the 24 bilby-active burrows monitored in the West Kimberley.To test if cat activity at bilby burrows was influenced by the moon phase, we fitted a GLM with a Tweedie link to the number of independent cat detections as the dependent variable, with lunar illumination (0 as a new moon, and 1 as a full moon extracted of each day of monitoring using the "lunar" package; Lazaridis 2014), bilby activity (number of detections per 100 trap nights), and the interaction of lunar illumination and bilby activity as predictor variables. Does bilby burrow maintenance behavior differ between sites? To test if bilbies spent a different proportion of their time performing burrow maintenance across the four sites, we fitted a GLM with a Tweedie link to the proportion of bilby images showing burrow maintenance at each burrow as the dependent factor, with site (West Kimberley, Mt Gibson, Barna Mia, Kanyana) as the predictor variable.We then compared the means of the proportion of burrow maintenance for each site using a Tukey analysis using the "emmeans" package (Lenth et al. 
2018).To test if bilbies performed burrow maintenance at different times of the night across the four sites, we fitted non-parametric kernel density curves using the "overlap" package (Ridout and Linkie 2009).If required, we minimized duplicates by altering identical timestamps by 0.00001 s in the raw data. Is scoring bilby behavior from photos similar to scoring bilby behavior from videos? Footage from Mt Gibson was used to test if scoring bilby behavior from photos was as consistent as scoring behavior from continuous footage collected through video cameras.Because there is published evidence of observer bias influencing the interpretation of camera trap and video footage (Foster and Harmsen 2012;Cruickshank and Schmidt 2017), video and image analysis was performed by the same observer.Videos were collected from eight burrows.Five nights with the highest bilby activity were selected from each of the eight burrows, where activity ranged from 3 to 156 videos per night.Bilby behavior observed from videos was scored using the same categories as photos.The proportion of each behavioral category per night for videos were derived using the "Time Budget" function in BORIS.To test if there was a significant correlation in the activity budget derived from photos or videos, we fitted a Generalized Linear Mixed Model to the proportion of each behavioral category from video cameras (as a proportion of all the footage collected on each night) as the dependent variable, with the proportion from photos (proportion of the respective behavioral categories derived from analysis of photos on the corresponding night) as the predictor variable, and burrow ID as a random factor. We also quantified the logistical constraints of collecting videos from camera traps compared to photos, quantifying the average file space on SD cards and battery life from 16 "video" and "photo" camera pairs.We used the "survival" package (Therneau 2021) to create a survival plot showing the remaining number of active cameras over time. RESULTS In the West Kimberley, 127 burrows were monitored for a total of 5414 trap nights, during which time 74 of the monitored burrows were active (at least one bilby detection), resulting in a total of 21 426 bilby photos.Cats (total of 74 photos) were detected at 24 of the 74 bilby-active burrows.At Mt Gibson, there were nine monitored burrows, for a total of 582 trap nights, with 23 972 bilby photos.At Barna Mia, all 17 monitored burrows were active, for a total 748 trap nights, with a total of 1264 bilby photos.At Kanyana, there were eight monitored bilbies in separate enclosures, for a total of nine trap nights, with a total of 7491 bilby photos. Do bilbies alter their burrow use or burrow maintenance behavior after a visit by a feral cat? 
Cats observed at the bilby burrows displayed a range of behaviors from moving past the burrow with no apparent interest, to investigative behavior (such as sniffing, and inserting their head in the burrow entrance).At least one cat was observed urinating at the entrance of the bilby burrow (Figure 1b).There was no evidence that bilbies altered their frequency of use after a visit to the burrow by a feral cat for 24 active bilby burrows in the West Kimberley that also recorded at least one cat (distributed across five separate populations, and monitored for total of 2433 trap nights).There was no significant effect of cat exposure on bilby presence (P = 0.200, Table 2a-i), or bilby detection rate (P = 0.429, Table 2a-ii).However, bilbies significantly altered their time budget around burrows after exposure to a feral cat.Before a cat visit, bilbies spent an average 15% of their time in front of the camera performing burrow maintenance (Figure 2a).By contrast, after a cat visit, the average dropped to 5% for the first five days, before increasing again seven days after the cat visit (Figure 2a).There was a significant difference in the overall proportion of bilby burrow maintenance with cat exposure (P = 0.050, Table 2b).The proportion of burrow maintenance also showed a significant interaction between cat exposure and day of monitoring (P = 0.010, Table 2b) capturing the gradual return to burrow maintenance after a cat visit (Figure 2b). Do bilbies face greater predation risk with moon phase? There were 59 independent cat visits recorded across five West Kimberley populations.Cat activity at bilby burrows was significantly positively correlated with lunar illumination (P = 0.010, Table 2d; Figure 3b), with 1.93 times more cat detections at bilby burrows for full moonlit nights compared with new moon nights.This relationship indicates substantially greater predation threat for bilbies on full moon nights, although bilby detection rate was not influenced by lunar illumination (P = 0.346, Table 2c).There was no significant relationship between cat detection rate and bilby detection rate (P = 0.365, Table 2d), indicating that cat activity and bilby activity at the same burrows were not correlated, although there was a suggestion of an interaction between lunar illumination and bilby detection rate (P = 0.059, Table 2d). Does bilby burrow maintenance behavior differ between sites? A total of 108 bilby-active burrows were monitored across the four study sites, with a total of 7918 camera trap nights.Overall, bilbies in the West Kimberley spent a greater proportion of their time performing burrow maintenance, while bilbies at Barna Mia and the captive animals at Kanyana spent a smaller proportion of their time conducting burrow maintenance (P = 0.001 for pairwise comparisons, Figures 4 and 5).Dawson et al. (2019) identified 45 taxa that actively interacted with bilby burrows at the West Kimberley.Raw species richness at Mt Gibson and Barna Mia shows that there were 16 and nine species (respectively) that interacted with bilby burrows, while the animals at Kanyana were housed individually (i.e., no commensals). The temporal activity of bilbies and cats in the West Kimberley had a 61.1% overlap in circadian timing of photo captures, with the activity peaks of both species being significantly different Is scoring bilby behavior from photos similar to scoring behavior from videos? 
A total of 6492 images and 3.55 h of video of bilbies (for 8 burrows in Mt Gibson collected over 38 nights) were analyzed to compare the behavior time budgets derived from photo and video analyses.There was a significant correlation for the proportions of each behavior category estimated from the photos and videos (P = 0.001 for each behavior, Figure 6).In terms of logistic comparison between data collected from photos or videos, on average, video cameras took up approximately twice the file space (4.57GB) of a "still" camera (2.65 GB) and twice as many video cameras ( 9) had depleted batteries compared to "still" cameras (4) by the end of the 38-night study period. DISCUSSION Contrary to our predictions, bilbies did not avoid burrows that were visited by a feral cat.However, bilbies did alter their behavior around these burrows, decreasing the proportion of time spent performing burrow maintenance for ~5 days after a visit by a feral cat. Our results support the assumption that burrow maintenance is a risky activity that can increase vulnerability by attracting the attention of potential predators or masking the sound of their approach (i.e., distracted prey hypothesis; Chan et al. 2010).This result is consistent with predictions under the predation risk allocation hypothesis (Lima and Bednekoff 1999) that animals should minimize risky behavior in the presence of a potential predation threat.This result suggests that burrow maintenance is balanced with foraging and vigilance behaviors according to the bilby's environment and, critically, their exposure to predation risk. Do bilbies alter their burrow use or burrow maintenance behavior after a visit by a feral cat? The lack of burrow avoidance in response to cat presence may be due to multiple reasons.First, intact bilby burrows may be too valuable to abandon.Burrows are highly important as a refuge against predators.For example, Mojave Desert tortoises (Gopherus gassizzi) increased refuge-seeking behavior when they encountered cues of their principal predator, the coyote (Canis latrans) (Nafus et al. 2017).Burrows represent a significant investment in time and effort spent digging and are therefore not readily replaced, and consequently a burrow is not likely to be abandoned if perceived risk is insufficient to deter the animal from continuing to use it, or there is some (low level) perceived risk at all burrows.In contrast with our study, for a fenced sanctuary without cats, Moseby et al. (2012) found that "trained" bilbies (which were hand-captured and handled with cat scent equipment and a cat carcass) were more likely to move away from burrows treated with cat scent compared to "untrained" bilbies (with no prior cat exposure).Steindler et al. (2018) similarly showed that bilbies altered their behavior around burrows in the presence of dingo scat, with bilbies less likely to be photographed fully emerged at the burrow entrance (noting that they showed no difference in bilby behavior between cat and rabbit scats or a procedural control).Ross et al. (2019) found that cat-exposed bilbies (from a population that had been living with cats for 2 years) placed into a pen with an artificial burrow were warier than predatornaive bilbies, spending less time moving and more time in cover. Van der Weyde et al. 
(2022) also found that quoll-exposed bilbies were generally more wary and neophobic compared to quoll-naive bilbies. In our study in the West Kimberley, the bilbies are likely to encounter cats or signs of cats (feces, scent marks) both at the burrow and away from it as they move between refuge and foraging places. While we had no evidence that bilbies avoided burrows that had been visited by cats, we showed a significant change in their behavior around the burrows, spending less time in burrow maintenance for a period of ~5 days after the cat visit. The data show a range of different bilby behavioral responses to cat presence, suggesting that they are likely to modify their behavior around burrows as well as in other parts of the landscape generally. Second, our interpretation of whether bilbies avoided using burrows that had been visited by a feral cat is limited by the camera trapping methodology used. We were unable to determine if bilbies resided in the same burrow as they were last detected during the day, as the cameras were unable to reliably detect bilbies entering or exiting the burrows. Additionally, our cat detections only included known cat visits to monitored burrows and were limited to activity immediately around the entrance of the burrows. Our cat detection rates, therefore, do not reflect the actual cat density or activity in the West Kimberley. Do bilbies face greater predation risk with moon phase? Feral cats were twice as active around bilby burrows on full moonlit nights compared to new moon nights. If predation risk increases with lunar illumination, the predation risk allocation hypothesis (Lima and Bednekoff 1999) predicts that prey species would decrease their activity during the full moon (Griffin et al. 2005; Prugh and Golden 2014). A number of studies have examined the impact of lunar illumination on prey species, supporting predictions of the predation risk allocation hypothesis; for example, Linley et al. (2021) found that native prey species decreased their activity with increasing lunar illumination. However, there have been surprisingly few studies that have examined the impact of lunar illumination on cat hunting behavior. Moseby and McGregor (2022) found that cats spent longer around prominent prey cues such as bilby burrows and signs of bilbies, although they did not investigate the effect of lunar phase. Gilmore (2016) examined the impact of lunar illumination on the hunting behavior of a number of introduced mammalian predators in New Zealand, but found no effect of moonlight for cat activity. Given our finding that cats are more active on moonlit nights, we expected bilbies to hide in their burrows and therefore not be seen around the entrance. However, bilbies did not appear to respond to the apparent increase in predation risk, as bilby detection rate was not significantly affected by lunar illumination. Does bilby burrow maintenance behavior differ between sites?
Bilbies showed differing time budgets between sites where cats were present and feral predator-free sites, although there was no direct relationship with the amount of feral cat activity.We predicted that bilbies in sites with higher predation risk, such as those in the West Kimberley, would spend less time performing burrow maintenance to reduce their vulnerability in the presence of predators compared to sites with no cat presence.Contrary to this prediction, the West Kimberley bilbies performed more burrow maintenance compared to the fenced or captive populations.Factors other than predation threat could potentially explain this phenomenon. First, environmental factors such as temperature, rainfall, and soil type may influence the amount of digging required across the different study sites.For example, hotter temperatures may result in deeper burrows to maintain cooler temperatures and stability (Chapman 2013), as bilbies are prone to heat stress due to their inability to sweat (Johnson 1989), which in turn may lead to greater levels of burrow maintenance for deeper burrows.Soil type is also expected to influence the amount of digging required, particularly the soil components that influence its adhesive properties.We were unable to account for temperature, rainfall, or the effect of soil type (apart from noting that bilbies preferred sandy areas) in the present study, however, due to the small number of study sites that do not capture variability in these measures.Second, burrow maintenance could be influenced by the number and type of burrow commensals.Lastly, male and female bilbies are also likely to perform different amounts of burrow maintenance.For example, females have smaller home ranges than males (Moseby and O'Donnell 2003;Berris et al. 2021) and may spend more time maintaining fewer burrows, particularly when they are caring for young and must remain at a maternal burrow over a longer period.The differing temporal profiles at each site suggest bilbies at sites without cats (e.g., Mt Gibson bilbies) perform burrow maintenance at a time when cats would be most active.Additionally, the bilbies' varying temporal profiles at each site may also be influenced by food availability, and perhaps environmental factors such as temperature and rainfall.Burrow maintenance is an energy-costly behavior, and we hypothesize that perhaps in areas of low to moderate food availability, bilbies cannot afford to expend their energy in burrow maintenance before foraging.For example, the captive bilbies in Kanyana performed the majority of their burrow maintenance in the first few hours of the night.Food is readily available to them, and they did not need to forage as much as wild animals would.In the West Kimberley and Barna Mia, peak burrow maintenance occurred in the middle of the night, possibly after the animal had returned from foraging.Even in Mt Gibson, where burrow maintenance activity was crepuscular, the majority of maintenance occurred toward the end of the night, when the animal would have returned to the burrow after foraging.Second, external factors such as weather conditions could also impact the time of night that burrow maintenance is performed.Particularly, temperature may influence when bilbies emerge from their burrows; Jones (1924) notes that bilbies reduced activity in cold weather, and Gibson and Hume (2004) suggest that bilbies may avoid low temperatures to minimize thermoregulatory costs with bilbies finding different conditions to be advantageous for foraging. 
Is scoring bilby behavior from photos similar to scoring behavior from videos? We found that scoring behavior from photos and videos were strongly correlated, which supports the analysis of behavior from photo images.We had expected scoring of "Vigilance" to be underestimated for photo captures as vigilance behavior generally minimizes movement of the animal, therefore the likelihood of continually triggering the camera, and could also be scored simply as "interacting" if it was taken out of context, although the strong correlation indicated that this was not the case. Videos arguably allow better detection of the nuances in behavior, although the observer experienced the onset of fatigue in a shorter period when scoring videos compared to scoring photos.Photos were not scored in isolation, and if possible, the five images in sequence (from each trigger) were viewed together, allowing for some interpretation of behavioral context.As a result, it was less time consuming and efficient to score thousands of photos spanning multiple nights, than hundreds of videos that were recorded on a single night.For logistical reasons of file size and battery life, scoring from photos was also preferable.There are both advantages and disadvantages to using either videos or photos to score behavior, and the choice to use either (or both) is dependent on research goals and the type of data and data analysis methods that are desired. Limitations Our conclusions about cat presence were based on single photos and cat detection rate from camera captures at bilby burrows.This may have resulted in an overestimation of cat activity because we did not distinguish between individuals.We were also not able to detect or estimate the actual density and activity of feral cats in the West Kimberley. Recording behavior from camera trap images was also problematic for the captive population at Kanyana, where the small enclosure meant that the camera's field of vision consisted of half the enclosure.As a consequence, the cameras were constantly triggered, even when the bilby was away from the entrance of the burrow, draining the batteries within one or two nights.However, we do not believe that having the cameras open for a longer period would have changed the recorded behavioral profile of Kanyana bilbies, as the behavior of these captive animals was unlikely to change significantly. Finally, our time budget analyses were based on the proportion of time that bilbies spent carrying out burrow maintenance and not the total amount of time spent on this activity.The marked differences in activity between burrows precluded a meaningful comparison of absolute time spent carrying out burrow maintenance. 
CONCLUSIONS We used burrow maintenance as a novel method to measure the bilbies' perceived predation risk to feral cats and showed that although the visit of a cat did not appear to change bilbies' burrow use, even their presence altered bilbies' behavioral time budget for up to 5 days after its visit.Bilbies in the West Kimberley are also potentially more vulnerable with increasing lunar illumination when there are more cat detections at burrows, demonstrating context dependence to this predation risk.As bilbies are an ecosystem engineer, digging activity (even as they forage) influences multiple ecosystem processes such as soil properties, plant germination rates, and their burrows also provide refuge for other burrow commensals.Therefore, our study has shown that cats reduce bilbies' digging activity after a visit, and this behavioral change not only has implications for bilbies but also for soil, flora, and other fauna species in areas where cats are present. Figure 1 Figure 1 Example photos showcasing feral cat (Felis catus) and greater bilby (Macrotis lagotis) behavior categories scored in the present study.(a) Cat investigating a bilby burrow, and (b) a cat urinating at the entrance of a bilby burrow.Bilby behavior included (c) Maintenance: actively digging or clearing away soil or debris from burrow entrance, (d) Vigilance: interrupting activity to stand (bipedal or on all four legs) immobile (as evident across a series of images) with head and ears erect, (e) Interacting: all other actions and behaviors (animal moved across a series of images) including entry into burrow, exit out of burrow and social interactions as shown in (f) (bilby brawl). Figure2(a) Overall behavioral profile of bilbies across 24 monitored bilby burrows (across five populations in the West Kimberley, Western Australia), showing the 10 days before and 10 days after a visit by a feral cat (designated as Day 0; highlighted with the bold outline).Each bar shows the average proportion of images showing "Interacting," "Vigilance," and "Maintenance" behavior on each day.There is no data for the sixth day as none of the bilbies returned to these monitored burrows on that day.(b) Model showing mean (±95% CI) burrow maintenance as a proportion of bilby photo captures 10 days prior to and 10 days following a visit by a feral cat.Values were derived from the regression model, using the "ggeffects" package with all other continuous variables held at fixed median levels and categorical values at the most frequent category. Figure3(a) Feral cat (Felis catus) investigating a bilby burrow in the West Kimberley.(b) Model showing the predicted mean (±95% CI) relationship between lunar illumination and cat activity (detection rate per 100 trap nights) at 24 bilby burrows across five populations in the West Kimberley, accounting for the interaction between lunar illumination and bilby detection rate (not shown).Values were derived from the regression model using the "ggeffects" package with all other continuous variables were held at fixed median levels and categorical values at the most frequent category. 
Figure 4 Figure 4Overall behavioral profile of bilbies across four sites expressed (a) as number of events per monitored burrows, and (b) as a proportion of all events.Each bar shows the proportion of images classified as "Maintenance," "Vigilance," and "Interacting" at each of the four sites in this study: seven wild populations in the West Kimberley (74 burrows), fenced populations at Mt Gibson (9 burrows) and Barna Mia (17 burrows), and a captive population at Kanyana (8 burrows).Letters represent significant differences in the mean proportions of events showing bilby burrow maintenance between sites, according to Tukey's HSD test.(c) Proportion of bilby burrow maintenance behavior at each of the 108 bilby-active burrows across the four sites; only the West Kimberley populations had feral cats present. Figure 5 Figure 5 (a) Temporal activity of all cat activity at bilby burrows compared with bilby burrow maintenance behavior at the West Kimberley site only.(b) Temporal activity of bilbies' burrow maintenance behavior across four sites; seven wild populations in the West Kimberley (74 burrows), fenced populations in Mt Gibson (9 burrows) and Barna Mia (17 burrows), and a captive population in Kanyana (8 burrows). Figure 6Model showing the relationship between the proportion of each behavioral category derived from photos or from videos at 8 burrows in Mt Gibson recorded across 39 nights.Each data point is the bilby activity on a given night at one burrow.Videos for each night range from 3 to 156."Interacting" shown in black circles; "Maintenance" shown in red triangles; "Vigilance" shown in blue squares.. Table 1 Details of study sites in Western Australia, including research partners, presence or absence of predators, the number of cameras deployed at bilby burrows, and the period each site was monitored for (Dawson et al. 2019collected as part of a monitoring project(Dawson et al. 2019).b Two cameras were each deployed at eight bilby burrows, with one burrow having one camera. Table 2 Results of GLMs with a Tweedie link testing for the impact of feral cat exposure (10 days before vs. 10 days after a known feral cat visit) on (a) bilby burrow use and detection rate, or (b) bilby burrow maintenance behavior across five populations in the West Kimberley, Western Australia. Also shown is (c) the relationship between bilby activity (detections per 100 trap nights) with lunar illumination, and (d) the relationship between cat activity (detections per 100 trap night) with lunar illumination and bilby detection at 24 monitored burrows that recorded both at least one cat and one bilby Data for burrows visited by cats (10 days before vs. 10 days after a known feral cat visit).Data for each day of survey for 24 monitored burrows that recorded both at least one cat and one bilby (c) Bilby activity Bold values indicate either a significant P-value.
9,051
2023-10-31T00:00:00.000
[ "Environmental Science", "Biology" ]
Fermi-GBM Observations of GRB 210812A: Signatures of a Million Solar Mass Gravitational Lens Observing gravitationally lensed objects in the time domain is difficult, and well-observed time-varying sources are rare. Lensed gamma-ray bursts (GRBs) offer improved timing precision to this class of objects complementing observations of quasars and supernovae. The rate of lensed GRBs is highly uncertain, approximately 1 in 1000. The Gamma-ray Burst Monitor (GBM) onboard the Fermi Gamma-ray Space Telescope has observed more than 3000 GRBs making it an ideal instrument to uncover lensed bursts. Here we present observations of GRB 210812A showing two emission episodes, separated by 33.3 s, and with flux ratio of about 4.5. An exhaustive temporal and spectral analysis shows that the two emission episodes have the same pulse and spectral shape, which poses challenges to GRB models. We report multiple lines of evidence for a gravitational lens origin. In particular, modeling the lightcurve using nested sampling we uncover strong evidence in favor of the lensing scenario. Assuming a point mass lens, the mass of the lensing object is about 1 million solar masses. High-resolution radio imaging is needed for future lens candidates to derive tighter constraints. INTRODUCTION Strong gravitational lensing is a tool that serendipitously enhances our observing capabilities and offers new opportunities to study the Universe (see e.g. Oguri 2019). Gamma-ray bursts (GRBs) are energetic transient sources at cosmological distances, involving relativistic jets from stellar-mass black hole (BH) central engines. GRBs last from a fraction of a second to about 1000 s and typically show non-thermal spectra (see e.g. Kumar & Zhang 2015;Beloborodov & Mészáros 2017, for reviews). Given that the distance scale of GRBs spans a wide range (up to redshift z 9, Cucchiara et al. (2011)), a fraction of GRBs will show the imprints of strong gravitational lensing. Strong gravitational lensing produces multiple images of the same source. The images differ in their intensity, but importantly their spectral shapes will be the same. Similarly, in time-varying sources, the temporal profile will be the same but shifted in time for different images (Schneider et al. 1992). The temporal and spectral invariance is the main defining feature of gravitational lensing. Lensed images are separated by up to arcseconds which are clearly below the resolution of current gamma-ray detectors. Gamma-ray instruments, however, have excellent time resolution, and temporal structures can be recorded with unparalleled accuracy (Meegan et al. 2009). In one incarnation of the lensing scenario (Refsdal 1964;Rodney et al. 2021) applied to GRBs, also called macrolensing, a single event triggers the same instrument twice, and can be separated anywhere from a few hours to decades. The two triggers will have similar lightcurve shapes and spectra. The delay between the events scales linearly with the lens mass. Lens candidates for this scenario have masses in the 10 8 − 10 12 M range. Objects in this mass range include supermassive black holes (up to few ×10 10 M ), galaxies and clusters of galaxies. In practice, however, all traditional strong lensing time delay measurements are from galaxies or clusters of galaxies. GRB lensing rates are highly uncertain but somewhere on the order of 1 in 1000 GRBs should be affected (Mao 1992). After roughly 10,000 observed GRBs during three decades, no convincing macrolensing candidate GRB pair has been found (Nemiroff et al. 
1994; Davidson et al. 2011; Veres et al. 2009; Hurley et al. 2019; Ahlgren & Larsson 2020). The negative result is likely a combination of two effects: First, GRB detectors typically in low Earth orbit will miss a sizeable fraction of GRB lens echoes (see, however Hurley et al. 2019; Hui & MoonBEAM Team 2021, for existing and future all-sky instruments). Second, for weaker GRBs, with pulses close to the noise level, it is difficult to distinguish between the lensing scenario and just two unrelated but similar looking GRBs (Ahlgren & Larsson 2020). In a different scenario, also called millilensing (Nemiroff et al. 2001, because the expected separation between the images is on the order of milli-arcseconds), the gravitational lens signature is imprinted upon the lightcurve of a single trigger. In this case, we have, e.g., two emission episodes with similar lightcurve patterns which can be separated by timescales spanning from a fraction of a second to a few minutes. Recently, there has been an increase in claims of millilensing events. Paynter et al. (2021) presented convincing evidence for lensing in the short duration BATSE GRB 950830. Mukherjee & Nemiroff (2021a) raised some concerns based on the inconsistent flux ratio between the two pulses. Yang et al. (2021) and Wang et al. (2021) independently argued that the likely short GBM GRB 200716C shows milli-lensing signatures. Kalantari et al. (2021) reported on a different GBM lensing candidate, the long duration GRB 090717, selected based on the analysis of the auto-correlation function. The claim of Kalantari et al. (2021) was challenged by Mukherjee & Nemiroff (2021b), arguing that the two pulse shapes differ significantly. For the above-reported lensing candidates the flux of the first and the second emission episode is either at the same level or their ratio is about 1.5. We present observations of the long duration GRB 210812A and show that it is consistent with a gravitational lensing scenario. It is the first lensing claim with a flux ratio larger than 3. We list multiple lines of evidence to support the lensing interpretation. We perform spectral and temporal analysis using Fermi-GBM data and complement it with additional data from INTEGRAL-SPI/ACS and Swift/BAT. In Section 2, we present the observations, followed by tests for gravitational lensing in Section 3. We discuss the power of the tests in Section 5 and conclude in Section 6. Veres & Fermi-GBM Team 2021). Fermi-GBM consists of 12 NaI (referred to as n0, n1, ..., n9, na and nb) and 2 BGO (referred to as b0 and b1) detectors covering the entire unocculted sky in the 8-1000 keV and ∼0.1-40 MeV energy range, respectively. As reported by the automatic pipeline, the GRB location is RA = 39.7, dec = 69.7 degrees, with an error radius of 1.1 degrees (statistical only). At the trigger time, this position corresponds to a LAT boresight angle of 149 degrees. The GRB location was behind the spacecraft, meaning most GBM detector normals have a large angle to the source. Based on previous experience (e.g. Connaughton et al. 2016), this type of geometry results in a large number of detectors showing approximately equal count rates compared to ∼3 detectors with dominating signal for typical GRBs with small LAT boresight angle. Upon visual inspection of the NaI detectors' lightcurves in the 50-300 keV range, where GRBs are brightest, we find that all 12 NaI detectors detected the brighter first peak.
Moreover, detectors n1 and n6 through nb also detected the fainter, second pulse. Both of the pulses were detected by the 2 BGO detectors. Detectors n8, na, and nb showed the strongest signals. We used data from these detectors, along with b1, for spectral analysis. For the temporal analysis, we used detectors n1, n6 through nb, b0, and b1. For both spectral and temporal analysis, we used the 128 energy channel time-tagged event (TTE) data. We chose the pre-binned, 8 energy channel (ctime) data to carry out the flux-ratio test. The targeted search (Blackburn et al. 2015; Goldstein et al. 2019) was designed to search for coherent sub-threshold signals (weak signals that did not trigger the instrument). As expected, we recovered both pulses of GRB 210812A with high significance. The search also provides a location for the burst (RA = 40.5°, dec = 69.4°), consistent with the location of the automatic pipeline used for standard GRB analysis. We also find that the locations of the two pulses are consistent, meaning they do indeed belong to the same source. Other observations The location of GRB 210812A was outside the coded mask of Swift-BAT (Gehrels et al. 2004; Barthelmy et al. 2005) at the time of the trigger. The first peak of GRB 210812A is clearly present in the continuous 4-channel data, but the second peak is only discernible in the summed lightcurve. INTEGRAL/SPI-ACS (Winkler et al. 2003; von Kienlin et al. 2003) detected GRB 210812A and clearly shows the two-peak structure. We calculate the time delay between Fermi-GBM and ACS, which arises from the higher altitude of the INTEGRAL spacecraft, to be dt_ACS = −0.396 ms. We applied this correction to the ACS lightcurve in Figure 2. Temporal properties GRB 210812A consists of two pulses, separated by a quiescent period of about 30 s (Figure 1). Using the GBM targeted search, we found no emission between the two pulses. For background estimates, we fit a third degree polynomial based on quiescent segments of the lightcurve before, after and in-between the two pulses. We also report no discernible emission around 33 s after the second pulse, indicating that this is not a periodic source. We further note that low level emission discernible in the summed lightcurve just before the start of the second pulse (T0+26.5 s to T0+29.6 s) is inconsistent with coming from the location of GRB 210812A, and thus it is unrelated. GRB 210812A has a duration of T90 = 39.9 ± 3.6 s (10-1000 keV; T90 marks the time interval between 5% and 95% of the cumulative flux). Analyzed separately, the two pulses have consistent duration: the duration of the first pulse is T90,1 = 5.31 ± 0.68 s while the second pulse is T90,2 = 3.84 ± 1.64 s long. Taking the first pulse as a separate GRB, we classify it based on the GBM T90 distribution (Bhat et al. 2016; von Kienlin et al. 2020) as a likely long GRB originating from the collapse of a massive star, with a probability of 87%. Conversely, the likelihood of GRB 210812A being a short GRB from a compact binary merger, based on the T90,1 information, the observed distribution of Fermi-GBM GRBs and calculated consistently with Goldstein et al. (2017); Rouco Escorial et al. (2021), is 13%. Autocorrelation function - We measure the time delay between the two pulses using the autocorrelation function of the lightcurve between 10 and 1000 keV, with 64 ms resolution. To accurately determine the autocorrelation peak, we fit a 9th degree polynomial to the peak region.
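As an illustration of the peak-finding step just described, the following sketch computes the autocorrelation of a binned, background-subtracted lightcurve and refines the location of the secondary peak with a polynomial fit. The binning, search window, and toy lightcurve below are placeholder assumptions, not the actual GBM data or pipeline.

```python
import numpy as np

def autocorr_delay(counts, dt=0.064, search=(25.0, 40.0), poly_deg=9, half_width=20):
    """Estimate the lag of the secondary autocorrelation peak of a lightcurve.
    counts: background-subtracted counts per bin; dt: bin width in seconds."""
    x = counts - counts.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # positive lags only
    acf /= acf[0]                                        # normalize ACF(0) = 1
    lags = np.arange(len(acf)) * dt

    window = (lags >= search[0]) & (lags <= search[1])
    i_peak = np.argmax(np.where(window, acf, -np.inf))

    # refine the peak position with a low-order polynomial fit around it
    sl = slice(max(i_peak - half_width, 0), i_peak + half_width)
    coeffs = np.polyfit(lags[sl], acf[sl], poly_deg)
    fine = np.linspace(lags[sl][0], lags[sl][-1], 2000)
    return fine[np.argmax(np.polyval(coeffs, fine))]

# toy usage: two similar pulses separated by ~33.3 s
t = np.arange(0.0, 60.0, 0.064)
pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 1.5) ** 2)
counts = np.random.default_rng(0).poisson(1000 * pulse(5.0) + 220 * pulse(38.3)).astype(float)
print(autocorr_delay(counts))   # close to 33.3 s for this toy lightcurve
```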
We estimate the uncertainty of the delay by adding Poisson noise to the original lightcurve (see e.g. Ukwatta et al. 2010; Hakkila et al. 2018, for a similar approach). After repeating the peak finding procedure, we take the uncertainty as the 1σ confidence region of the delays of the modified lightcurves. This is the first among the many time delay measurements between the pulses. We note here that this accuracy is typical of what one would expect for a lensed GRB separated by a longer timescale (macrolensing). The peak of the auto-correlation curve shows a 3.15σ excess over the smoothed curve calculated using the Savitzky-Golay filter. This excess satisfies the criteria of Paynter et al. (2021) for lensing, which classifies GRB 210812A as a lensing candidate for further scrutiny. Spectral lag - The lag is a measure of the delay between high and low-energy photons (e.g. Norris et al. 2000). Typically, a GRB is harder initially, and the higher energy photons (100-300 keV) arrive earlier than the low energy (25-50 keV) photons. This relation is also used to estimate the redshift based on the empirical correlation between lag and luminosity (Norris et al. 2000). For the first pulse we find that the lag is τ_lag,1 = 221.6 (+123.4/−139.6) ms, while the second pulse has τ_lag,2 = −89.2 (+255.9/−206.5) ms. The error on the second pulse is much larger, because of its weakness, especially in the 25-50 keV range. Spectral Analysis The spectrum of the first pulse is best fit by a power law with exponential cutoff, also known as the Comptonized model (see Table 1 for the parameters and Figure 4 for the spectrum). The first pulse dominates the fluence in the 10-1000 keV range. The second pulse is weaker, and it is fit equally well by a simple power-law and the Comptonized model. We chose the Comptonized model, because it is more physical (the power law with photon index > −2 integrates to infinite energy). The difference in the goodness of fit measure (∆C-stat) when going from the simpler power law (2 parameters) to the Comptonized model (3 parameters) is ∼6. This is just below the ∆C-stat ≈ 8 used in e.g. Poolakkil et al. (2021) to select the more complex model. However, all the parameters of the Comptonized model are well constrained (Table 1), which justifies the use of this model. The fluence of pulse 2 is F_2 = (21.6 ± 2.6) × 10^−7 erg cm^−2. We can also compare the time-resolved spectrum of the two pulses. Because of the relative weakness of the second pulse, we chose to fit the simple power-law model in bins of 0.256 s. We show the temporal evolution of the power-law indices for the two pulses and a linear fit to both as a function of time (Figure 5). We shifted the second pulse by 33.3 s to highlight their similar behavior.
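For reference, the power law with exponential cutoff ("Comptonized") photon model used above can be written in a few lines. The 100 keV pivot energy is the usual GBM convention assumed here, and the amplitudes and photon indices below are illustrative placeholders rather than the fitted values of Table 1; only the E_peak values are taken from the text.

```python
import numpy as np

def comptonized(energy_kev, amplitude, index, epeak_kev, epiv_kev=100.0):
    """Comptonized model: A * (E/Epiv)**index * exp(-E * (2 + index) / Epeak),
    in photons / (cm^2 s keV); Epeak is the peak of the nu-F-nu spectrum."""
    e = np.asarray(energy_kev, dtype=float)
    return amplitude * (e / epiv_kev) ** index * np.exp(-e * (2.0 + index) / epeak_kev)

# compare the spectral shapes of the two pulses (placeholder index and normalization)
energies = np.geomspace(10.0, 1000.0, 200)
pulse1 = comptonized(energies, amplitude=0.05, index=-0.8, epeak_kev=324.0)
pulse2 = comptonized(energies, amplitude=0.05 / 3.5, index=-0.8, epeak_kev=283.0)
shape_ratio = pulse1 / pulse2   # approximately constant if the shapes agree
```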
The basic idea for establishing the statistical likelihood of lensing in the temporal domain is comparing two scenarios. First, we assume no lensing. We fit the pulse model and allow every parameter to vary freely. In the alternative scenario, where lensing is assumed, the second pulse is forced to have the same shape as the first one and differ only by a normalization factor and a time delay. We use two types of pulses (Norris et al. 1996, 2005). The first pulse with 5 parameters, referred to as "N1", is defined as I_N1(t) = A exp[−(|t − t_max|/σ)^ν], with σ = σ_r for t < t_max and σ = σ_d for t > t_max. Here A is the amplitude at the peak time t_max, σ_r and σ_d mark the rise and decay timescales of the pulse, and ν is a shape parameter. The second pulse model with 4 parameters, "N2", is parameterized by the amplitude A, the asymmetry parameter ξ, the start time of the pulse ∆, and a duration parameter τ. Indirect evidence We explore a few properties of GRB 210812A that are not direct proofs of lensing, but they are necessary to any such claim. Hard-to-soft evolution: Gamma-ray bursts typically become softer with time (e.g. Kaneko et al. 2006). This trend can be observed in the individual pulses of GRB 210812A as well (see Figure 5). For GRBs in general, the hard-to-soft evolution can be observed even across pulses. Specifically, for GRBs that show two pulses separated by a quiescent period, the second pulse is, in general, softer (e.g. Lan et al. 2018; Zhang et al. 2012). We find that the second pulse of GRB 210812A has the same spectral shape within errors or, subsequently, the same hardness, which is uncommon for typical GRBs. Leading pulse is brighter: In case of simple lens models, the light ray traveling closer to the lens arrives later. It has a lower magnification than the first arriving light ray with the larger impact parameter (Krauss & Small 1991). We find that the first pulse is indeed visibly brighter and thus consistent with the expectation from a lensed source. We note that this criterion may not hold for complex lens models (Keeton & Moustakas 2009). Spectrum of the pulses For the spectral analysis we first selected the interval containing the second pulse by visually identifying contiguous temporal bins with significant signal. For the first pulse, we selected a source interval which is the same length as the second pulse (see Table 1 and Figure 2). The spectral fits of the two pulses yield consistent spectral shapes within errors: the peak of the energy-per-decade or νF_ν spectrum is E_peak,1 = 324 ± 28 keV and E_peak,2 = 283 ± 90 keV. To compare the two spectra, we plot the 1 and 2σ confidence regions of the spectral shapes (Figure 4), accounting for the correlations between the parameters. We multiply the second pulse by the fiducial factor of 3.5 to show that the two spectral shapes are consistent.
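Before turning to the count-ratio and χ² tests below, the two pulse templates introduced above can be sketched as follows. The "N1" shape follows the standard Norris et al. (1996) pulse; for "N2" we show one common re-parameterization of the Norris et al. (2005) pulse in terms of an amplitude, asymmetry ξ, start time ∆, and duration τ, which is an assumption and may differ in detail from the exact form used by the authors.

```python
import numpy as np

def pulse_n1(t, A, t_max, sigma_r, sigma_d, nu):
    """Norris et al. (1996) pulse: A * exp(-(|t - t_max| / sigma)**nu),
    with sigma = sigma_r on the rise and sigma = sigma_d on the decay."""
    t = np.asarray(t, dtype=float)
    sigma = np.where(t < t_max, sigma_r, sigma_d)
    return A * np.exp(-(np.abs(t - t_max) / sigma) ** nu)

def pulse_n2(t, A, xi, delta, tau):
    """One re-parameterization (an assumption) of the Norris et al. (2005) pulse:
    A * exp(-xi * (tau/(t - delta) + (t - delta)/tau - 2)) for t > delta, else 0.
    It peaks at t = delta + tau; xi controls the sharpness of rise and decay."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    m = t > delta
    x = (t[m] - delta) / tau
    out[m] = A * np.exp(-xi * (1.0 / x + x - 2.0))
    return out

# lensing-scenario lightcurve: identical N1 shape, scaled by 1/r and delayed by dt
t = np.linspace(0.0, 50.0, 5000)
model = pulse_n1(t, 1.0, 5.0, 0.8, 2.0, 1.2) + pulse_n1(t, 1.0 / 4.5, 5.0 + 33.3, 0.8, 2.0, 1.2)
```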
Count ratio test If the pulses are gravitationally lensed, the ratio between pulses should not depend on energy. Mukherjee & Nemiroff (2021a) investigated the count ratio of the two pulses as a function of energy for GRB 950830 and found a 2σ inconsistency. This test has the advantage that it is independent of the particular spectral shape. For GBM we used the 8-channel ctime data to carry out this test on GRB 210812A. We considered all the detectors where the second pulse was visible in any channel (n1, n6 through nb, energy channels 1 through 6, and b0 and b1, energy channels 0 and 1), and data from ACS and BAT (25-350 keV). We find that the ratio of the pulses in all the channels and all three instruments is consistent with the mean value within 1.6 standard deviations (see Figure 5). We thus conclude that GRB 210812A passes the count ratio test for lensing. χ² test A simple and robust test that doesn't assume any pulse shape was introduced by Nemiroff et al. (2001). This test was recently applied to the claim of Kalantari et al. (2021) on GRB 090717 by Mukherjee & Nemiroff (2021b). Mukherjee & Nemiroff (2021b) conclude that based on the χ² test, the claim of gravitational lensing can be excluded at the 5σ level. This test considers the binned lightcurves of the two pulses as representing two distributions and asks if they are consistent with coming from the same parent distribution. After appropriately re-scaling the first pulse and taking into account the background, we perform a χ²-test for the hypothesis that the two lightcurves are drawn from the same distribution. The test statistic (Equation 3) depends on the time t, a scaling factor r̄ < 1, the background-subtracted counts P_i in the two pulses, and the background counts B_i (i = {1, 2}). As an example (see Figure 3, top left), we consider data from the T_0 − 2 s to T_0 + 7 s interval (first pulse) and compare it with the interval shifted by ∆t = 33.3 s (second pulse). We use 0.256 s resolution, summed over the detectors with good signal in the 45-300 keV range, and added the signal of the BGO detectors (120-400 keV). We use the tte instead of ctime data, because the beginning of the first pulse has uneven temporal binning in the ctime data. First, we find the minimum of the χ² expression in Equation 3 as a function of r̄ and we get r̄ = 0.231 (see Figure 3, top-left). Next, we calculate the minimum χ² = 24.3 value for 34 degrees of freedom, which corresponds to a p-value of 0.89. Thus there is no statistically significant difference between the two distributions. We compare the lightcurves of the two pulses for different temporal resolutions, energy ranges and instruments, in the panels of Figure 3. We consistently find there is no statistically significant difference between the two pulses. For comparison, Mukherjee & Nemiroff (2021c) performed a preliminary χ² analysis on GRB 210812A and concluded there is a ∼2.8σ discrepancy between the two pulses. Using the same energy range and detector selection as Mukherjee & Nemiroff (2021c) (assuming in their notation detectors are numbered 1 through 12, and ctime energy channels 1 through 8) we performed the χ² test on 512 ms resolution lightcurves and in the energy range 22-800 keV (NaI detectors only, Figure 3, top-right). We find r̄ = 0.231 minimizes χ² at a value of χ² = 11.5 for 16 degrees of freedom (p-value of 0.78), indicating the two lightcurves are consistent with being drawn from the same distribution. The BGO lightcurve (512 ms resolution) yields χ² = 14.027 (dof = 16) and p-value of 0.597, and for ACS (0.4 s resolution) χ² = 24.662 (dof = 22) and p-value of 0.314 (bottom two panels of Figure 3).
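A minimal version of this consistency test is sketched below. Because the exact statistic of Equation 3 is not reproduced above, the sketch assumes one common form in which the first pulse is scaled by r̄ and the variances are taken as Poisson (source plus background counts); it is meant only to illustrate minimizing χ² over r̄ and converting the minimum to a p-value, not to reproduce the published numbers.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

def chi2_scaled(p1, p2, b1, b2):
    """Compare two background-subtracted lightcurves, allowing a scaling rbar < 1
    of the first pulse; Poisson variances of the total counts are assumed."""
    p1, p2, b1, b2 = (np.asarray(a, dtype=float) for a in (p1, p2, b1, b2))

    def stat(rbar):
        var = rbar**2 * (p1 + b1) + (p2 + b2)     # assumed error model
        return np.sum((rbar * p1 - p2) ** 2 / var)

    res = minimize_scalar(stat, bounds=(1e-3, 1.0), method="bounded")
    dof = len(p1) - 1                             # one fitted parameter, rbar
    return res.x, res.fun, dof, chi2.sf(res.fun, dof)

# placeholder binned counts (not the real GBM data)
rng = np.random.default_rng(1)
template = 400.0 * np.exp(-0.5 * ((np.arange(36) - 10.0) / 4.0) ** 2)
bkg = np.full(36, 50.0)
pulse1 = rng.poisson(template + bkg) - bkg          # background-subtracted pulse 1
pulse2 = rng.poisson(0.23 * template + bkg) - bkg   # background-subtracted pulse 2
rbar, chisq, dof, pval = chi2_scaled(pulse1, pulse2, bkg, bkg)
```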
Bayesian Model Comparison Applying an idea from gravitational wave model selection, Paynter et al. (2021) introduced the Bayesian evidence to compare the lensing scenario to the case where there is no lensing. In the no-lens scenario we fit the lightcurve with two pulses, with all the parameters left to vary. We derive the Bayesian evidence Z_NL by integrating the likelihood over the multi-dimensional parameter space. E.g. for the N1 pulse, this involves 10 parameters: I(t) = I_N1(t|A_1, σ_r,1, σ_d,1, t_max,1, ν_1) + I_N1(t|A_2, σ_r,2, σ_d,2, t_max,2, ν_2). In the lensing scenario the second pulse is constrained. It has the same shape parameters as the first pulse, only differing in the normalization (r) and the shift in the peak time (∆t), resulting in 5 + 2 parameters: I(t) = I_N1(t|A, σ_r, σ_d, t_max, ν) + I_N1(t|A/r, σ_r, σ_d, t_max + ∆t, ν). Z_L is the evidence in this case. Formally, the Bayesian evidence (or simply evidence) has the following meaning (e.g. Speagle 2020): from Bayes' Rule, the posterior probability of the model parameters θ given the data d is p(θ|d) = L(d|θ) π(θ)/Z, where L is the likelihood, π(θ) is the prior, and the evidence Z = ∫ L(d|θ) π(θ) dθ is the normalization. Calculating the evidence Z is computationally intensive. It requires integrating the likelihood over a multidimensional parameter space. We use the bilby python package (Ashton et al. 2019) and dynesty nested sampler (Speagle 2020) to carry out the integration over the parameters to find Z. As in Paynter et al. (2021), the difference in ln Z is the natural logarithm of the Bayes Factor (ln BF = ln Z_L − ln Z_NL) and it can be used to decide between the models. The Bayes Factor is additive: values from different, independent (e.g. different energy ranges) measurements can be added to perform model comparison. We present the results of this analysis in Table 2. Conveniently, the nested sampling also yields the best fitting flux ratio and time delay. Data from every instrument and energy range where the second pulse was detectable provided positive Bayesian evidence in favor of the lensing scenario. The simplest mass model is the point mass lens, when the mass of the lens is concentrated in a projected region smaller than the Einstein radius of the source. In this scenario, we can derive the lens mass from the flux ratio r and the time delay ∆t (e.g. Mao 1992): ∆t = (4G M_l (1 + z_l)/c^3) [(√r − 1/√r)/2 + (ln r)/2], where z_l is the lens redshift and M_l is the lens mass, c is the speed of light, G is the gravitational constant. MCMC lightcurve fitting We fit the summed NaI and BGO lightcurves with the Markov-chain Monte Carlo method using the emcee (Foreman-Mackey et al. 2013) python package with both the N1 and N2 pulses (see Figure 1). This method cannot select between the lensing and no-lensing scenarios; however, it is fast and robust compared to the more computation-intensive nested sampling (see section 3.5). We can take the result of the lightcurve fits in the lensing scenario and derive the lens mass, M_l(1 + z_l), in the point-mass approximation. The N1 pulse model leads to a flux ratio of r = 4.47 (+1.06/−0.73) and a delay time of ∆t_l = 33.16 (+0.19/−0.21) s. The corresponding point lens mass value is (1 + z_l)M_l = 1.07 (+0.16/−0.15) × 10^6 M⊙. For the N2 pulse the flux ratio is r = 4.19 (+0.30/−0.25) and the time delay ∆t_l = 33.11 ± 0.06 s. These lead to a point mass lens of mass (1 + z_l)M_l = (1.13 ± 0.06) × 10^6 M⊙. Table 2. Bayes factor compilation for different instruments, energy ranges and pulse models using the bilby nested sampling method. The flux ratio and the time delay resulting from the fits are shown in the last two columns.
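The point-mass relation just quoted can be checked numerically: the short sketch below inverts ∆t = (4G M_l (1 + z_l)/c^3) [(√r − 1/√r)/2 + (ln r)/2] for the redshifted lens mass. Plugging in the best-fit flux ratios and delays from the N1 and N2 fits reproduces the ∼10^6 M⊙ values quoted above; the constants are rounded and this is a cross-check rather than the authors' code.

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m s^-1
M_SUN = 1.989e30     # kg

def redshifted_lens_mass(flux_ratio, delay_s):
    """(1 + z_l) * M_l in solar masses for a point-mass lens, given the image
    flux ratio r > 1 and the time delay between the images (e.g. Mao 1992)."""
    r = flux_ratio
    bracket = (np.sqrt(r) - 1.0 / np.sqrt(r)) / 2.0 + 0.5 * np.log(r)
    return delay_s * C**3 / (4.0 * G * bracket) / M_SUN

print(redshifted_lens_mass(4.47, 33.16))   # ~1.07e6, matching the N1 result
print(redshifted_lens_mass(4.19, 33.11))   # ~1.1e6, matching the N2 result
```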
Singular Isothermal Sphere (SIS) lens model This lens model is characterized by the line of sight velocity dispersion, σ_v, of its mass distribution. While in the point mass lens case we could constrain the lens mass, here it is only possible to restrict the velocity dispersion up to a distance scale D, where D = D_OL D_LS/D_OS and D_ij mark the angular diameter distance combinations between the observer (O), lens (L) and source (S), respectively. We assumed a GRB redshift z_s = 1 and a lens redshift z_l = 0.4. A simple mass estimate based on the virial theorem yields a mass M ≈ 8 × 10^5 (σ_v/15 km s^−1)^2 (R/10 pc) M⊙, where R = 10 pc is an assumed size considered typical for e.g. globular clusters, for which the SIS model is a good approximation. The mass is broadly consistent with the point mass lens approximation. DISCUSSION In the previous section, we performed tests to confirm GRB 210812A is affected by strong gravitational lensing. Here we discuss the strengths of each test, analyze the unlensed GRB properties and present future detection prospects. Spectrum The most basic spectral test for a lensing scenario is the flux ratio test. Because gravitational lensing is achromatic, the flux ratio of the two pulses has to remain constant across energy ranges and different instruments. This is a robust measure because it doesn't depend on the assumed spectrum. The only caveat to consider is whether the GBM detectors' pointing changed between the two pulses. In that case, the spectral responses would change significantly between the pulses and the recorded counts could not be compared across the emission episodes of GRB 210812A. The pointing of the detectors, however, has not changed by more than 5 degrees between the pulses, which means the detectors' response in the direction of GRB 210812A is essentially the same throughout the duration of the burst. The flux ratios across the energy ranges and instruments are clearly consistent with being equal (Figure 6). The weighted mean is 3.45, and we measure the most significant deviation for the ACS data point (green), which is 1.6 standard deviations away. Mukherjee & Nemiroff (2021c) showed a preliminary analysis of the hardness ratio (HR) for ctime channels 3 and 4 (4 and 5 in their notation) and claimed a 2.2σ discrepancy between the HR of the two pulses. Formally, the count ratio (CR) test is equivalent to the HR test: the hardness ratio of Pulse 1 is HR(P1) = C(P1, Ch2)/C(P1, Ch1), where C(P1, Ch2) denotes the count rate in pulse 1 for channel 2 (the higher energy channel of the hardness ratio) and C(P1, Ch1) the rate in channel 1; HR(P2) can be calculated analogously. The count ratio in energy channel 1 is CR(Ch1) = C(P1, Ch1)/C(P2, Ch1). Thus from CR(Ch1) = CR(Ch2), it follows that HR(P1) = HR(P2). Because there are no strong outliers in the count ratio test, we expect the data to reflect this in the HR test. For the first pulse we find HR = 1.405 ± 0.063 and for the second pulse HR = 1.127 ± 0.155. We confirm the finding of Mukherjee & Nemiroff (2021c) that the second pulse indeed shows a lower HR. However, taking their difference and adding the errors in quadrature, we find that the discrepancy is only 1.66σ, which does not invalidate the lensing scenario. Next, we carried out a spectral analysis of the two pulses using the GBM data. In the Swift-BAT data, the second pulse was only visible in the summed lightcurve, and INTEGRAL SPI-ACS had only data in 1 energy channel. Therefore we only used Fermi-GBM data for the detailed spectral analysis. The spectral parameters are consistent within errors (Table 1), and a plot of the spectral shapes (Figure 4) also shows that the two spectra overlap when considering the confidence regions. We thus conclude that the spectra of the pulses are consistent with the lensing interpretation. We note the advantage of the continuous 128 energy channel data of GBM over the 4 channel tte data available for BATSE, for which precise spectral fits were not feasible (Paynter et al. 2021).
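As a quick numerical cross-check of the hardness-ratio comparison above, the significance of the difference between the two pulses follows directly from the quoted values and uncertainties.

```python
import numpy as np

hr1, err1 = 1.405, 0.063   # hardness ratio of pulse 1, as quoted in the text
hr2, err2 = 1.127, 0.155   # hardness ratio of pulse 2

diff = hr1 - hr2
sigma = np.hypot(err1, err2)               # errors added in quadrature
print(f"{diff:.3f} +/- {sigma:.3f} -> {diff / sigma:.2f} sigma")   # ~1.66 sigma
```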
Time history Gamma-ray instruments have better temporal than spectral resolution. E.g., the number of spectral resolution elements in 10-1000 keV is 10, while the 5 s duration pulse with ∼0.1 s resolution has 50 temporal resolution elements, where 0.1 s is the approximate timescale of variations for GRB 210812A (Bhat et al. 2012). For this reason, the temporal study of the lensing scenario can provide more constraints. The weaker second pulse is closer to the noise level of the detectors. Thus the shorter duration of the second pulse is in line with expectations for a weaker bursts with similar pulse shape. Nonetheless, the duration of the two pulses is still consistent within errors as expected from a lensing scenario. Independent of any pulse models, we first performed a χ 2 test to compare the two lightcurves. The χ 2 test determines if the two lightcurves are consistent with being drawn from the same distribution. The χ 2 test showed that there is no significant difference between the lightcurves in different energy ranges and across instruments ( Figure 3). We note, however that weakness of the second pulse results in relatively large Poisson errors, and fine temporal structures in the second pulse, if there are any, are washed out. This somewhat reduces the power of this test, showing only that the general shapes of the pulses are consistent. Next, we introduced pulse models from the literature and fit the lightcurve in different energy bands and different instruments. Using nested sampling, we evaluated the evidence in favor of the lensing scenario using the bilby code. Independent of the detector, energy range, or pulse model (Table 2), the evidence is consistently for the lensing scenario as opposed to the non-lensing scenario (BF>0 in all cases). We note that the logarithm of the Bayes Factor is not necessarily positive for the model with less parameters (the lensing model in our case). Indeed, e.g. Wang et al. (2021) (their Table 1) shows some cases with negative ln(BF). The Bayes Factor differs depending on which pulse model we apply. We find that in energy ranges where the second pulse is relatively weak, the evidence for lensing is not as strong (but it still favors the lensing). The evidence using the N2 pulse model is more compelling than in the case of the N1 shape. This can be due to the larger number of parameters in the case of the N1 pulse shape. We consider the NaI data alone, and the N2 pulse shape fits. The sum of ln (Bayes Factors) for the NaI detectors yields ln(BF) ≈ 10.7. Following Kass & Raftery (1995) and Thrane & Talbot (2019) we can assign colloquial meaning to this number. A difference of more than 8 is considered strong evidence in favor of the lensing model. We conclude that the Bayesian evidence thus supports the lensing interpretation. We note that the BGO and ACS lightcurves also provide additional evidence and, to a lesser extent, the Swift-BAT data as well. The nested sampling provides both a selection criterion between the models through the Bayesian evidence and, at the same time, provides the parameters for time delay and flux ratio. We show the values in Table 2 and Figure 7. Different instruments, detectors, and energy ranges all yield a consistent solution, pointing to the lensing origin. Most importantly, we can derive the lens mass for both pulse shapes and arrive at a consistent picture indicating a ∼ 10 6 M lens (Figure 8). The MCMC method yields smaller errors than the nested samplig. 
This is due to the different fitting approaches of the two methods (see e.g. Speagle 2020, for details). GRB properties If GRB 210812A had not been lensed, its fluence would have been F_0 = F_1 (1 − 1/r) ≈ 6.2 × 10^−6 erg cm^−2, where F_1 is the fluence of the first pulse (see Section 3.2) and we took r = 4.5 for the numerical value. Using this flux value, we can get a broad range for the possible redshift of this GRB using empirical correlations between gamma-ray properties. We scan the 0.1 to 5 redshift range and find that z ≳ 0.5 is consistent to within one sigma with the Amati relation (Amati et al. 2002) between the isotropic equivalent energy, E_iso, and the redshift-corrected peak energy, (1 + z)E_peak. While this is a very crude estimate, it is in line with the average measured redshift of GRBs (Bagoly et al. 2006; Jakobsson et al. 2006) and it is consistent with our fiducial value of z_s = 1. The lag-luminosity relationship (Norris et al. 2000; Gehrels et al. 2006) provides another estimate of the redshift (L ∝ (τ_lag/(1 + z))^−0.74). Taking the median value of the lag (221.6 ms) and scanning the 0.1 to 5 redshift range, we find the redshift of GRB 210812A within the range 0.9 < z < 1.5 is consistent at the 1σ level with the lag-luminosity relation. While this is similarly an empirical relation, it further reinforces that a redshift of z_s = 1 is a reasonable approximation. Future events Even though GRB 210812A had no multiwavelength follow-up, as more lensed GRB candidates are observed, we expect to have a well-localized counterpart eventually. Identifying the afterglow of a lensed GRB, showing two consistently fading images, would be the smoking gun evidence for the lensing scenario. The angular separation of the two lensed images on the sky is on the order of the Einstein radius, which for a 10^6 M⊙ point mass is θ_E = ((4GM/c^2)(D_LS/(D_OL D_OS)))^{1/2} ≈ 3 mas (M/10^6 M⊙)^{1/2} (D/0.6 Gpc)^{−1/2} (assuming z_l = 0.4 and z_s = 1; mas = "milli-arcsecond"). This is well below the resolution of 10-m class optical telescopes (≈40 mas). The only conceivable way of resolving the two images is through very long baseline interferometry (VLBI) radio imaging (Casadio et al. 2021). The sensitivity of VLBI can be as low as 10 µJy (Venturi et al. 2020) and radio afterglows are detected in significant numbers at or above this flux (Chandra & Frail 2012). VLBI imaging of the two sources will provide additional information on the source and lens redshift and help constrain the lens model. Capturing a lensed GRB with a well-understood lens will allow us to fully exploit the accurate, sub-second time delay measurement achievable for GRBs and will allow for time-delay cosmography. Lensing object It is difficult to identify the type of lens that produced the two pulses of GRB 210812A. Possible objects include black holes or globular clusters. Populations of objects can be ruled out based on their number density and their contribution to the total lensing probability (Nemiroff 1989). The total number of millilensed GRBs can be estimated based on a search for lensing candidates in the entire Fermi-GBM GRB catalog. Our goal in this paper was to report only on GRB 210812A, and we leave population-level studies for a future work. Nonetheless, we can make a few general observations, based on e.g. the previously mentioned claims by Kalantari et al. (2021) and Wang et al. (2021); Yang et al. (2021). For 1-3 lens events and the total number of Fermi-GBM GRBs, N > 3100, the lensing rate is (3−9) × 10^−4.
This is on par with the rate based on BATSE observations by Paynter et al. (2021). A black hole mass of M ≈ 10 6 M lies at the lower end of the supermassive black hole population with measured masses (e.g. Woo et al. 2010) and at the upper end of the intermediate-mass black hole population (Greene et al. 2020). Without detailed counterpart observations it is unclear in which group, if any, the lens of GRB 210812A belongs to. Further lensed events however can provide essential constraints on the rates and origin of black holes in this mass range. Globular clusters are similarly good lens candidates and their masses can indeed reach 10 6 M (Baumgardt & Hilker 2018). A SIS model is a good approximation to the velocity dispersion in a globular cluster. Paynter et al. (2021) found, however, that even globular clusters with one order of magnitude smaller mass, ∼ 10 5 M , do not exist in sufficient numbers to produce the 1/2700 rate of lensed GRBs for BATSE. In our case, we re-quire similar lensing probabilities but with ∼ 10 6 M globular cluster population. 10 6 M globular cluster lies above the approximately 2×10 5 M turnover mass in the mass function (Jordán et al. 2007). This means that the larger 10 6 M mass cannot compensate in total lensing probability the drop in number density. We thus conclude that the globular cluster lens can be tentatively ruled out, and a point mass lens e.g. a black hole is more likely. For more definitive statements on the nature of the lens precise localization and high resolution observations will be necessary. CONCLUSION In this paper, we presented multiple lines of evidence for GRB 210812A being gravitationally lensed. The two peaks in GRB 210812A have consistent spectrum, time profile, and spectral evolution. We determined the flux ratio and time delay with multiple methods and arrived at a consistent picture. The first pulse is approximately 4.5 times brighter and the delay between the pulses is 33.3 s. Assuming a point mass lens, this flux ratio and delay corresponds to a lens mass of (1+z l )M l = 10 6 M . There are only a few unchallenged claims in the literature for lensed GRB lightcurves. GRB 210812A presents the first strong evidence for lensing a long GRB with a flux ratio larger than 2. Future events will benefit from high resolution radio observations for definitive proof of lensing origin and detailed lens modeling.
9,052.4
2021-10-12T00:00:00.000
[ "Physics" ]
Extraction of the Sivers Function from SIDIS, Drell-Yan, and W/Z Data at Next-to-Next-to-Next-to Leading Order We perform the global analysis of polarized semi-inclusive deep inelastic scattering (SIDIS), pion-induced polarized Drell-Yan (DY) process, and W/Z boson production data and extract the Sivers function for u, d, s and for sea quarks. We use the framework of transverse momentum dependent factorization at next-to-next-to-next-to leading order (N3LO) accuracy. The Qiu-Sterman function is determined in a model independent way from the extracted Sivers function. We also evaluate the significance of the predicted sign change of the Sivers function in the DY process with respect to SIDIS. Introduction.-The three-dimensional (3D) hadron structure is an important topic of theoretical, phenomenological, and experimental studies in nuclear physics. In momentum space, the 3D nucleon structure is described in terms of transverse momentum dependent distributions and fragmentation functions, collectively called TMDs, which depend both on the collinear momentum fraction and the transverse momentum of the parton. The TMD factorization theorem [1-7] provides a consistent operator definition and evolution of TMDs. Among TMDs, the Sivers function f⊥1T(x, k_T) [8,9] is the most intriguing since it describes the distribution of unpolarized quarks inside a transversely polarized nucleon and generates single-spin asymmetries (SSAs). The Sivers function arises from the interaction of the initial or final state quark with the remnant of the nucleon and thus many of its features reveal the gauge link structure that reflects the kinematics of the underlying process [10]. Above all, the difference between initial and final state gauge contours leads to opposite signs for the Sivers functions in semi-inclusive deep inelastic scattering (SIDIS) and Drell-Yan (DY) kinematics [11-13]: f⊥1T,DY(x, k_T) = −f⊥1T,SIDIS(x, k_T). (1) In the limit of large transverse momentum the Sivers function is related [14] to the key ingredient of collinear factorization of SSAs, the Qiu-Sterman (QS) function [15-18], which describes the correlation of quarks with the null-momentum gluon field. Therefore, the measurement of the Sivers function and the exploration of its properties is a crucial test of our understanding of the strong force, and one of the goals of polarized SIDIS and DY experimental programs of future and existing experimental facilities such as the Electron Ion Collider [19,20], the Jefferson Lab 12 GeV Upgrade [21], RHIC [22] at BNL, and COMPASS [23,24] at CERN. In this work, we perform the global analysis of transverse spin asymmetries at next-to-next-to-next-to leading order (N3LO) perturbative precision within the TMD factorization approach and extract the Sivers function. Several important features make our results stand out from the previous results [25-40]. First of all, we use unprecedented N3LO perturbative precision, together with the ζ prescription [41]. Second, we use unpolarized proton and pion TMDs extracted from the global fit of SIDIS and DY data [42,43] at the same perturbative order and scheme, which allows us for the first time to describe with very good quality the SIDIS, DY, and W±/Z experimental data. Lastly, we use a novel model-independent approach to obtain the QS function from the Sivers function. Also, we estimate the significance of the sign flip relation, Eq. (1).
The variable P_hT is the transverse momentum of the produced hadron h_2 in the laboratory frame. The azimuthal angles ϕ_h and ϕ_S are measured relative to the lepton plane [48]. The single-spin Sivers asymmetry in SIDIS is defined as the ratio of structure functions and can be written in TMD factorization as a Fourier-Bessel integral over the impact parameter b [Eqs. (3)-(4)], where M is the mass of the nucleon h_1, f and D are the TMD parton distribution function (PDF) and fragmentation function (FF), J_n is the Bessel function of the first kind, and the summation runs over all active quarks and antiquarks q with electric charge e_q. The arguments μ and ζ are the ultraviolet and the rapidity renormalization scales, correspondingly. The Q dependence of the ratio in Eq. (3) is due to the scales ζ_{1,2}, which obey ζ_1 ζ_2 = Q^4 [4,49-53]. To respect it, we fix ζ_1 = ζ_2 = Q^2, and also μ^2 = Q^2. The dependence on (μ, ζ) of a TMD distribution is dictated by the pair of TMD evolution equations [4,41,54], which, in turn, relate measurements made at different energies. In this work we use the ζ prescription [41], which consists in selecting the reference scale (μ, ζ) = (μ, ζ_μ(b)) on the equipotential line of the field anomalous dimension that passes through the saddle point. In this case, the reference TMD distribution is independent of μ (by definition) and perturbatively finite in the whole range of μ and b. The solution of the evolution equations can be written [41,55] in a simple factorized form [Eq. (5)], and similarly for other TMDs. The function f⊥1T,q←h(x, b) = f⊥1T,q←h(x, b; μ, ζ_μ(b)) on the right-hand side of Eq. (5) is the optimal Sivers function [55]. The function ζ_μ(b) is a calculable function of the universal nonperturbative Collins-Soper kernel D(b, μ) [56]. The N3LO expression used in this work is given in Ref. [42]. Drell-Yan process.-The relevant part of the differential cross section for the DY reaction depends on the variables φ and q_T, the angle and the transverse momentum of the electroweak boson measured in the center-of-mass frame, and on y, its rapidity. The experimentally measured transverse spin asymmetry is given in Eq. (7), where M is the mass of the polarized hadron h_1, and f_1 and f_2 are TMD PDFs for hadrons h_1 and h_2. Often, the experiment provides measurements related to A_TU [Eq. (7)]. In particular, the process h_1(P_1) + h_2(P_2, S) → l^+ l^− + X (i.e., with the polarized hadron h_2) measured by COMPASS [58] is described by an analogous expression in which the Sivers and unpolarized TMD PDFs are exchanged in the numerator of Eq. (7), and M refers to h_2. Another important case is the asymmetry A_N [59] measured by the STAR Collaboration and defined such that A_N = −A_TU [60]. The STAR measurements are made for W±/Z-boson production, and thus B_n^DY [Eq. (8)] should be updated replacing Σ_q e_q^2 by an appropriate structure, which can be found, e.g., in Ref. [42]. Nonperturbative input.-In addition to the Sivers function, the SSAs (3), (7) contain nonperturbative unpolarized TMDs and the Collins-Soper kernel. We use these functions from Ref. [42] (SV19). SV19 was made by the global analysis of a large set of DY and SIDIS data, including precise measurements made by the LHC, performed with N3LO TMD evolution and NNLO matching to the collinear distributions. The unpolarized TMD PDFs for the pion were extracted in the same framework in Ref. [43]. In these extractions the Collins-Soper kernel is parameterized as the sum of a resummed perturbative part and a nonperturbative term [Eq. (9)], where D_resum is the resummed N3LO expression for the perturbative part [61] and c_0 is a free parameter.
The linear behavior at large b of Eq. (9) is in agreement with the predicted nonperturbative behavior [62,63] and coefficient c 0 can be related to the gluon condensate [63]. It is customary in the TMD phenomenology to match TMDs to collinear distributions at small b [4,50,64,65]. In the present work, we do not use the matching of the Sivers to QS function [34,65,66], since it is not beneficial in the Sivers case. The reason is that QS function is not an PHYSICAL REVIEW LETTERS 126, 112002 (2021) autonomous function, but mixes with other twist-three distributions [67]. Therefore, a consistent implementation of the matching requires introduction of several unknown functions-subjects of fitting. Instead, we use the reversed procedure. We consider the optimal Sivers function as a generic nonperturbative function that is extracted directly from the data. QS function is then obtained from the smallb limit of the extracted Sivers function. For the Sivers function, we use the following ansatz: We will distinguish separate functions for u, d, s quarks, and a single sea Sivers function forū,d, ands quarks. The Sivers function does not have the probabilistic interpretation and can have nodes [68,69], which is realized by the parameter ϵ. The presence of a node is predicted by various models [68,[70][71][72]. We set β s ¼ β sea and ϵ s ¼ ϵ sea ¼ 0, since they are not restricted by the current experimental data. In total, we have 12 free parameters in our fit. Notice that the absence of the small-b matching is advantageous for our analysis as it allows us both to circumvent the difficulties of evolution of QS functions and to reach N 3 LO precision [73]. Such a strategy is allowed in the ζ prescription, and would also work in other fixed scale prescriptions [62], but is not consistent in the resummation-like schemes, e.g., used in Refs [35,38,39]. In the latter scheme, one would need to use the matching function for the Sivers function at N 3 LO, which it is not yet available [65]. Fit of the data.-The TMD factorization theorems are derived in the limit of large Q and a small relative transverse momentum δ, defined as δ ¼ jP hT j=ðzQÞ in SIDIS, δ ¼ jq T j=Q in DY process. We apply the following selection criteria [42,43] onto the experimental data: hQi > 2 GeV and δ < 0.3: The Sivers asymmetry has been measured in SIDIS and DY reaction [58,59,[74][75][76][77][78]. In total, after data selection cuts [Eq. (12)], we use 76 experimental points. We have 63 points from SIDIS measurements collected in π AE and K AE production off the polarized proton target at HERMES [74], off the deuterium target from COMPASS [76], and the 3 He target from JLab [78,79], and h AE data on the proton target from COMPASS [80]. We use 13 points from DY measurements of W AE =Z production from STAR [59] and pioninduced DY from COMPASS [58]. Let us emphasize that the recent 3D binned data [74] from HERMES allowed us to select a sufficient number of data (46 points) from SIDIS measurements. COMPASS and JLab measurements in SIDIS are done by projecting the same data onto x, z, and P hT . In order not to use the same data multiple times and for better adjustment to the TMD factorization limit, we use only P hT projections. The evaluation of the theory prediction for a given set of model parameters is made by ARTEMIDE [81]. The estimation of uncertainties utilizes the replica method [82], which consists of the fits of data replicas generated in accordance with experimental uncertainties. 
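A minimal schematic of this replica procedure is given below; a trivial weighted least-squares fit stands in for the actual ARTEMIDE-based global fit, and all data values are invented placeholders.

```python
# Schematic replica/bootstrap uncertainty propagation: generate pseudodata replicas
# according to the experimental uncertainties, refit each replica, and quote a
# central value with a 68% CI from the resulting distribution. A straight-line
# chi^2 fit stands in for the real global Sivers fit.
import numpy as np

rng = np.random.default_rng(1)

# hypothetical "measurements" y_i with uncorrelated errors sigma_i
x = np.linspace(0.1, 0.6, 10)
sigma = np.full_like(x, 0.05)
y = 0.8 * x - 0.1 + rng.normal(0.0, sigma)

def fit(y_rep):
    """Weighted least-squares fit of y = a*x + b (stand-in for the global fit)."""
    A = np.vstack([x, np.ones_like(x)]).T / sigma[:, None]
    return np.linalg.lstsq(A, y_rep / sigma, rcond=None)[0]

n_replicas = 500
params = np.array([fit(y + rng.normal(0.0, sigma)) for _ in range(n_replicas)])

central = params.mean(axis=0)                      # central-fit-like value
lo, hi = np.percentile(params, [16, 84], axis=0)   # 68% CI band
print("a =", central[0], " 68% CI:", (lo[0], hi[0]))
print("b =", central[1], " 68% CI:", (lo[1], hi[1]))
```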
From the obtained distribution of 500 replicas, we determine the values and the errors on parameters and observables, including, for the first time, propagation of the errors due to the unpolarized TMDs. We use the mean value of the resulting distributions due to the SV19 uncertainty as the central fit value (CF value), which is our best estimate of the true values for the free parameters. The uncertainty is given by a 68% confidence interval (68% CI) computed by the bootstrap method. The resulting replicas are available as a part of ARTEMIDE [83]. We performed several fits with different setups. In particular, we distinguish the fits with and without the inclusion of DY data. We found that the Sivers function extracted in the SIDIS-only fit nicely describes the DY data without extra tuning. Indeed, the N3LO SIDIS-only fit has χ2/Npt = 0.87 and, without any adjustment, also describes the DY data with χ2/Npt = 1.23. The combined SIDIS + DY fit reaches a very good overall χ2/Npt = 0.88 for all 76 DY and SIDIS data points, with χ2/Npt = 0.88 for SIDIS and χ2/Npt = 0.90 for DY. (Fig. 1 caption fragment: HERMES data [74], COMPASS pion-induced DY process [58], and STAR W±/Z data [59]; open symbols mark data not used in the fit; the orange line is the CF and the blue box the 68% CI.) Parameters of the Sivers function resulting from the SIDIS-only and SIDIS + DY fits are compatible with each other [84]. The quality of the data description in the SIDIS + DY N3LO fit can be seen in Fig. 1. We have also performed a fit without the sign change of the Sivers function from Eq. (1) in order to estimate the significance of the sign change from the data. The resulting fit does exhibit tensions between the DY and SIDIS data sets; however, it has χ2/Npt = 1.0 and cannot exclude the same sign of the Sivers functions in DY and SIDIS kinematics. The sign of the sea-quark Sivers function plays the central role here. Indeed, the sign of the DY cross section is mostly determined by the sea contribution due to the favored q + q̄ → W/Z/γ subprocess, whereas the sea contribution in SIDIS is suppressed. Therefore, with the current data precision, a flip of the sign of the Nsea parameter alone is sufficient to describe the data and almost compensates the effect of the overall sign flip [Eq. (1)] at the level of the cross section. Future data from RHIC and COMPASS, together with the EIC and JLab, will allow us to establish the confirmation of the sign change [Eq. (1)]. Extracted Sivers functions.-The extracted Sivers functions in b space for u and d quarks are shown in Fig. 2. One can see that our results confirm the signs of the u-quark (negative) and d-quark (positive) Sivers functions in the middle-x range known from previous analyses [25][26][27][28][29][30][31][32][33][34][35][36][38][39][40], and also show a node for the u quark at large x. We have not explicitly imposed the positivity relation [85] on the Sivers functions because it is only a LO statement and can be violated in higher-order calculations. However, we verified numerically that our results do not exhibit any substantial violation of the positivity bounds. The magnitude of the s- and sea-quark contributions in our fit is substantially different from other extractions, where the biased ansatz f⊥1T(x) ∝ f1(x) is used [27,[29][30][31][32][33][34][35][36],38,39] and the nonvalence contribution is artificially suppressed. In our case, the sea- and s-quark Sivers functions are comparable in size with those of the u and d quarks at x > 0.1 (and vanish at x < 0.1).
Our unbiased extraction of the Sivers function reproduces the large SSAs measured in the DY W±/Z processes, see Fig. 1. Determination of the Qiu-Sterman function.-The Sivers function at small b can be expressed via the operator product expansion (OPE) through the twist-three distributions [65,66,86]. At the OPE scale μ = μb ≡ 2 exp(−γE)/b the NLO matching expression [65] depends only on the QS function and can be inverted. Inverting it, we obtain a relation for the QS function in terms of the Sivers function, in which ȳ = 1 − y, αs is the strong coupling constant, and Tq and G(+) are the QS quark and gluon functions. This expression is valid only for small (nonzero) values of b. We use b ≃ 0.11 GeV−1, such that μb = 10 GeV. The resulting QS functions are shown in Fig. 3. To estimate the uncertainty due to the unknown gluon contribution we allow for G(+) = ±(|Tu| + |Td|). The resulting 68% CI uncertainty band and the comparison to Refs. [39,40] are also shown in Fig. 3. (Fig. 3 caption fragment: the solid black line shows the CF value and the blue band the 68% CI; the light green band is obtained by adding the gluon contribution G(+); results are compared to JAM20 [40] (gray dashed lines) and ETK20 [39] (violet dashed lines).) Conclusions.-In this Letter, we have presented the first extraction of the Sivers function that consistently utilizes previously extracted unpolarized proton and pion TMDs, and uses SIDIS, pion-induced Drell-Yan, and W±/Z-boson production experimental data. The extraction is performed at unprecedented N3LO perturbative precision within the ζ prescription, which allows us to unambiguously relate the Sivers function and the QS function. This relation has been used to obtain the QS function and to evaluate the influence of the unknown gluon QS function. We also examined the significance of the predicted sign change of the Sivers functions in SIDIS and DY processes. Our results compare well in magnitude with the existing extractions [25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40] and confirm the signs of the Sivers functions for u and d quarks, while we also obtain non-negligible Sivers functions for antiquarks. We will study the impact of future Electron-Ion Collider data on the knowledge of the Sivers function in a future publication. Our results set a new benchmark and standard of precision for studies of polarized TMD functions and will be important for theoretical, phenomenological, and experimental studies of the 3D nucleon structure and for the planning of experimental programs.
3,950.8
2021-03-17T00:00:00.000
[ "Physics" ]
Laser Control of Dissipative Two-Exciton Dynamics in Molecular Aggregates There are two types of two-photon transitions in molecular aggregates, that is, non-local excitations of two monomers and local double excitations to some higher excited intra-monomer electronic state. As a consequence of the inter-monomer Coulomb interaction these different excitation states are coupled to each other. Higher excited intra-monomer states are rather short-lived due to efficient internal conversion of electronic into vibrational energy. Combining both processes leads to the annihilation of an electronic excitation state, which is a major loss channel for establishing high excitation densities in molecular aggregates. Applying theoretical pulse optimization techniques to a Frenkel exciton model it is shown that the dynamics of two-exciton states in linear aggregates (dimer to tetramer) can be influenced by ultrafast shaped laser pulses. In particular, it is studied to what extent the decay of the two-exciton population by inter-band transitions can be transiently suppressed. Intra-band dynamics is described by a dissipative hierarchy equation approach, which takes into account strong exciton-vibrational coupling in the non-Markovian regime. Introduction Molecular aggregates continue to provide inspiration and challenges to experiment and theory [1,2,3]. In terms of a microscopic understanding of the dynamics of elementary excitations recent advances due to time-resolved nonlinear spectroscopy have been tremendous [4,5]. However, despite the success of laser pulse control in rather different areas of research [6], relatively little emphasis has been put on exciton dynamics. On the experimental side, feedback laser control has been applied to manipulate the branching ratio between internal conversion (IC) and energy flow in the light-harvesting antenna of purple bacteria [7]. There has been a number of simulations for light-harvesting systems by May and Brüggemann et al. where the focus was put on the transient generation of a single localized excitation within the aggregate [8,9,10,11]; non-biological aggregates have received no attention so far. The quantum dynamics of excitons is well founded within the Frenkel exciton approach, which starts from a classification of aggregate's excitation states in terms of the number of simultaneously excited monomers [12,13]. Under weak irradiation conditions only a single excitation will be present, but in nonlinear optical experiments or in the presence of strong irradiation multiple excited states play an important role. The organic building blocks of molecular aggregates have a multitude of electronically excited states, i.e. there is always a state S n such that the S 0 -S 1 excitation energy approximately matches that of a S 1 -S n transition. As a consequence one needs to distinguish between local double excitations (LDE) and nonlocal double excitations (NDE) where the two excitations are localized at different monomers. Note that the NDE should not be confused with bi-exciton states, i.e. bound states formed by two excitons in the presence of different permanent dipoles in the involved electronic states [13]. LDE and NDE can couple via the Coulomb interaction. The manifestation of this coupling in nonlinear spectroscopy has been investigated in Refs. [14,15,16]. 
The presence of LDEs has important consequences for the exciton dynamics since it leads to exciton-exciton annihilation (EEA) by virtue of an intramolecular IC triggered by nonadiabatic electronic transitions. The presence of this process in molecular aggregates is well established [17,18,19] and various theoretical approaches exist [20,21,22,23,24], although it must be stated that a first principles determination of the respective IC rates is still out of reach. Understanding exciton dynamics in aggregates is impossible without taking into account the effect of exciton-vibrational coupling [3,12,25]. Here one can distinguish between approaches which either account for all vibrations on the same footing, i.e. in terms of a heat bath, or treat a few active vibrational degrees of freedom explicitly, but coupled to the heat bath of the remaining modes. Needless to say, the latter approach is more demanding as the dimension of the density matrix increases rapidly with the number of explicit modes. Early investigations along these lines therefore have been restricted to molecular dimer models with one active vibrational coordinate per monomer [26,27,28,29]. More recent approaches include a Green's function description of exciton dynamics [30], a multi-configuration time-dependent Hartree simulation [31] or the Frenkel excitonic polaron treatment [32]. These methods cannot treat finite temperature effects due to the coupling to a further heat bath in a microscopic manner. The coupling of the electronic degrees of freedom to a single or site-specific heat bath is usually treated using dissipation theory [3]. Recently, the so-called hierarchy equation (HE) method [33,34,35,36,37,38,39] has enjoyed great popularity as it promises a nonperturbative and non-Markovian description of exciton dynamics, which is numerically equivalent, e.g. to the influence functional approach within a path integral formulation [40]. In the present context one should note that a distinction between different types of modes, i.e. low-frequency solvent modes versus high-frequency intramolecular vibrations, can be introduced via the spectral density. This is of particular relevance since the chromophores used to assemble artificial molecular aggregates usually show vibronic side bands in their absorption spectra pointing to the prominent role of intramolecular high-frequency vibrations [41]. The focus of the present contribution is on the laser control of dissipative twoexciton dynamics in linear aggregates. Thereby we take into account the effect of EEA and of a coupling to a heat bath being composed of an effective high-frequency mode as well as of a continuous distribution of low-frequency modes. For the solution of the density matrix equations of motion we will use a HE approach. Two-exciton populations are usually quenched by EEA. Since the latter is a local process we will ask the question whether an excitonic wave packet can be prepared such as to transiently suppress EEA by its composition in terms of LDE states. In the following Section 2 we will outline the density matrix approach and its interface to optimal control theory (OCT). Section 3 will start with a discussion of the field free dynamics focussing on the comparison between the Markovian and non-Markovian limits. Subsequently, laser driven dynamics is discussed. The results are summarized in Sec. 4. 
Frenkel Exciton Hamiltonian Consider an aggregate composed of N monomers, each carrying three adiabatic electronic states (a = g, e, f ) corresponding to the S 0 , S 1 , and some S n state. Thus we have the adiabatic electronic basis of local states |m, a , m = 1, . . . , N . The electronic states of the aggregate can be classified as the zero excitation (ground) state [3] the singly-excited state |m = |m, e Π n =m |n, g , the doubly-excited (LDE and NDE) states |mn = |m, e |n, e Π k =m,n |k, g . Restricting ourselves to these excitation the completeness relation reads The electronic Hamiltonian can be written aŝ with the bare exciton Hamiltonian, being composed of the ground state part where E 0 is the electronic ground state energy, the single-exciton Hamiltonian and the two-exciton Hamiltonian Here, E m,e and E m,f are the energies of electronic excitation at site m for S 0 -S 1 and S 0 -S n , respectively. Further, J mn is the coupling between site m and site n leading to single exciton transfer and J (ef) mn is the Coulomb coupling responsible for the two-exciton dynamics. It is customary to introduce Frenkel exciton eigenstates which follow from the diagonalization of the exciton Hamiltonian The eigenstates can be decomposed in terms of local excitation states, |a (a = 0, m, mm, mn), as follows The eigenstates separate into M-exciton manifolds (here M = 0, 1, 2). Frequently, we will also use a notation where the states in the M-exciton manifolds are counted according to increasing energy, i.e. |M k . In Eq. (6) the aggregate is assumed to interact with an external laser field treated in dipole approximation, i.e. where the field E(t) is oriented parallel to the transition dipoles. In the local Frenkel exciton basis the dipole operator for the aggregate readŝ µ = m µ m,e |m 0| + µ m,f |mm m| + n =m µ n,e |mn m| + h.c.. (14) Here, µ m,e (µ m,f ) is the transition dipole for S 0 -S 1 (S 1 -S n ) transition of monomer m. Finally, we consider the IC process described byĤ IC in Eq. (6). IC has its origin in the breakdown of the Born-Oppenheimer approximation, e.g. at conical intersections. The respective non-adiabaticity operator triggering transitions between adiabatic electronic states is proportional to the momentum operatorP m,ξ of the involved nuclear degrees of freedom (coordinatesQ m,ξ ) counted by the index ξ at monomer m. The non-radiative life time of the S 1 state is usually in the nanosecond range. Therefore we will restrict ourselves to the IC between adiabatic states |m, f and |m, e . Hence the coupling becomeŝ m,ξP m,ξ , In the following we will assume that the coupling strength is site-independent, i.e. g (IC) m,ξ = g (IC) ξ Exciton-Vibrational Coupling: System-Bath Model Exciton-vibrational coupling leading to intra-band phase and energy relaxation is introduced in the spirit of the system-bath approach, i.e. we havê The bath modes are treated in harmonic approximation, i.e.Ĥ vib is the standard harmonic oscillator Hamiltonian. Concerning the system-bath coupling we will distinguish between two types of modes (see also Ref. [42]). First, local high-frequency modes, {q m,ξ }, which usually give rise to a vibrational progression in the monomer absorption spectrum [41]. Second, global low-frequency modes, {x ξ }, contributing via a continuous spectrum. Hence we havê For the coupling to the local high-frequency modes we will use the model Hamiltonian m,ξ,f |mm mm| m,ξ,e |mn mn| . In the following we will assume that the coupling is the same for all monomers, i.e. 
g For the coupling to the low-frequency modes we will invoke the same approximations and arrive at the Hamiltonian 2.3. Dissipation Models 2.3.1. Internal Conversion The IC rate between S n and S 1 states is rather large for typical chromophores, reaching time scales on the order of about 100 fs [44]. On this time scale the S n population decays into the S 1 state, thereby passing many electronic states (for the case of perylene bisimides n would be of the order of about 30 [41]). Hence it is justified to assume that the memory time associated with the IC process is even shorter than 100 fs and EEA can be treated in Markovian approximation. Treatinĝ H IC in Eq. (15) as a perturbation of the bare exciton Hamiltonian within second order perturbation theory one can write the contribution to the Quantum Master Equation for the reduced exciton density operatorρ in terms of the Lie operator R (IC) which operates onρ as [3] i HereΞ m is defined aŝ and α m,ξP m,ξ of Eq. (15) (note that we assume that the momenta at different sites are uncorrelated). By using the completeness relation for the eigenstates, one haŝ Here, J Separation of Time Scales in the Response Function of the Oscillator Bath In general the influence of the bath degrees of freedom due to exciton-vibrational coupling cannot be treated in Markovian approximation. It is fully characterized by the response function, α(t), of the bath, which in turn is determined by the spectral density function J(ω) Here, g ξ , µ ξ , and ω ξ are the coupling constant, reduced mass, and frequency, respectively, of the ξth oscillator and β = 1/k B T . In the following we will specify the spectral density to the coupling models discussed in Section 2.2. First, we consider the case of the local high-frequency mode, Eq. (18) for which we assume a damped Brownian oscillator model to hold. This can be described by the following spectral density [45] Below we will assume the underdamped limit, where the central mode frequency ω 0 is much larger than the cutoff Λ H . For this spectral density the response function can be expanded into a sum of exponentials as follows where is the kth Matsubara frequency of the bath, and N H is the smallest integer satisfying ν N H +1 Λ H . As discussed in Ref. [33] the response function can be split into two parts: a long memory contribution (the first two terms in the last line of Eq. (27)) and a short memory part α (short) H (t). The latter one contains all the terms with short memory in the Matsubara summation and will be treated in Markovian approximation within the hierarchy equations to be discussed below. The low-frequency bath will be described by the Debye spectral density where η L is the coupling strength and Λ L is the frequency cutoff of the bath. The response function in Eq. (25) then can be expressed as follows where ν 0 = Λ L , ν k = 2πk/( β)(k > 0) is the kth Matsubara frequency of the bath, and N L is the smallest integer satisfying ν N L +1 Λ L . The same time scale separation has been introduced as for the high-frequency bath. Hierarchy Equations for the Dissipative Multi-exciton Dynamics In order to describe the non-Markovian and non-perturbative exciton dynamics we will utilize a HE approach to propagate the reduced exciton density matrix in eigenstate representation. Specifically we have employed the stochastic decoupling procedure due to Shao and coworkers which is briefly sketched in the Appendix [37,38]. 
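As an illustration of the sum-of-exponentials representation of the bath response function that underlies the hierarchy-equation treatment, the sketch below evaluates a Debye spectral density and compares its truncated Matsubara expansion with a direct quadrature of the response function. The conventions used here (J(ω) = 2λΛω/(ω² + Λ²), ħ = 1) and all numerical values are common textbook choices and may differ from the normalization adopted in this work.

```python
# Sketch: Debye (low-frequency) spectral density and the expansion of the bath
# response function into decaying exponentials with Matsubara frequencies.
# Conventions follow common HEOM literature; parameters are illustrative only.
import numpy as np
from scipy.integrate import quad

lam, Lam = 100.0, 100.0          # coupling strength and cutoff (cm^-1)
kT = 208.5                       # k_B T at ~300 K in cm^-1
beta = 1.0 / kT

def J(w):                        # Debye spectral density
    return 2.0 * lam * Lam * w / (w**2 + Lam**2)

def alpha_exact(t):              # response function by direct quadrature
    re = quad(lambda w: J(w) / np.pi * np.cos(w * t) / np.tanh(beta * w / 2.0),
              1e-6, 1e4, limit=600)[0]
    im = -quad(lambda w: J(w) / np.pi * np.sin(w * t), 1e-6, 1e4, limit=600)[0]
    return re + 1j * im

def alpha_series(t, n_matsubara=20):     # sum-of-exponentials representation
    nu = 2.0 * np.pi * np.arange(1, n_matsubara + 1) / beta   # Matsubara frequencies
    c0 = lam * Lam * (1.0 / np.tan(beta * Lam / 2.0) - 1j)    # slow (long-memory) term
    ck = 4.0 * lam * Lam / beta * nu / (nu**2 - Lam**2)       # Matsubara coefficients
    return c0 * np.exp(-Lam * t) + np.sum(ck * np.exp(-nu * t))

# the two representations should roughly agree for t > 0 (units of 1/cm^-1)
for t in (0.002, 0.005, 0.02):
    print(t, np.round(alpha_exact(t), 1), np.round(alpha_series(t), 1))
```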
Since the HE approach starts from the influence functional [33,46], it is based on the fact that the response function for bath is written as a sum of exponentials (see previous section). The resulting HEs of motion for the dissipative exciton dynamics of the aggregate are given by where A is an N × 2 integer matrix, with its matrix element A mj corresponding the contribution from Ω j in m-th molecule (Eq. (27)). B is an N × N H integer matrix with its matrix element B mj counting the j-th Matsubara frequency in the m-th molecule (Eq. (27)). And V is an N L + 1 dimensional integer vector. V 0 refers to the cutoff frequency and V j (j = 0) to the j-th Matsubara frequency in the low frequency bath (Eq. (29)). The operator " + / − " is defined as M ± mj nl = M nl ± δ mn δ jl for the matrix and V ± j k = V k ± δ kj for the vector. R is the Redfield super-operator accounting for the exciton-exciton annihilation and the short memory effects of the high and low frequency environments, [·, ·] denotes the commutator and {·, ·} denotes the anti-commutator. For the definition of α k,± and c (H) 0 , see Appendix. In the above equations, the first termρ 000 is the reduced density matrix for the exciton and the other elements reflect the finite memory effect due to the excitonvibration coupling. The initial condition for the hierarchy isρ 000 (0) =ρ(0) and ρ ABV (0) = 0 if any of the elements of A, B and V is nonzero. In the Markovian limit, the HEs are reduced to the Quantum Master Equation where R is the Redfield superoperator Here R (IC) is the superoperator accounting for the IC as defined in Eq. (21).Ξ (tot) andΞ m are the dissipative operators defined in the similar way as Eq. (22) for the low frequency and high frequency bath in m-th molecule, respectively. Optimal Control Theory For the design of laser fields for driving the exciton dynamics we will employ optimal control theory (see, eq. Ref. [47]; the present implementation follows Ref. [3]). Here the goal is to find a laser field E(t) such as to optimize a target at a certain time t = T . This can be cast into an optimization problem for the functional whereÔ is the projection operator onto the target state, λ is the penalty factor for strong fields. In this work λ is treated as a Lagrangian multiplier such as to keep the integrated intensity close to I 0 . To simplify matters we will not employ the HE approach at this point but resort to the Markovian approximation, which leads to a set of two coupled equations. Optimizing the functional J with respect to the field one gets the expression whereσ(t) is an auxiliary operator propagating backward in time, starting fromÔ at t = T with the following differential equation Eqs. (31) and (35) The fields obtained by the OCT equations will be characterized by their XFROG (cross-correlation frequency-resolved optical gating [48]) trace defined as where G(t) is the gate function having the form In the above equation τ is the width of the gate and ∆ is the width of the shoulder. Parameters We will apply the formalism outlined in Section 2 to a model mimicking the situation in perylene bisimide aggregates. The photophysical properties of PBI monomers have been studied in Ref. [41]. As far as the present model is concerned the following properties will be used: The S 0 -S 1 transition energy is E m,e = 2.13 eV. The monomer S 0 -S 1 transition dipole moment is 3.34 ea 0 . This transition is coupled to an effective vibrational mode of frequency 1415 cm −1 with a Huang-Rhys factor of 0.44. 
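A minimal sketch of how an XFROG trace of the kind defined above can be computed is given below; a simple two-color Gaussian pulse and a Gaussian gate stand in for the OCT-optimized field and for the gate function described in the text, so only the construction of the trace itself is illustrated.

```python
# Sketch of an XFROG (cross-correlation frequency-resolved optical gating) trace:
# the windowed Fourier transform |int dt E(t) G(t - T) exp(i w t)|^2 of a control
# field. The two-color Gaussian pulse below is a stand-in for an optimized field.
import numpy as np

t = np.linspace(-100.0, 100.0, 2001)          # time grid (fs)
HBAR = 0.6582                                 # eV * fs
w1, w2 = 2.06 / HBAR, 1.88 / HBAR             # carrier frequencies (rad/fs)

def field(t):                                 # stand-in for the optimized field E(t)
    env1 = np.exp(-((t + 20.0) / 15.0) ** 2)
    env2 = np.exp(-((t - 20.0) / 15.0) ** 2)
    return env1 * np.cos(w1 * t) + 0.7 * env2 * np.cos(w2 * t)

def xfrog(E, t, gate_width=10.0, n_delay=80, n_freq=200):
    delays = np.linspace(t[0], t[-1], n_delay)
    freqs = np.linspace(2.0, 4.0, n_freq)     # rad/fs window around the carriers
    trace = np.empty((n_delay, n_freq))
    dt = t[1] - t[0]
    for i, T in enumerate(delays):
        gated = E * np.exp(-((t - T) / gate_width) ** 2)      # gate G(t - T)
        for j, w in enumerate(freqs):
            trace[i, j] = abs(np.sum(gated * np.exp(1j * w * t)) * dt) ** 2
    return delays, freqs, trace

delays, freqs, trace = xfrog(field(t), t)
i, j = np.unravel_index(trace.argmax(), trace.shape)
print("strongest signal at delay %.1f fs, frequency %.2f eV"
      % (delays[i], freqs[j] * HBAR))
```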
Further we used κ = 1 in Eq. (19). While the effective mode captured the vibrational side band observed in the experiment, it could not account for the general line broadening, which amounts to 1110 cm −1 thus indicating substantial coupling to a continuous distribution of low-frequency bath modes. Hence, for the local high-frequency mode we take in the spectral density, Eq. The situation is more complicated for the monomeric excited state absorption (ESA). On the experimental side, the ESA spectrum has been obtained from a pumpprobe spectrum, recorded after equilibration in the S 1 state. Since the procedure requires knowledge about ground state bleach and stimulated emission spectra at a detail which cannot be obtained, the extracted ESA spectrum may contain spurious features. On the theoretical side calculating highly excited states for a molecule as large as PBI is a challenge, in particular because standard time-dependent density functional theory will fail to describe transitions of double excitation character. In Ref. [41] we applied the multireference variant of density functional theory to a reduced PBI system. Combining experimental and theoretical data it can be stated that (i) the excited state responsible for the S 1 -S n transition is at n > 20. In other words, the internal conversion down to the S 1 state proceeds via a dense manifold of electronic states, whose microscopic description is out of reach. This suggests using a description in terms of an effective internal conversion rate as outlined in Section 2. For the actual value we have used an inverse rate of 100 fs. Within the spectral density model, Eq. (24), this amounts to choosing Λ IC =1000 cm −1 and η IC = 0.4. (ii) Since reliable information on the transition dipole moment µ m,f are not available we will use the value obtained within the harmonic oscillator picture, i.e. µ m,f = √ 2µ m,e [15]. (iii) Concerning the anharmonicity ∆ m = E m,f − 2E n,e we first note that in the range where E m,f ≈ 2E n,e two transitions have been observed. Since the magnitude of ∆ m decides about the mixing between local and nonlocal double excitation states [15] we will consider two cases. In case A the anharmonicity is large, ∆ m = −0.26 eV, whereas in case B it is small, ∆ m = −0.04 eV. In Ref. [41] a PBI monomer has been investigated only. The Coulomb coupling between adjacent monomers in the PBI aggregate has been estimated from the experimentally observed absorption line shifts upon aggregation. It has also been calculated in Ref. [50]. Below we will use the calculated value of J mn = −515 cm −1 and only nearest neighbor coupling will be considered. The coupling between excited state transitions is not known and we will again use the harmonic oscillator approximation giving J Figure 2. Population dynamics in the field-free case after initial population of a specific two-exciton eigenstate. Shown is the total population of the two-exciton manifold in comparison with the monomer case. Figure 3. Population dynamics of the dimer in the field-free case after initial population of the highest state of the two-exciton band. Shown are the populations of the one-and two-exciton eigenstates. N A/N B). The one-and two-exciton eigenstates are shown in Fig. 1 Population Dynamics in Field-Free Case In Fig. 2 we compare the internal conversion dynamics for the different models as a function of the initial preparation of the two-exciton manifold. 
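To make the exciton-level structure concrete, the sketch below assembles the bare one-exciton block of the Frenkel Hamiltonian for a short linear aggregate from the quoted PBI parameters (site energy 2.13 eV, nearest-neighbour coupling −515 cm⁻¹) and diagonalizes it; vibrational coupling, two-exciton states, and disorder are deliberately omitted.

```python
# Toy sketch: one-exciton block of the Frenkel Hamiltonian for a linear PBI-like
# aggregate with the parameters quoted in the text. Vibrational and two-exciton
# couplings are omitted, so the eigenvalues only illustrate the bare exciton band.
import numpy as np

CM_TO_EV = 1.0 / 8065.54      # 1 cm^-1 in eV
E_e = 2.13                    # S0-S1 site energy (eV)
J = -515.0 * CM_TO_EV         # nearest-neighbour Coulomb coupling (eV)

def one_exciton_hamiltonian(n_sites):
    H = np.diag([E_e] * n_sites).astype(float)
    for m in range(n_sites - 1):              # nearest-neighbour coupling only
        H[m, m + 1] = H[m + 1, m] = J
    return H

for n in (2, 3, 4):                           # dimer, trimer, tetramer
    eigvals = np.linalg.eigvalsh(one_exciton_hamiltonian(n))
    print(f"N = {n}: one-exciton energies (eV) =", np.round(eigvals, 3))
```

With these inputs the lowest one-exciton level comes out near 2.07 eV for the dimer and 2.03 eV for the tetramer, roughly in line with the one-exciton transition energies quoted in the laser-control discussion below; exact agreement is not expected since vibrational reorganization is neglected here.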
Also shown is the decay of the monomer LDE state, which is always faster than that for the aggregate. Furthermore, it is found that the decay in case B is always slower as compared to case A, with the difference becoming more apparent upon increasing the aggregate size. This observation can be traced to the fact, that the mixing of LDE and NDE states in case B leads to a different intra-band dynamics and in particular to a slow-down of the interband dynamics as can be seen from Fig. 3. From this figure we notice that even though the decay of the initial state population is faster in case B, the population gets trapped for a longer time in state |2 1 , i.e. at the bottom of the two-exciton band. In case A the state |2 1 is of more local character as compared with case B and therefore inter-band relaxation is faster. Similar arguments apply to the aggregates with N = 3 and 4. So far we have presented results from the HE simulation. In Fig. 4 HE and the Markovian dynamics if the system is prepared in the uppermost two-exciton state. In order to scrutinize this effect we have plotted the population dynamics of the two-exciton eigenstates in Fig. 5. First, we notice that in general the populations from the HE are showing an oscillatory behavior, which is not present in the Markovian approximation. Despite this difference in details, the total two-exciton population is rather similar in the two limits. Only for case A and for initial preparation in state |2 3 we notice a marked deviation for the dimer considered in Fig. 5. In the HE case state |2 3 relaxes rapidly and states |2 2 and |2 1 become populated before the overall decay due to IC sets in. Notice that the energy gap between states |2 3 and |2 2 amounts to 2518 cm −1 . The spectral density for the system-bath interaction is composed of a lowfrequency part with cut-off at 100 cm −1 and a high-frequency contribution at 1415 cm −1 . Hence relaxation within second order perturbation theory as implied in the Markovian, i.e. truncated HE, approach will be very inefficient due to the smallness of the spectral density. The HE, on the other hand, accounts for higher order effects and yields a rapid energy relaxation. Notice that in case of model B this energy gap is only 1203 cm −1 , i.e. within the frequency range covered by the spectral density. According to the argument given one would not expect a marked difference between HE and Markovian dynamics in case B, what is in line with the results shown in Fig. 4. Inspecting Fig. 1 we note that a similar energy gap exists also for larger aggregates what leads to a corresponding behavior of the population dynamics in Fig. 4. However, upon further increase of the aggregate size this gap closes and it can be expected that HE and Markovian dynamics behave rather similar at least from the total population point of view. In passing we note that ultrafast inter-band relaxation suppresses the effect of excitonic-polaron formation, which, however, will play a role in the one-exciton manifold (see, Ref. [39]). Laser Control of Two-Exciton Dynamics In the following we will investigate the possibility to control two-exciton dynamics with ultrashort shaped laser pulses. In particular we will ask the question whether one can transiently suppress EEA. 
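Before turning to the laser-driven dynamics, the relaxation argument above (second-order, Markovian rates track the spectral density at the transition frequency) can be checked roughly in numbers. The damping width and coupling strengths in the sketch below are illustrative guesses, not the fitted model parameters.

```python
# Rough numerical check: a two-exciton gap of ~2518 cm^-1 lies far outside the
# bath support (Debye cutoff ~100 cm^-1 plus an underdamped mode near 1415 cm^-1),
# whereas ~1203 cm^-1 sits much closer to the high-frequency peak, so second-order
# rates differ strongly. Widths and couplings below are illustrative guesses only.
import numpy as np

def J_debye(w, lam=100.0, cutoff=100.0):
    return 2.0 * lam * cutoff * w / (w**2 + cutoff**2)

def J_brownian(w, S=0.44, w0=1415.0, gamma=100.0):
    # underdamped Brownian oscillator; S is the Huang-Rhys factor quoted in the text
    lam = S * w0                                  # reorganization energy (cm^-1)
    return 2.0 * lam * w0**2 * gamma * w / ((w**2 - w0**2)**2 + gamma**2 * w**2)

for gap in (1203.0, 2518.0):                      # case B vs case A dimer gap (cm^-1)
    Jgap = J_debye(gap) + J_brownian(gap)
    print(f"gap = {gap:.0f} cm^-1 : J(gap) ~ {Jgap:.1f} cm^-1")
```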
Since EEA is a local process one might argue that a two-exciton wave packet, which is prepared in a way such that the contribution coming from LDE is minimized will lead to a slow down of EEA since the latter is mediated by intra-band relaxation processes mixing LDE and NDE states. For this purpose we will use the target state |1N within the OCT scheme, i.e. the state corresponding to the situation where the two NDEs have the largest separation in real space. The fastness of the IC process puts some restriction on the pulse duration which has been set to T = 50 fs. As mentioned in Sec. 2.5 the field-optimization will be performed in the Markovian limit, starting with a very broad band pulse. The field is then used to generate the dynamics using the HE approach. In Fig. 6 we show the optimized fields (XFROG) for cases A and B together with the population dynamics of target states as well as the exciton eigenstates for the dimer model. Since dipole transitions are possible between adjacent exciton bands only, the dynamics necessarily involves a two-step process. First, the one-exciton band is excited (transition frequency ω 1 1 ,0 = 2.06 eV) and this process is followed by a one-to twoexciton transition. In case A this way a superposition of states |2 1 and |2 3 is prepared (transition frequencies ω 2 1 ,1 1 = 1.88 eV, ω 2 3 ,1 1 = 2.06 eV), which, however, rapidly decays due to intra-and inter-band relaxation processes. For case B, where the LDE and NDE states are strongly mixed, the compromise found by the OCT equations has been to prepare just state |2 1 which has a 42 % overlap with the target state. Although state |2 3 would have had a larger overlap, its transition dipole matrix element is about a factor of 13 smaller for that case (transition frequencies: ω 2 1 ,1 1 = 2.04 eV, ω 2 3 ,1 1 = 2.30 eV). Similar to the scenario of the field free dynamics in Fig. 3, the overall decay of the two-exciton population is slower in case B as compared to case A. Finally, we notice that the maxima of the XFROG traces do not match with the bare exciton transitions in all cases, since superposition states are prepared by the broadband excitation. Further, exciton-vibrational coupling and the dynamic Stark shift will modify the bare excitonic resonances. One might argue that in the dimer EEA will always be very effective since the NDE are localized at neighboring sites. This situation might change for larger aggregates. As an example we consider the tetramer model in Fig. 7. The convergence of the OCT algorithm is rather slow in this case and for the given constraints only a small population of the target state can be reached. In both cases the pulse initially excites the lowest transition of the one-exciton band ( ω 1 1 ,0 = 2.03 eV). For case A the subsequent oneto two-exciton transition populates mainly state |2 5 ( ω 2 5 ,1 1 = 2.12 eV) which has the largest overlap with the target state (32 %). In case B state |2 1 is dominantly populated ( ω 2 1 ,1 1 = 2.01 eV) although it has only a small overlap with the target state (3%). However, as compared with state |2 4 , which has a 37 % overlap with the target state, the transition dipole moment to state |2 1 is larger by a factor of about 5. Overall, the comparison between cases A and B resembles again Fig. 6. Comparing the dimer and tetramer cases one notices that the initial hypothesis that the longer the aggregate the longer the time scale during which one can maintain a two-exciton population does not hold in general. 
Instead the difference between cases A and B points to the importance of mixing between LDE and NDE states within the two-exciton band. However, focussing on case B only there is the anticipated increase of the time-scale of transient two-exciton state population. In this case the optimized pulse populates the lowest state of the two-exciton manifold whose overlap with the target state decreases with increasing system sizes due to the "dilution" of the zeroorder excitation states. For the same reason, however, the overlap of state |2 1 with LDE states decreases yielding a longer time scale for the two-exciton decay. Next we comment on the use of pulses derived by using the OCT equations and Markovian dynamics within the HE scheme. Inspection of the different cases shows that the resulting population of the target state is comparable in the two cases, i.e. this procedure does not lead to a degradation of control for the given pulse shape. Needless to say that in line with the discussion of Fig. 4 case A case B Figure 8. Comparison of laser-driven dynamics for the dimer models using the pulses obtained from the OCT equations (red) and a fit of these pulses to two Gaussians (2G, black). The lower panels show the resulting target and two-exciton manifold populations. HE might result in different pulses. However, in view of the difficulties arising from the short life time of the two-exciton population this will have little relevance for the present discussion. We need to emphasize that our model is limited to the two-exciton space. In principle for the given field intensities it is likely that higher exciton manifolds could be excited as well (for a study of intensity-dependent multi-exciton dynamics within a Bloch model, see Ref. [52]). The expected rapid decay of these states would lead to an additional channel for populating the two-exciton manifold, what could in principle influence the details of the dynamics. Needless to say, that this effect could be suppressed by reducing the field intensity, but at the expense of the population of the two-exciton manifold. Finally, the question arises whether all the details of the control pulses play a role or in other words how robust is the achieved population control and can the pulse be simplified. Exemplarily, in Fig. 8 we show a comparison of the dynamics obtained for the OCT derived pulses (cf. Fig. 6) and their fits to two Gaussian shaped pulses for the dimer model. For case A we observe a strong sensitivity, i.e. the two-exciton population drops down considerably when using two Gaussian pulses. Apparently, the preparation of the |2 1 and |2 3 superposition state depends on the details of the control field. In contrast, in case B where dominantly the state |2 1 is prepared a simple Gaussian-shaped field is sufficient to achieve a control comparable to the optimized pulse. Summary Two-exciton dynamics has been studied on the basis of a non-perturbative and non-Markovian hierarchy equation approach. The dissipative dynamics has been combined with optimal control theory for obtaining laser fields designed such as to trigger longlived two-exciton state populations. Specific results have been obtained for short aggregates made of J-aggregate forming perylene bisimide dyes for which Frenkel exciton parameters are available [41]. Establishing a two-exciton population one has to compete with the very efficient internal conversion, which is a consequence of the breakdown of the Born-Oppenheimer approximation. 
For the monomer this process takes place on a time scale of about 100 fs, which therefore sets the upper bound for laser control. The fact that two-exciton populations can be maintained on a longer time scale is due to the mixing between local and non-local double excitations of the aggregate; the latter do not decay on an ultrafast time scale. The considered dyes support two possible double excitation states in the relevant energy range. Therefore, we considered two scenarios corresponding to small and substantial mixing between the two type of aggregate excitations. It turned out that the case of strong mixing allows for maintaining a twoexciton population on a longer time scale (in the present cases about 1 ps as compared with 0.5 ps for the weak mixing case). This effect should be observable in a pump-probe experiment. At this point it should be noted that two-exciton populations in aggregates made of organic dyes like PBI have been observed for the first time only very recently [53], what demonstrates feasibility of preparation and spectral identification. The present investigation highlights the importance of zero-order state mixing within the two-exciton band for the inter-band transitions. The more diluted the zeroorder states the more the decay is slowed down. However, this situation might change if static disorder is taken into account which will lead to a localization of the two excitations on different parts of the aggregate. Appendix: Derivation of the Hierarchy Equations of Motion For the Caldeira-Leggett model of dissipation, Eq. (19), withĤ S−B = f (ŝ)g(b), one can employ a stochastic procedure to derive the equation of the motion for the reduced density matrix of the relevant systemρ if the whole system is prepared as a factorized state, i.e.ρ tot (0) =ρ(0) exp(−βĤ B )/Tr[exp(−βĤ B )] [37,38]. Within this approach, the influence of the bath is completely characterized by its response function and its effect is described by a random forceḡ(t) in the Itô stochastic differential equation where w 1,t , w 2,t are two independent complex-valued Wiener processes and w * 1,t , w * 2,t are their complex conjugates. In Eq. (A.1)ḡ(t), which plays the same role as the influence functional in the path integral treatment, can be regarded as a stochastic field fully characterizing the influence of the environment. It is defined as For simplicity, here it is assumed that the response function of the bath is where Ω 1 = Ω * 2 (cf. Section 2.3.2). Note that the extension to multiple exponentials is straightforward. For a more general form of the response function consult Ref. [38,33]. = |b 1 b 2 |. Notice that using a scaling like in the above equation has been suggested by YiJing Yan and co-workers [36,51]. The present prefactor is slightly different from that suggested in Refs. [36,51], where the numerator was set equal to one. This choice will keep the terms in the same order size consistent in cases where the decay constants in the response functions are rather different. Carrying out the stochastic average for the random variables in Eq. (A.1), elementary stochastic calculus yields the differential equation forρ mn (t) This equation needs to be solved for the initial conditionsρ 00 (0) =ρ(0) andρ m,n (0) = 0 (m = 0 or n = 0). Application to the present system-bath model yields the equations of motion, Eq. (30).
8,113
2012-04-25T00:00:00.000
[ "Physics", "Chemistry" ]
Acehnese's Digital Literacy Skills in Verifying News from Social Media Communication technology has recently become a new medium where society can choose information and the media they want to consume. The media can still influence their minds. Based on the National Survey of Internet User Penetration Data, in March 2019, by APJII (Internet Service Provider Association), the internet penetration rate in Aceh was recorded at 64.40% who accessed the internet. This research aims to find Acehnese's Digital Literacy Skills in Verification News from Social Media. The data collection method used for this study is a questionnaire instrument via Google Form to people in Nanggroe Aceh Darussalam (NAD) province. Meanwhile, secondary data was collected through literature studies and previous research. The result shows that the people of Aceh, especially millennials and Generation Z, have enough digital literacy skills to access digital media, understand content, and verify news. Introduction Communication technology has recently become a new medium.Meanwhile, broadcast media such as radio, television, internet, and print media have been distributed on various platforms and are well-received (Dal Zotto & Lugmayr, 2016).The media plays a vital role in building public understanding of information (Cox, 2013;Hamid et al., 2014).The media is also believed in various ways to influence the formation of understanding and awareness of information in the minds of individuals and society.However, nowadays, society can choose information and the media they want to consume.The media can still influence their minds.The premise of the media is still recognized as a form of personal understanding of what is happening around them. In terms of content, digital platforms are not only a means of delivering helpful information.Digital platforms have also become fertile ground for producing and reproducing 'useless' messages that range from the 'softest' such as hidden messages to dangerous messages that are intentionally designed to create turmoil and chaos in society in the form of hate speech (Fauzi et al., 2019).In this new situation, media literacy is not enough.A new type of media literacy is needed, namely critical literacy.Critical literacy is the ability to analyze, evaluate, and produce printed, oral, and visual forms of communication (Kress, 2003).Based on data from (Dewan Pers, 2019), the number of press companies verified both administratively and factually was 26 consisting of seven print media and 19 cyber media.Ten of 26 media that had been administratively and factually verified consisted of six print media and four cyber media.The 16 other media were administratively verified media consisting of six print media and 20 cyber media. 
In Nanggroe Aceh Darussalam province, the number of internet users stood ar about 5,135,100, accounting for 3.00% of the total number of internet users in Indonesia which reached 171.17 million.Based on the National Survey of Internet User Penetration Data, in March 2019, by APJII (Internet Service Provider Association), the internet penetration rate in Aceh was recorded at 64.40% who accessed the internet.Meanwhile, 35.60% have no access to the internet.BPS (Central Bureau of Statistics) data also states that 83.64% of Aceh internet users access the internet for activities on social media.Furthermore, 67.51% of them access the internet for information or news, 549.44% for entertainment, 34.67% to do school tasks, and 23.94% to send or receive email.(Badan Pusat Statistik, 2021). The Ministry of Communication and Information Technology of the Republic of Indonesia (Kementerian Komunikasi dan Informatika Republik Indonesia, 2020) noted that from January 23 to August 14, 2020, there were at least 1,037 hoaxes related to the COVID-19 pandemic.Over the period of time, six hoaxes were produced and distributed every day by certain parties for various purposes.This disinformation phenomenon has been going on for a long time, even before the COVID-19 pandemic.This post-truth era emerged when objective data began to be neglected in influencing public perception by strengthening public sentiment towards "alternative truths" (Keyes, 2004).Even though it is complicated, this fight against disinformation must be continued, welcoming the industrial revolution 4.0.Therefore, all parties must move quickly to ward off various kinds of disinformation that mislead public perceptions so that public trust is maintained correctly. Theoretical Framework Paul Mihailidis (2019) said that applying core media literacy abilities looks as follows: Access; Media literacy approaches access as a fundamental right.Without access to media, people cannot meaningfully participate in daily life.Access considers both platform-how am I receiving this information-and contentwhat kind of content am I receiving from this platform.Access to media ensures that citizens are able to find enough information, ideally from diverse viewpoints, to help them understand issues and participate from an informed position; Analyze: The function of analysis is core to media literacy.Deconstructing text is perhaps the most basic function of media literacy pedagogy and practice.Analyzing messages often takes the approach of "identifying the author(s), purpose and point of view, and evaluating the quality and credibility of the content." Evaluate: To evaluate in media literacy is to be able to make sense of an analysis, by "considering potential effects or consequences of messages.";Create: Creation denotes the ability to create content in multiple forms and use various production techniques.;Act: Acting, according to media literacy scholar Renee Hobbs, is to " [work] individually and collaboratively to share knowledge and solve problems in the family, workplace and community, and by participating as a member of a community." 
Japelidi obtained by combining various concepts that several experts have offered.However, based on the experience of Japelidi in realizing its programs, it is found that collaboration is one of the essential competencies.The collaboration is in line with the gotong-royong (mutual help) culture of the Indonesian people, which can be used as a competency to overcome the complexities of digital society problems.Besides that, there are also ten indicators of Digital Literacy Competency defined by JapelidI (Kurnia & Astuti, 2017): Access Competence in obtaining information by operating digital media.In this competence, people quickly get information in their lives by using digital media.In comparison, as explained above, the presence of digital media is very influential and plays a role in people's lives nowadays.With the ability of the internet that is incorporated into digital media, it can help people to access all the information or needs they want to get instead of using digital media.In operating digital media, users can also get other benefits, such as gaining insight from users of the digital media itself.;Selection Competence in selecting and sorting various kinds of information obtained from digital media, it can be obtained from several sources that are accessed and considered to be useful for media users.However, to avoid hoax news found by users, they can select information or news on the internet (part of digital media) by paying attention to sources that publish specific information or news. This can prevent digital media users from getting hoax and irresponsible news or information.;Understanding Competence to understand the information that has been selected.In the "understanding" competency section, the information that digital media users have obtained goes directly to the selection competence.After entering the selection process, the users will then try to "understand" the information that has been selected.Understanding information or news obtained is also vital so that there are no misperceptions and misconceptions of news that the users obtain.;Competency Analysis briefly explains that digital media users must analyze information by looking at the pluses and minuses of information or news found by digital media users who have previously understood.News obtained from digital media must be analyzed because of the many types of information or news that the media editor has previously constructed. 
Thus, a piece of news must be analyzed after being understood to get news according to the needs and interests of digital media users in accessing news and information.;Competency Verification from verifying in question is a competence that cross-confirms with similar information.Digital media users in accessing news or distributing information can crossverify so that what they get from various digital media platforms can be trusted and accurate.Necessary verification is also carried out for digital media users who want to distribute information or news to produce news or information to be accurate and reliable.;Evaluation Competence is competence in considering risk mitigation before distributing information by considering the method and platform to access the information.In brief, in evaluating this competence, what is meant is that digital media users in writing or reading news obtained from digital media must be evaluated, where the evaluation aims for digital media users to place words and also news compiled according to the digital media platform.Media users need to evaluate the mitigation risks before distributing any information or news so that the digital media used by users can be used wisely.;Distribution Competence in distributing is meant to share information by considering who will access the information.Competence in distributing digital media users can also learn to consider and analyze the platform to which information or news is distributed.By distributing news or information on various platforms, users must learn to be sensitive by considering the audience from each digital media platform, especially by considering who will access the news or information and who will open their platforms from digital media.;Production Competence in compiling new information is accurate, precise, and ethical.In production competence, it is briefly explained that digital media users who can write news or information and want to be produced on one of the previous digital media platforms must ensure that the information they want to be presented on digital media is accurate in terms of the message conveyed in the information.After all, the nature of the information presented on the internet or digital media platforms can be accessed by the whole community and read by the public.In addition, even today, the use of digital media is regulated by a legally valid law called the ITE Law; therefore, ethics is crucial to be considered by digital media users in producing competence; Participation competence plays an active role in sharing good and ethical information or news through social media and other online communication activities.Basically, in sharing news or information, digital media users must pay attention to the ethical level of the information.It avoids the occurrence of problems between communities or social media users, given the nature of social media, namely "user-generated content," where communicators and communicants in social media can provide news or information and vice versa can provide comments or criticism.Ethical news avoids problems and teaches users of digital media and other online media to be wise in using digital media.;Collaboration competency created by Japelidi is collaborating intending to take the initiative and distribute truthful, accurate, and ethical information or news by distributing information with other stakeholders.Media users distribute information to make it more stable and trusted by other users who access it.Collaborating is also important to support 
the distribution of information carried out in collaboration with stakeholders who have a strong relationship with information or news that wants to be distributed.Collaboration is also crucial following the Indonesian people who work together based on the background of digital media users in terms of age, gender, place of residence, digital media needs, and various other factors. Among those 10 competences, this research try to figure out only access competence and verification competence, as illustrated in the framework below: Based on this, mass communication theory is very visible in characterizing media in the delivery of information.There are different characteristics in each mass media company, this indicates respondents chose media according to their consumption.The Uses and Gratification theory provides a framework for understanding when and how individuals consume media as a product.Respondents also become more or less active and have an impact on increasing or decreasing the individual's involvement (Muhtadi, 2016).This theory focuses on the use of media to get gratification from people needs (Nawiroh, 2016). Thus, the way the audience is consuming the media is influenced by the characteristics of the media and the characteristics of the audience and the opportunity to access the media will affect the pattern of news consumption.Individuals today have managed to escape the shackles of the media.Although some are still shackled, most Indonesians are information literate (Muhtadi, 2016). BPS data showed that the habit of reading newspapers in Aceh in 2019 reached 27.66%, tabloids/magazines 5.03%, storybooks 13.79%, school lessons 29.07%, knowledge books 25.53%, and other readings 15.38%.(Badan Pusat Statistik, 2021).Meanwhile, the habit of listening to the radio in Aceh was 13.07%, and watching television programs was 89.84%.From these data, it can be concluded that people in Aceh preferred watching television to reading and listening to the radio.Meanwhile, the results of the Indonesia National Assessment Program in 2019 conducted by the Center for Educational Assessment (Pusat Penelitian Kebijakan Pendidikan dan Kebudayaan, 2019) at the Ministry of Education and Culture, revealed that the average national distribution of literacy on students' reading abilities in Aceh remained in poor category at 46.7% .Only 10.04% were in suitable category, and 43.26% were in excellent category. Regarding the description of the mainstream media in Indonesia, there were interesting findings based on data from late January/early February 2021 as reported by the Reuters Institute (Reuters Institute, 2021).It stated that the sources of news for Indonesians were online media, social media, television, and print media. Material and Methodology The data used in this study was primary data collected using a questionnaire instrument via Google Form, and secondary data was collected through literature studies and previous research. The Questionnaires were distributed to the people of Aceh within 2 months, between January-February 2022.The data collection technique was purposive random sampling in Nanggroe Aceh Darussalam province with a sample size of 300 respondents.The collected data was then analyzed using frequency distribution, and Cross-Tabulations.Analysis of the data was aimed at describing quantitatively between indicators.The analysis used SPSS (Statistical Program for Social Science) software. 
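For orientation, the frequency-distribution and cross-tabulation steps described above can be reproduced with a few lines of pandas (the study itself used SPSS); the rows below are invented placeholder responses, not the collected survey data.

```python
# Small sketch of the analysis steps described above (frequency distributions and
# cross-tabulations), done with pandas instead of SPSS. Placeholder responses only.
import pandas as pd

responses = pd.DataFrame({
    "gender":        ["F", "M", "F", "F", "M", "M", "F", "M"],
    "generation":    ["Z", "Z", "Millennial", "Z", "X", "Millennial", "Z", "Z"],
    "first_media":   ["Instagram", "Cyber media", "Instagram", "TV",
                      "Daily newspaper", "Cyber media", "Instagram", "TV"],
    "verifies_news": [True, True, True, False, True, True, True, True],
})

# frequency distribution of the first-accessed media (in percent)
print(responses["first_media"].value_counts(normalize=True).mul(100).round(1))

# cross-tabulation: generation vs. whether respondents verify news (row percent)
print(pd.crosstab(responses["generation"], responses["verifies_news"],
                  normalize="index").mul(100).round(1))
```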
Result and Discussion Based on the data from the questionnaire distribution, the vast majority or 53.3% of respondents were women and 46.7% of them were men.This means that women contacted in this study were more eager than men to fill out online questionnaires.However, data/information from the Ministry of Women Empowerment and Child Protection suggests that women's internet access in Indonesia was lower than men's (Purnamasari, 2021).However, the participation level of women in this study was more dominant than men.This means that internet access for women had the opportunity to improve.Women, as one of the target audiences of the media, had the same rights as men in obtaining access to information; therefore, the media also had a role in opening access to information for women. In this survey, 60% or most of the respondents were generation Z aged 13-25 years.The millennial generation aged 25-40 years represented 26.7% and generation X aged 40-56 years made up 13.3%. The daily newspaper is the media that has been chosen by respondents aged above 40 years.This shows that a shift in mass media usage has happened.Conventional media remains alive, but it is not adequate to survive with the old styling in a new environment of media convergence and fast social transformation (Hidayat, Saefuddin & Sumartono, 2016).Source: Questionnaires #1 The results above show that the people of Aceh already had the basic competency of digital literation, which is the internet access competence to gather information with operating digital media.This will surely help them get more information and have an impact on the daily life of the people in this era, especially to add knowledge to the digital media users themselves.Based on gender, both women and men chose to access Instagram, and cyber media as the first media accessed as seen in Figure 3 above. The availability of access is one of the important factors that make literacy activities possible to be done (Miller, John W. 
& McKenna, 2016). The alternative dimension, which captures the use of electronic devices and information technology to access literacy sources, is the third lowest at 13.43% (Pusat Penelitian Kebijakan Pendidikan dan Kebudayaan, 2019). Source: Questionnaires #4. The position of mass media is still strong, especially in the data on the level of trust towards mass media. There are five categories of trust level; in general, the daily newspaper is the most trusted information source, while the weekly newspaper is the mass medium least accessed by the millennial generation (Table 2). In addition, daily newspapers and cyber media are trusted media, even though daily newspapers are nowadays rarely consumed by the respondents. However, television is the most accessible medium among the others, which shows that television/streaming television is likely accessed by respondents to obtain information. Other research also confirms that communication students these days tend to get information through social media rather than mass media. According to Azman (2018), those students more often read information through social media than through mass media, which has already become uncommon for them. But the finding also shows that the students' level of trust towards the mass media is still categorized as high, and they trust the truth of mass media information more than that of social media, even though social media is more dominant in terms of usage among millennials and Gen Z. Although the use of social media is higher than that of mass media, if we look at the use of media for truth-seeking, 23.3% of respondents use online mass media and 66.7% use both social media and mainstream mass media (Table 3). Source: Questionnaires #5. Regarding news and information verification, the results above show that only 2.3% of respondents did not verify news at all; it can be interpreted that almost all respondents carried out the information verification process. It is interesting that, as seen in Figure 3 above, Generation Z believes that daily newspapers are a reliable medium for confirming news, yet they also admit that they have never accessed this medium; they prefer digital newspapers because of easy access. The results above also show that the verification they did was carried out especially through online/cyber newspapers, daily newspapers, and television, which are all mainstream media, and the information received through those platforms has already undergone various processes of verification, so the information can be accounted for. So even if the first information they received was from social media, the results show that they do not fully believe this information. That is why they seek confirmation of the information, and this aligns with the verification competence, the fifth competence of Jaringan Pegiat Literasi Digital Indonesia (Japelidi). The preference for online media as the medium most used by respondents for verifying information shows that media convey information using information technology. The development of various kinds of information technology makes it easier for people to find information immediately (Simarmata, 2016).
If we look at the reasons why respondents chose mainstream mass media, the majority of Generation Z and millennials chose daily newspapers as the most trustworthy medium for providing information. Generation Z chose online newspapers because it is easier to access information there, while millennials chose online newspapers because they can access information quickly. Considering that millennials are already categorized as digital natives, this reason makes sense. Apart from that, respondents who verified information through television or streaming television stated that their reason for choosing that medium was the habit of accessing it. In the new media era, hoaxes develop in all their forms in all areas of life, and we cannot rely entirely on the authorities to eradicate hoaxes on social media (Fardiah et al., 2021). The public is the central controller of information flow, so it is important to educate the public on how to avoid hoaxes by confirming all information on social media (Finneman & Thomas, 2018). Anticipating the widespread circulation of hoaxes needs to be accompanied by increased digital literacy, because the Indonesian public tends to believe news quickly without confirming its truth and to spread it immediately on social media (Juditha, 2019). Motives can direct individual behavior in consuming media and will affect the individual's selective exposure to types of media content. Blumler and Katz (1974) categorize the social and psychological functions of mass media as follows: (1) cognitive needs, namely obtaining information, knowledge, and understanding; (2) affective needs, consisting of emotional, pleasant, and aesthetic experiences; (3) personal integrative needs, namely to strengthen credibility, self-confidence, stability, and status; (4) social integrative needs, namely strengthening relationships with family, friends, and so on; and (5) the need to release tension, escape, and diversion. Technological improvements also require electronic mass media, especially television, to offer streaming television. In Aceh, the local television station Serambi uses YouTube to disseminate information; by using YouTube as a platform, a local medium like Serambi expects to extend its reach so that its viewers become more reachable. Conclusions The people of Aceh have high access to digital media, be it online news media, Instagram, WhatsApp, or YouTube. Of all those media, information is received mostly through online news media and Instagram, ranked from highest to lower. Despite that, they do not fully accept the information they receive, so the process of news verification is still carried out. The medium most used for news verification is the online newspaper, because it is easy to access, fast, and trustworthy. Based on those findings, it can be concluded that the people of Aceh, especially millennials and Generation Z, have sufficient digital literacy skills to access digital media, understand content, and verify news. Figure 2. Crosstab of first media accessed, by age. Source: Result of questionnaire data processing. Figure 3. Crosstab of first media accessed, by gender. Source: Result of questionnaire data processing. Figure 3. Crosstab of respondents' reasons for choosing mainstream media, by age. Source: Result of questionnaire data processing. Table 1. First Media for Getting Information. Table 2. Media Used for Verification Information (based on media grouping). Table 3.
Media Used for Verification Information (based on media types)
5,028.8
2023-12-25T00:00:00.000
[ "Education", "Computer Science" ]
Numerical Simulations of the Lunar Penetrating Radar and Investigations of the Geological Structures of the Lunar Regolith Layer at the Chang ’ E 3 Landing Site In the process of lunar exploration, and specifically when studying lunar surface structure and thickness, the established lunar regolith model is usually a uniform and ideal structural model, which is not well-suited to describe the real structure of the lunar regolith layer. The present study aims to explain the geological structural information contained in the channel 2 LPR (lunar penetrating radar) data. In this paper, the random medium theory and Apollo drilling core data are used to construct a modeling method based on discrete heterogeneous random media, and the simulation data are processed and collected by the electromagnetic numerical method FDTD (finite-difference time domain). When comparing the LPR data with the simulated data, the heterogeneous random medium model is more consistent with the actual distribution of the media in the lunar regolith layer. It is indicated that the interior structure of the lunar regolith layer at the landing site is not a pure lunar regolith medium but rather a regolith-rock mixture, with rocks of different sizes and shapes. Finally, several reasons are given to explain the formation of the geological structures of the lunar regolith layer at the Chang’E 3 landing site, as well as the possible geological stratification structure. Introduction The exploration of the internal structure of moon has been ongoing since the first time a human being landed on the surface of the moon in the 1960s.According to the analysis of seismic data, the internal structure of the moon can be roughly divided into a lunar crust, lunar mantle, and lunar nucleus.The thickness of the outermost lunar shell is approximately 60∼65 km, the top 1∼2 km of which mainly consists of lunar regolith and rock fragments [1].On December 2, 2013, China successfully launched the Chang'E 3 spacecraft to explore the moon.The LPR was one of the important payloads on the Chang'E 3. As a high-resolution lunar surface penetrating radar, LPR consists of two channels. 
The first channel is centered at a 60 MHz frequency and has a meter-level resolution in simulated lunar rock material.It is used to detect the subsurface lunar structures along the path of the Yutu rover.The second channel is centered at a 500 MHz frequency, with a resolution of less than 30 cm in the simulated lunar regolith, and it is used to detect the internal structure of the lunar regolith and its thickness.ALSE (Apollo Lunar Sounder Experiment), LRS (Lunar Radar Sounder), and LPR are all surface penetration radars, which are usually used to detect subsurface lunar structures.However, the LPR resolution is significantly higher than that of either ALSE or LRS.Especially for the detection of lunar regolith, the resolution of ALSE and LRS does not have sufficient detection accuracy, and the deepest drilling depth of an experiment with a sample return is 294.5 cm from Apollo 17 [2].However, it is obvious that the above drilling depths cannot reach the bottom of the lunar regolith layer in the lunar maria, where the average layer depth is approximately 5 m [3].Therefore, the LPR data are valuable for the study of the internal structure of the lunar regolith.Furthermore, the core diameter from the Apollo borehole is no more than 4 cm, which means that no data, other than that of lunar penetrating radar, can directly verify that there exist rock fragments with diameters larger than 4 cm in the lunar regolith. The study of the internal structure or thickness of the lunar regolith requires theoretical modeling, regardless of whether active radar detection or passive microwave radiometer detection is used.For instance, Shkuratov and Bondarenko [3] established a simplified ideal uniform lunar regolith structure model to obtain the first map of the distribution of lunar regolith thickness on the front side of the moon using the data of the Arecibo Astronomical Observatory's 70 cm-wavelength ground-based radar in combination with the iron and titanium abundances of the front side of the lunar surface.Lan and Zhang [4] assumed that the lunar regolith layer is a uniform medium in their study of the thickness of the lunar regolith using microwaves.Fa and Jin [5] assumed that the lunar regolith layer has a uniform distribution of dense particle media to simulate the bright temperature of the multichannel lunar surface radiation.Meng et al. [6] assumed that the permittivity of the lunar regolith follows changes in depth, established a nonisotropic lunar model, and analyzed lunar thickness, frequency, and other effects on the bright temperature.Using the same assumption as Meng et al., Chen et al. [7] established twolayer and three-layer models.In Chen et al. 's studies, the lunar surface structure was simulated using GprMax, and its waveform characteristics were analyzed. However, the LPR data show that the internal structure of the lunar regolith is very complex and the above modeling method is too ideal, so it is difficult to describe the real structure of the lunar regolith.Therefore, here, we employ the random medium model theory, Apollo drilling sample data, and geomorphologic images to establish a heterogeneous random medium model of the lunar regolith layer.Then, the FDTD numerical method is used to simulate the propagation of the electromagnetic wave in the model.The result of the echoes is obtained and compared with the LPR data. 
Lunar Penetrating Radar (LPR) LPR is a surface penetrating radar transmitting carrier-frequency pulses of nanosecond duration, whose working principle [8] is as follows: the transmitter antenna emits an electromagnetic wave into the lunar subsurface; when the propagating electromagnetic wave meets a heterogeneous medium, layered interface, or other buried object, phenomena such as reflection, diffraction, and scattering occur; the receiving antenna receives the echo signals, such as reflections and scatterings; and by analyzing and processing the received echo signals, we obtain information about the geological structure of the lunar regolith along the path of the Yutu rover. The basic parameters of the LPR are shown in Table 1. In the Yutu rover detection process, the echo signals are mainly affected by the electromagnetic wave propagation velocity V, seen in (1), and the attenuation α, seen in (2), where ω is the angular frequency, c is the speed of light in free space, μ is the permeability, ε is the permittivity, and σ is the electrical conductivity. The loss tangent, tan δ = σ/(ωε), reflects the loss of energy of the wave propagating through the lunar regolith. The Heterogeneous Random Medium Model of the Lunar Regolith Layer As shown in Figure 1, lunar rocks of different sizes are randomly distributed on the lunar surface. Therefore, it is inferred that a large number of small-scale and irregularly distributed media, such as basalt grains and breccias, exist under the lunar surface. The radar echo signal is affected by these small-scale media during the detection process, which is probably the reason for the confusing radargram. These small-scale media distributions can be treated as random processes in order to study the characteristics of their permittivity. The technique of random medium modeling has previously been applied to seismic numerical simulation [9][10][11]. On the assumption of stationary random processes, the permittivity of the random medium model is expressed by (3), where ε0 is the mean permittivity of the background, σ(x, z) is the standard deviation, and f(x, z) is a small-scale random perturbation whose spatial distribution is subject to the autocorrelation function in (4), where φ(x, z) is the autocorrelation function; parameters a and b are the horizontal and vertical autocorrelation lengths, respectively; θ is the autocorrelation angle; and r is the roughness factor at the microscale. When r = 0, (4) is the Gaussian autocorrelation function. When r = 1, (4) is the exponential autocorrelation function. When 0 < r < 1, (4) is a hybrid autocorrelation function [12]. The algorithm of the established discrete random medium model is as follows. Step 1. The power spectral density function, Φ(kx, kz), of the spatial random perturbation function, f(x, z), was calculated from (4) and is defined in (5). Step 2. We used a random phase function ψ(kx, kz), which is an independent and evenly distributed two-dimensional random sequence on the interval [0, 2π), to calculate the random power spectrum function S(kx, kz), which is defined in (6). Step 3. We obtained the spatial perturbation function of the random medium, f(x, z), by using the inverse Fourier transform of the random power spectrum function. Step 4. The spatial perturbation function was normalized as in (7). Moreover, we substitute (7) into (3) to achieve a discrete heterogeneous random medium model.
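The four-step spectral construction can be sketched numerically as follows. This is an illustration under stated assumptions, not the paper's code: the function and parameter names, the additive way the normalized perturbation is combined with the background permittivity, and the zero autocorrelation angle are assumptions, and the hybrid autocorrelation function follows the Gaussian/exponential family described above.

```python
import numpy as np

def random_medium(nx, nz, dx, a, b, r, eps_bg, sigma, seed=0):
    """Sketch of the spectral random-medium construction (Steps 1-4).

    nx, nz  : grid size; dx : grid spacing [m]
    a, b    : horizontal / vertical autocorrelation lengths [m]
    r       : roughness factor (0 = Gaussian, 1 = exponential ACF)
    eps_bg  : mean background relative permittivity
    sigma   : standard deviation of the perturbation
    """
    rng = np.random.default_rng(seed)
    # Spatial lags in FFT ordering, so the ACF is periodic and centred at lag 0
    x = np.fft.fftfreq(nx, d=1.0 / (nx * dx))
    z = np.fft.fftfreq(nz, d=1.0 / (nz * dx))
    X, Z = np.meshgrid(x, z, indexing="ij")

    # Step 1: hybrid Gaussian/exponential autocorrelation function and its power spectrum
    acf = np.exp(-((X / a) ** 2 + (Z / b) ** 2) ** (1.0 / (1.0 + r)))
    psd = np.abs(np.fft.fft2(acf))

    # Step 2: attach an independent uniform random phase on [0, 2*pi)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(nx, nz))
    spectrum = np.sqrt(psd) * np.exp(1j * phase)

    # Step 3: inverse FFT gives the spatial perturbation f(x, z)
    f = np.real(np.fft.ifft2(spectrum))

    # Step 4: normalise to zero mean and unit standard deviation, then scale
    f = (f - f.mean()) / f.std()
    # Assumed additive combination; with eps_bg = 4.8 and sigma = 0.85 this spans
    # roughly the 2.3-8.6 permittivity range quoted in the modelling section.
    return eps_bg + sigma * f
```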
The lunar regolith layer medium at the Chang'E 3 landing site area should be composed of a mixture of lunar regolith and rocks, as shown in Figure 2. This type of lunar geological structure can be called a regolith-rock mixture. Hence, the random medium modeling method is used to describe the geological structure of a regolith-rock mixture. According to previous knowledge from the Apollo samples, the lunar regolith is mainly composed of mineral and rock fragments, breccia debris, all kinds of glass material, meteorite fragments, and so forth, with a permittivity in the range of 2.3 to 3.5 and a loss tangent in the range of 0.005∼0.009. The mare region is mainly composed of basalt, with a permittivity in the range of 6.6 to 8.6 and a loss tangent in the range of 0.009∼0.016 [13]. Therefore, we assume that the permittivity of the random medium model of the lunar regolith ranges from 2.3 to 8.6 and the loss tangent ranges from 0.005 to 0.016. A set of autocorrelation lengths, a and b, equal to 0.05 m, 0.1 m, 0.2 m, and 0.3 m, is chosen to establish four discrete heterogeneous random medium models with different characteristics, which are shown in Figure 3. The model size is 5 m × 5 m and the length of the discrete step is 0.01 m. The channel 2 antenna, which is mounted on the bottom of the Yutu rover, is 0.3 m away from the lunar surface; hence, we set the antenna height as 0.3 m above the lunar surface in the model. The first layer is the vacuum layer, with a permittivity of 1 and a depth of 0.3 m. The second layer is the heterogeneous random medium of the lunar regolith layer, with a background permittivity of 4.8, an autocorrelation angle of 0, a roughness factor of 0, and a model standard deviation of 0.85. The third layer serves as a reference layer with a permittivity of 8 and a depth of 0.1 m. The autocorrelation length describes the scale of the random medium in the horizontal and vertical directions, which is manifested in the size of the rock fragments distributed randomly in the model, as shown in Figure 3. With an increase of the autocorrelation length, the size of the rocks increases and the number of rocks decreases. That is, the distribution of the real medium in the lunar regolith layer can be described effectively by selecting appropriate parameters. The Numerical Simulation of LPR Channel 2 4.1. The Selection of the Radiation Pulse Source. The radar radiation pulse source waveform is taken from [8] and can be calculated as a UWB Ricker wavelet, as defined in (8) [14]. We substituted the parameters of the LPR into (8), with the two frequency parameters set to 750 MHz and 250 MHz and the time delay t0 set to 4 ns. Then we obtain the radiation pulse source waveform shown in Figure 3. The Numerical Method of the 2D-FDTD. The differential Maxwell equations in the time domain are given by (9a) and (9b), where H is the magnetic field strength and E is the electric field strength; ε, μ, and σ are the dielectric permittivity, permeability, and conductivity, respectively. Considering only the two-dimensional TM mode [15], the electric field is transverse for the model coordinates, and (9a) and (9b) contain only the Ez, Hx, and Hy components. In the Cartesian coordinate system, the 2D-FDTD iterative difference equations in the TM mode are defined in (10). Moreover, in order to avoid the dispersion of the electromagnetic wave caused by the numerical calculation, it is necessary to satisfy the stability condition in (11) when choosing the discrete interval parameters Δx, Δz, and Δt.
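To make the TM-mode update scheme and the stability constraint concrete, the sketch below gives a minimal 2D FDTD loop in Python. It is an illustration only: it omits the PML absorbing boundary used in the paper, injects a simple soft point source, and all function and variable names are assumptions rather than the authors' implementation.

```python
import numpy as np

c0, eps0, mu0 = 299792458.0, 8.854e-12, 4e-7 * np.pi

def fdtd_tm_2d(eps_r, sigma, dx, n_steps, src_pos, src_wave):
    """Minimal 2D FDTD (TM mode: Ez, Hx, Hy) on a square grid.

    eps_r, sigma : relative permittivity and conductivity arrays of shape (nx, nz)
    dx           : grid spacing [m]
    src_wave     : source time series injected at grid index src_pos (soft source)
    """
    nx, nz = eps_r.shape
    dt = 0.99 * dx / (c0 * np.sqrt(2.0))        # slightly below the CFL stability limit
    Ez = np.zeros((nx, nz))
    Hx = np.zeros((nx, nz - 1))
    Hy = np.zeros((nx - 1, nz))

    eps = eps0 * eps_r
    ca = (1 - sigma * dt / (2 * eps)) / (1 + sigma * dt / (2 * eps))
    cb = (dt / (eps * dx)) / (1 + sigma * dt / (2 * eps))

    trace = []
    for n in range(n_steps):
        # Update magnetic fields from the curl of Ez
        Hx -= dt / (mu0 * dx) * (Ez[:, 1:] - Ez[:, :-1])
        Hy += dt / (mu0 * dx) * (Ez[1:, :] - Ez[:-1, :])
        # Update Ez (interior nodes) from the curl of H, including ohmic loss
        Ez[1:-1, 1:-1] = (ca[1:-1, 1:-1] * Ez[1:-1, 1:-1]
                          + cb[1:-1, 1:-1] * ((Hy[1:, 1:-1] - Hy[:-1, 1:-1])
                                              - (Hx[1:-1, 1:] - Hx[1:-1, :-1])))
        Ez[src_pos] += src_wave[n]              # soft source injection
        trace.append(Ez[src_pos])               # record the co-located "A-scan"
    return np.array(trace)
```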
4.3.Numerical Simulation of the Models.This section will make use of the 2D-FDTD numerical method to simulate the heterogeneous random medium models (a), (b), (c), and (d), which are established in Figure 2. As a comparison, the second layer in Figure 2(a) is replaced by homogeneous media with a permittivity of 3 and loss tangent of 0.005.Before the simulation, we need to set the parameters of the simulation model.The transmitter and receiver are set in the same position as the point source.The simulation time window is set to 80 ns.The discrete grid spacing is set to 0.01 m.A data trace is detected with every movement of 0.043 m in the horizontal direction.The separation distance is in accordance with the real situation when the LPR probed the lunar surface. The simulated horizontal direction of the model is 5 m, such that it receives 116 data traces.The discrete time step is set to 0.02 ns, according to (11).Due to the limited memory of the computer, a PML (Perfectly Matched Layer) is used as the electromagnetic absorption boundary condition [16], which simulates the propagation of electromagnetic waves in free infinite space.The simulation result of the model (a) was calculated by 2D-FDTD, shown as both the A-Scan and B-Scan in Figure 4.In the figure, the direct and coupling waves of the radargram are clearly shown, but the reflected echoes of the buried objects are not clearly displayed.The A-Scan waveforms were plotted from a trace of the B-Scan data, and it was found that the amplitudes of the direct and coupling wave reflected signals were much larger than those of the reflected signals from the lunar buried objects. The main objective of this data processing is to analyze the reflection signal of the objects in the lunar regolith layer.For this reason, the amplitudes of the reflected signals of the direct and coupling waves can be suppressed by the threshold in (12) to relatively increase the amplitude of the reflection signal of the lunar buried objects.The amplitude threshold is set to 0.003 for each radar gram which are the simulation results of the models (a), (b), (c), and (d) and the homogeneous media.We plotted both the A-Scan and B-Scan simulation results, shown in Figure 5. The time delay of the lunar surface is 7.91 ns, as calculated in Figure 5.The reference layer echoes cannot be found from simulation results of models (a) and (b) because the radargrams are so cluttered, but they can be gradually found when they become more distinct, in models (c) and (d), from the homogeneous media, in which the time delays are 75.58ns, 72.39 ns, and 61.29 ns, respectively.On the one hand, as the autocorrelation length decreases, the complexity of the model increases.The position of the echo signals of the reference layer is delayed because it is obscured by other signals and cannot be visually distinguished.On the other hand, when the autocorrelation length increases and the multiplied reflected echoes among the rock fragments are diminished, the amplitude of the reflected echoes of the buried targets in the lunar regolith layer model is gradually weakened, and the reference layer gradually becomes distinct.Moreover, this simulated experiment can also explain why the LPR did not find clear layers below the lunar surface at the Chang'E 3 landing site.The details of the investigation of the geological information of the lunar regolith layer will be elaborated in the following chapters. 
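The threshold suppression described above can be illustrated with a one-line operation. Eq. (12) itself is not reproduced in this text, so the hard clipping below is only an assumed reading of that step, using the 0.003 amplitude threshold quoted above.

```python
import numpy as np

def suppress_strong_arrivals(bscan, threshold=0.003):
    """Clip amplitudes to +/- threshold so that weak subsurface echoes become
    visible relative to the direct and coupling waves (assumed form of the
    thresholding step; the paper's Eq. (12) is not reproduced here)."""
    return np.clip(bscan, -threshold, threshold)
```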
Comparison between the LPR Data and Simulated Data The LPR began work at 10:50:32 (UTC) on December 15, 2013, and ran until 14:16:56 (UTC) on January 15, 2014, when it stopped working due to a mechanical problem after a total of 277 minutes of work on the lunar surface [17].Channel 2 received 2351 valid data traces during that working period.The probe distance is approximately 114 m along the path of the Yutu rover on the moon.Part of the LPR data at the landing site in the Mare Imbrium is shown in Figure 6.Meanwhile, we compared the LPR data with the simulated data calculated from the lunar regolith models by FDTD.By comparing the LPR data and the simulation data, it is found that the echo characteristics of the two radargrams are similar.To quantify the degree of similarity of the two radargrams, the Bhattacharyya distance is used to analyze the two datasets in this paper.The method used to calculate the Bhattacharyya distance is defined as follows: where meas and simul are the LPR data and simulation data, respectively.The results of the Bhattacharyya distances of models (a), (b), (c), and (d) and the homogeneous model are 0.5829, 0.7798, 0.1165, 0.1216, and 0.1045, respectively.It is clear that model (b) is the most similar to the LPR data among those results.This means that the heterogeneous random medium model (b) effectively corresponds to the interior structural characteristics of the lunar regolith.Hence, we can infer that the lunar regolith layer is not a purely regolith medium but rather has a distribution of a large number of rock fragments of uneven sizes and different shapes, and the diameters of the rock fragments are approximately 20 cm.In addition, there are continuously reflected echo signals at 24 ns, which may be a stratified structure in the lunar regolith layer or an echo signal of a continuous block of rocks. The Interior Structure of the Lunar Regolith at the Chang'E 3 Landing Site The Chang'E 3 landed near the young crater C1, which has a diameter of approximately 450 m [18,19].The landing position was approximately 50 m from the edge of the impact crater.It can be seen in Figures 1 and 7 that the lunar surface is scattered with a large number of rock fragments of different sizes.At the impact edge, the distribution of rock fragments is even denser.These rocks originate from the lunar crater formation process.When the meteorite crashed on the lunar surface, the bedrock was contacted, squeezed, and crushed.This process formed an ejecta blanket and dug out material from deeper sections of the lunar surface.Some rocks were gasified or melted by the high temperatures, forming new material. The broken rock fragments were also formed because the meteorite impact process involves a massive transfer of mechanical energy to heat energy.Therefore, the large-scale materials sputtering over the original lunar regolith layer formed an ejecta layer, which could also be called a new lunar regolith layer, including a large number of new materials formed by the high temperature, such as impact breccias, glass, and metal, and broken rock blocks.Xiao et al. [18] used a diameter-frequency method to estimate the geological age of the landing site (impact crater C1) at a minimum model age of 27 million years (My) and a maximum model age of 80 million years (My).This indicates that the geological structure is very young, the lunar regolith is immature, and the internal structure is rock-like.Basilevsky et al. 
[20] compared the photographs taken by the Lunokhod and Yutu rovers, which indicated that there exist rock blocks with diameters of dozens of centimeters across the lunar surface.In other words, it can be inferred that the interior of the lunar regolith also contains similar scales of rocks.This is consistent with one of the conclusions of the current study, namely, that the interior of the lunar regolith is a regolithrock mixture. Based on the above studies, the subsurface geological structures of the Chang'E 3 landing site are probably divided into a lunar regolith layer and a lunar rock layer, as shown in Figure 8.The lunar regolith layer includes the materials of the ejecta blanket and the original lunar regolith layer.We did not include the lunar dust layer because the boundary between the lunar dust layer and lunar regolith layer is difficult to define in this dataset.Because the geological structure of the Chang'E 3 landing site is very young, with immature lunar regolith and more internal stones, it is easy to misidentify the continuous echoes produced by a number of adjacent lunar rocks as a layered structure in the radargram.According to the statistics of the Apollo samples, as the sampling depth increases, the average diameter of the rocks in the lunar regolith layer increases slightly.In general, lunar rock size is related to maturity.The higher the maturity level, the smaller the diameter of lunar rocks.The maturity is related to the time of exposure on lunar surface [21].It can be seen that the maturity of the lunar regolith is low at the landing site; the distribution of the interior rocks is irregular, and the geological structure is young at the Chang'E 3 landing site.Therefore, one possible structure is a fractured rock layer that transitions to the bedrock layer below the lunar regolith layer. The specific depth of the lunar regolith layer is not determined because the simulation results show that, within a certain range, as the number of rocks increases, the electromagnetic wave propagation in that medium will produce multiple reflections and scattering, which makes the radar echoes too complicated.Thus, an accurate depth of the lunar regolith International Journal of Antennas and Propagation is difficult to determine by LPR.In fact, the boundary between the lunar regolith layer and the lunar rock layer is not clear [22].Rather, it is a gradual structure from top to bottom, which makes it difficult for radar to distinguish interior layers. Conclusion and Discussion Based on the theory of the random medium model, the Apollo drilling samples data, and the real lunar surface at the Chang'E 3 landing site, the model of a heterogeneous random medium is established.By using the FDTD numerical method, electromagnetic wave propagation is simulated and the resulting radar echoes are obtained.Comparing the LPR data and the simulated data, the following conclusions are obtained: (1) The lunar model of a heterogeneous random medium is more consistent with the real structure of the lunar regolith than other theoretical models of the lunar regolith layer used in the preceding literature. (2) The radar echoes become more complicated as the number of rocks increases, within a certain range.Thus, the accurate depth and interior structure of the lunar regolith are difficult to determine by LPR. 
(3) The interior of the lunar regolith is not a purely uniform medium but has a distribution of regolith-rock mixture media with different rock sizes and shapes. The diameter of the rock fragments is approximately 20 cm in the lunar regolith layer, which is larger than the 4 cm diameter of the samples from the Apollo mission. (4) The site produces clear layered echoes at approximately 24 ns in the LPR data, shown in Figure 6. These layered echoes can be interpreted as stratified structures in the interior of the lunar regolith or as a large number of small-scale lunar rocks that produce overlapping radar echo signals. The future Chang'E 5 program will be equipped with a lunar regolith radar sounder, which will have a higher detection resolution and working frequency than the LPR. Therefore, further studies of the application of the heterogeneous random medium method in lunar regolith modeling will be helpful to interpret the lunar radar data and to better understand the real distribution characteristics and structures of the lunar regolith. In addition, the traditional homogeneous multireflection filtering method [23] has been difficult to use to filter the multiply reflected electromagnetic waves in the radar echoes of the lunar regolith layer. Using statistical methods, it is possible to establish a multireflection filtering method based on the random medium theory, which may be a breakthrough for solving complicated filtering problems. Figure 1: The photos were taken by the panoramic camera on the Yutu rover. Figure 4: The simulation result of the random media model (a). The A-Scan of the simulated data is shown in (a). The B-Scan of the simulated data is shown in (b). Figure 5: The A-Scan and B-Scan simulation results of the models (a), (b), (c), and (d) and the homogeneous media with the amplitude threshold at 0.003. Figure 7: Chang'E 3 landing site and the topography of the impact crater C1. (a) is from Arizona State University, and (b) is from the Science and Application Center for Moon and Deep Space Exploration. Figure 8: The diagram of the interior structure of the lunar regolith layer at the Chang'E 3 landing site. Table 1: Basic parameters of the lunar penetrating radar.
5,630.8
2017-05-23T00:00:00.000
[ "Geology", "Physics" ]
THREE DIMENSIONAL DEFORMATION OF MINING AREA DETECTION BY INSAR AND PROBABILITY INTEGRAL MODEL A new solution algorithm combining D-InSAR and the probability integral method is proposed to generate the three-dimensional deformation in a mining area. The details are as follows: according to the geological and mining data, a control point set is established, which contains correctly phase-unwrapped points at the subsidence basin edge generated by D-InSAR together with several GPS points; the modulus method is then used to calculate the optimum parameters of the probability integral prediction; finally, the three-dimensional deformation of the mining working face is generated from these parameters. Using this method, land subsidence with large deformation gradients in a mining area was correctly generated from example TerraSAR-X images. The results of the example show that this method can generate the correct mining subsidence basin with only a few surface observations, and it is much better than the results of D-InSAR alone. INTRODUCTION China is rich in mineral resources; approximately 4,000,000 hectares of land have been destroyed by mining, and the area is still increasing at an annual rate of 200,000 hectares. Among all mineral resources, coal accounts for about 70% of primary energy consumption in China, and long-term, large-scale mining has left most mining areas facing serious ground subsidence (Fan H D et al, 2015). With the accelerated pace of mine construction, derivative problems such as surface subsidence disaster warning, subsidence compensation and governance, ecological environment restoration, and reuse of subsided wasteland have gradually appeared, and the key to solving these problems is to clarify the law of mining subsidence, which must be based on the inversion and analysis of large amounts of measured surface subsidence data (Fan H D et al, 2011). Unfortunately, the traditional monitoring methods (leveling, GPS, total station measurements) have high precision but suffer from heavy workload, high cost, sparse measuring points, and other shortcomings, and it is difficult to obtain three-dimensional deformation and historical subsidence information with them (Fan H D et al, 2012). In addition, these methods require a large number of ground monitoring points, and it is difficult to ensure that many surface monitoring points can be emplaced and preserved because of problems such as occupation of land, worker-peasant relations, and other issues. D-InSAR (Differential Interferometric Synthetic Aperture Radar) (Gabriel A K et al, 1989) is an advanced earth observation technology with all-day, all-weather, wide-coverage, and high-accuracy advantages, and it has gradually become widely used in seismic deformation extraction (Zhang G H et al, 2011), landslide disaster monitoring (Han Y F et al, 2010), subsidence monitoring caused by groundwater loss (Francesca C et al, 2012), and other fields. However, the technique is easily affected by temporal and spatial decorrelation, and its application conditions are quite harsh. In particular, the majority of mines in China are covered by vegetation or lie in areas of large topographic relief (the Midwest), and the mining subsidence caused by resource exploitation has a very large deformation gradient, which is difficult to resolve correctly using the existing D-InSAR technique; this greatly limits the application of D-InSAR in mining surface monitoring (Ng A H et al, 2011; Ng A H et al, 2012; Fan H D et al, 2012). Because D-InSAR technology has difficulty obtaining the large deformation of surface subsidence, to solve the above
problem, this paper puts forward a solution based on the probability integration method of mining subsidence theory: by fusing D-InSAR monitoring results with a small amount of measured surface subsidence, the surface subsidence basin under actual mining conditions is extracted. SUBSIDENCE MONITORING CAPABILITY OF D-INSAR When D-InSAR technology is used to monitor surface subsidence in a mining area, the SAR image pairs must not only keep a short time interval and a small vertical baseline; the wavelength and the spatial resolution also cannot be ignored. Massonnet and other scholars gave the theoretical formula for the maximum deformation gradient that the D-InSAR technique can detect (Massonnet D et al, 1998), namely one fringe of deformation per pixel, D = λ/(2ρ), where λ is the radar wavelength and ρ is the pixel size. Under multi-look conditions, for the C-band SAR images of the ERS and Envisat satellites, the wavelength is 56 mm, the resolution is 20 m, and D is 0.0014; for the L-band SAR images of the ALOS satellite, the wavelength is 230 mm, the resolution is 10 m, and D is 0.0125; for the X-band SAR images of the TerraSAR-X satellite, the wavelength is 32 mm, the resolution is 1 m, and D is 0.016. Obviously, judging only from the maximum available unwrapping phase gradient, the capability of ERS images to detect large deformation is much lower than that of TerraSAR-X images. However, although the TerraSAR-X revisit cycle is short and its resolution is high, its X-band wavelength is shorter, it is strongly influenced by noise such as vegetation, and its cost is very high; during the test we found that the L-band ALOS satellite is currently the best satellite for monitoring mining subsidence. THE RESEARCH METHODS In China, the probability integration method (He G Q et al, 1991) is the prediction theory used for mining subsidence movement and deformation. As shown in Figure 1 (Fan H D et al, 2014), a ground coordinate system xOy and a coal seam coordinate system are taken, with a mining unit B in the inclined coal seam. According to the principle of the probability integral method, the subsidence of any surface point (x, y) caused by mining can be expressed as in formula (1), where q = the subsidence coefficient; r = the main influence radius, r = H0/tanβ; H0 = the average mining depth; θ = the mining influence angle; tanβ = the tangent of the main influence angle; xi, yi = the planar coordinates of mining unit i; x, y = the coordinates of any surface point; and m = the thickness of the coal seam. The spatial coordinate system is shown in Figure 1 (Figure 1. Spatial coordinate system of mining). The key to predicting surface mining subsidence with the probability integral method is to determine the subsidence coefficient (q), the tangent of the main influence angle (tanβ), the mining influence angle (θ), and the horizontal movement coefficient b (which defaults to the common value of 0.3). These parameters are mainly inverted from a certain amount of surface movement monitoring data. The traditional methods therefore have disadvantages including heavy workload, high cost, and insufficient density of field observation points, and it is necessary to study parameter inversion methods based on new techniques. Therefore, in this paper, the subsidence of basin edge points generated by D-InSAR and a few GPS points (near the maximum subsidence point and the inflection points) are combined to calculate the parameters of the probability integral method through formula (1), and the inverted parameters are then used to estimate the three-dimensional surface subsidence caused by mining.
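For illustration, the short sketch below first re-computes the maximum detectable deformation gradients quoted above from D = λ/(2ρ), and then gives a flat-seam form of the probability-integral prediction together with a plain least-squares fit of (q, tanβ) standing in for the modulus method. The flat-seam expression (without dip-angle or horizontal-movement terms) and all function and variable names are illustrative assumptions, not the paper's exact formula (1).

```python
import numpy as np
from scipy.optimize import least_squares

# Maximum detectable deformation gradient D = wavelength / (2 * pixel size);
# the ALOS value quoted in the text (0.0125) differs slightly from this rule.
for name, (lam, pix) in {"ERS/Envisat": (0.056, 20.0),
                         "ALOS": (0.230, 10.0),
                         "TerraSAR-X": (0.032, 1.0)}.items():
    print(f"{name}: D = {lam / (2 * pix):.4f}")

def subsidence(x, y, units, q, tan_beta, m, H0):
    """Superposed subsidence at surface points (x, y) from extracted mining
    units, in an assumed flat-seam probability-integral form (no dip terms)."""
    r = H0 / tan_beta                          # main influence radius
    w = np.zeros_like(x, dtype=float)
    for xi, yi in units:                       # each unit adds a Gaussian-shaped bell
        w += np.exp(-np.pi * ((x - xi) ** 2 + (y - yi) ** 2) / r ** 2) / r ** 2
    return q * m * w

def invert_parameters(obs_xy, obs_w, units, m, H0, p0=(0.7, 2.0)):
    """Fit (q, tan_beta) to observed subsidence (D-InSAR basin-edge points plus
    a few GPS points); least squares used here in place of the modulus method."""
    x, y = obs_xy[:, 0], obs_xy[:, 1]
    fun = lambda p: subsidence(x, y, units, p[0], p[1], m, H0) - obs_w
    return least_squares(fun, x0=p0, bounds=([0.01, 0.5], [1.0, 4.0])).x
```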
TEST ANALYSIS The study area is located in Yulin City, Shaanxi Province, in northwest China, which is one of the largest coal production bases. Thirteen scenes of TerraSAR-X images, provided by the German Aerospace Centre (DLR) and acquired between December 13, 2012 and April 2, 2013, were selected to carry out the two-pass approach. The time-series D-InSAR results were then accumulated, as shown in Figure 2. The land surface was in the stage of rapid movement, and the mining direction of this working face was from the southeast to the northwest. The mining velocity of this working face was about 4.5 meters per day, and the mining activity was finished on 25 March 2013. In particular, mining areas in our country use the longwall caving mining method, so surface subsidence is large, and there is a certain degree of difficulty in obtaining surface settlement using this technology alone. (2) Fusing D-InSAR techniques and the probability integration method extracted surface subsidence under mining conditions with large deformation gradients. It is feasible to invert the mining subsidence basin by using the parameters of the probability integral method, which can be obtained from the combination of subsidence basin edge points generated by D-InSAR with a few observations. (3) The number of surface observations can be decreased when the D-InSAR technique is used, while the number of edge points involved in the prediction of the subsidence basin is increased. On the one hand, this reduces the workload and cost of traditional monitoring methods; on the other hand, with the increase in the number of control points at the basin edge, the original mining subsidence parameter inversion still has problems such as convergence speed, and through in-depth study this issue is expected to be corrected in the future. Figure 2. Subsidence generated by D-InSAR. The maximum D-InSAR monitoring of mining subsidence was about 0.202 m, and during this period the maximum GPS monitoring of mining subsidence was 4.365 m, which is greater than the maximum of the D-InSAR results. In the study, to reduce the GPS measuring work, the parameters of the probability integral method were calculated by combining the D-InSAR results and the few GPS points. Firstly, D-
1,990.8
2015-06-26T00:00:00.000
[ "Mathematics" ]
The CO2SINK Boreholes for Geological Storage Testing The major objectives of CO2SINK are the advancement of the science and practical processes for underground storage of carbon dioxide, and the provision of operational field results to aid in the development of standards for CO2 geological storage. Three boreholes (one injection well and two observation wells) have been drilled in 2007, each to a depth of about 800 m. The wells are completed as "smart" wells containing a variety of permanent downhole sensing equipment, which has proven its functionality during its baseline surveys. The injection of CO2 is scheduled for spring 2008 and is intended to last up to two years to allow for monitoring of migration and fate of the injected gas through a combination of downhole monitoring with surface geophysical surveys. This report summarizes well design, drilling, coring, and completion operations. Introduction Europe's first onshore scientific carbon dioxide storage testing project CO2SINK (CO2 Storage by Injection into a Natural saline aquifer at Ketzin) is performed in a saline aquifer in NE Germany. The major objectives of CO2SINK are the advancement of the science and practical processes for underground storage of carbon dioxide, and the provision of operational field results to aid in the development of standards for CO2 geological storage. Three boreholes (one injection well and two observation wells) have been drilled in 2007, each to a depth of about 800 m. The wells are completed as "smart" wells containing a variety of permanent downhole sensing equipment, which has proven its functionality during its baseline surveys. The injection of CO2 is scheduled for spring 2008 and is intended to last up to two years to allow for monitoring of migration and fate of the injected gas through a combination of downhole monitoring with surface geophysical surveys. This report summarizes well design, drilling, coring, and completion operations. Since the publication of the Intergovernmental Panel on Climate Change Report (IPCC, 2005), carbon dioxide capture and storage, including the underground injection of CO2 through boreholes, became a viable option to mitigate atmospheric CO2 release. One of the major goals for the immediate future is to investigate the operational aspects of CO2 storage and whether the risks of storage can be successfully managed. CO2SINK is the first European research and development project on in situ testing of geological storage of CO2 in an onshore saline aquifer (Förster et al., 2006). Key objectives of the project are to advance understanding of and develop practical processes for underground storage of CO2, gain operational field experience to aid in developing a harmonized regulatory framework and standards for CO2 geological storage, and build confidence towards future "projects of that kind". The CO2SINK site is located near the town Ketzin to the west of Berlin, Germany (Fig.
1). The plan is to inject into a saline aquifer over a period of two years a volume of approximately 60,000 t of CO2. For this purpose, one vertical injection well (Ktzi-201) and two vertical observation wells were drilled at a distance of 50 m to 100 m from each other (Fig. 1). All three wells are equipped with downhole instrumentation to monitor the migration of the injected CO2 and to complement the planned surface geophysical surveys. The injection of CO2 will be interrupted at times for repeated downhole seismics (VSP, MSP), cross-hole seismic experiments, and downhole geoelectrics. The preparatory phase for CO2 injection started in April 2004 with a comprehensive geological site characterization and a baseline fluid monitoring (Förster et al., 2006). This was followed by a baseline 3-D seismic survey (Juhlin et al., 2007) and the development of a drilling and completion concept (Fig. 2) allowing for monitoring during CO2 injection and storage observation. Geological Background The CO2SINK site is located in the Northeast German Basin (NEGB), a subbasin of the Central European Basin System. The sedimentary succession in the NEGB is several kilometers thick, containing geological formations of Permian to Quaternary age and comprising abundant deep saline aquifers. The CO2 will be injected into the Stuttgart Formation (lower portion, Fig. 3) of Triassic (Middle Keuper) age, into the southern flank of a gently dipping double anticline. The 80-m-thick target formation rests at about 630-710 m depth at a temperature of about 38°C. The formation is made up of siltstones and sandstones interbedded by mudstones deposited in a fluvial environment. The reservoir is in sandstone channels as well as levee and crevasse splay deposits. These channel-(string)-facies rocks alternate with muddy flood-plain-facies rocks of poor reservoir quality. A geostatistical approach applied to the reservoir architecture (Frykman et al., 2006) pointed towards variable dimensions of the sandstone bodies and was supported by continuous wavelet transforms on 3-D seismic data (Kazemeini et al., 2008). The Stuttgart Formation is underlain by the Grabfeld Formation (Middle Keuper), which is a thin-bedded mudstone succession with interbedded marlstone, marly dolomite, and thin anhydrite or gypsum beds deposited in a clay/mud-sulfate playa depositional environment (Fig. 3; Beutler and Nitsch, 2005). The immediate caprock of the Stuttgart Formation, the Weser Formation (Middle Keuper), also is of continental playa type, consisting mainly of fine-grained clastics such as clayey and sandy siltstone that alternate with thin-bedded lacustrine sediments, like carbonates, and evaporites (Beutler and Nitsch, 2005). The high clay-mineral content and the observed pore-space geometry of these rocks attest to sealing properties appropriate for CO2 capture (Förster et al., 2007). The Weser Formation is overlain by the Arnstadt Formation (Middle Keuper), again of lacustrine character (mud/clay-carbonate playa; Beutler and Nitsch, 2005) with similar sealing properties. The two caprock formations immediately overlying the Stuttgart Formation are about 210 m thick (Fig. 3). Borehole Design All three wells were designed with the same casing layout, including stainless production casings equipped with preperforated sand filters in the reservoir section and wired on the outside with a fiber-optical cable, a multi-conductor copper cable, and a PU-heating cable to surface (Table 1). The reservoir casing section is externally coated with a fiberglass resin wrap for electrical insulation.
A staged cementation program was planned around the application of newly developed swellable elastomer packers and stage cementation downhole tools. This technology was preferred over perforation work, which would have caused unmanageable risks of potential damage to the outside casing cables. The 200-m core sections for detailed reservoir and sealing property investigations were recovered with a 6" x 4" wire-line coring system using polycrystalline diamond compact (PDC) core bits. The 6 1/4" core hole sections were enlarged to 8 1/2", and the wells were finally deepened below the reservoir zone to accommodate sufficient sensor spacing for the installation of behind-casing sensor arrays. Drilling and Completion Operations Constructing three wells close to each other and with such a dense sensor and cable population requires detailed planning. For this purpose, high-end oilfield QHSE (Quality, Health, Safety, Environment) management tools were applied, such as "drill well on paper" (DWOP), hazardous operation identification, repeated incident reporting, post-job analysis, and risk management.
Drill site construction started in December 2006, and the drilling operation commenced on 13 March 2007 with the mobilization of a truck-mounted and top-drive equipped rotary drill rig. All the Ketzin wells were drilled with a shale-inhibited KCl-water-based mud system, with the exception of the top-hole section in the fresh-water aquifers, where a K2CO3-water-based system was required by the authorities. Both drill muds were conditioned at 1.05-1.16 g cm-3 density. In order to avoid potential risks from environmental hazards, the project further implemented a "shallow gas" procedure in this well section to avoid spills when the wells would encounter highly pressurized shallow gas from the past gas storage activity. For this purpose, the top-hole section of the first borehole was pre-drilled with a blow-out preventer/diverter/gas-flare installation on the rig to capture and control unexpected and sudden shallow gas influxes. As no stranded shallow gas was encountered during drilling (as also confirmed by reconnaissance wire-line logging and surface seismic processing), this pilot drilling was consequently skipped for the second and third well. Casing (18 5/8") running and cementation with stinger to surface were performed in all three wells without problems. In the following 12 1/4" sections, the wells penetrated the Jurassic aquifer systems, in which under-balanced pressure regimes were expected. All wells encountered a minimum of three loss-circulation zones between 366 m and 591 m, with cumulative mud losses of 550 m3. The addition of medium- to coarse-grained shell grit to the mud cured the loss of circulation and brought the wells safely to the 9 5/8" casing depth between 588 m and 600 m. Using the DTS technology, quasi-continuous temperature profiles can be measured on-line along the entire length of the wells with high temporal and spatial resolution (Förster et al., 1997; Büttner and Huenges, 2003). The permanent installation of DTS sensors behind the casing (Hancock et al., 2005; Henninges et al., 2005) offers the advantage of full access to the well during technical operations, which, for example, allows control of the process of casing cementation (Henninges and Brandt, 2007). The borehole temperature data will primarily serve in the delineation of physical properties and of the state of the injected CO2. To enhance the thermal signal and improve the monitoring of brine and CO2 transport, successive thermal perturbation experiments (Freifeld et al., 2006) will be performed, using the electrical heater cable installed adjacent to the DTS cables. VERA provides data on the CO2 saturation employing the Electrical Resistivity Tomography (ERT) method. Each of the VERA arrays covers an interval of about 140 m centered in the injection horizon and consists of fifteen electrodes spaced at about 10-m intervals. The P/T sensor installed at the bottom of the injection string above the packer system will continuously monitor the downhole pressure and temperature changes during injection. Data will be transferred via an optical fiber attached to the injection string. The inclusion of the permanent downhole sensors into the well completion required a selection of suitable completion components and procedures. Custom-made casing centralizers were used for outside-casing installation of the sensor cables, for centralization of the casing inside the borehole, and for protection of the cables from mechanical damage during installation (Fig. 5). The 8 1/2" borehole diameter in the lower reservoir sections allowed for sufficient clearance within the annular space between casing and borehole wall and thus for a safe installation of the downhole sensors.
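As a small illustration of the VERA geometry described above (fifteen electrodes at roughly 10-m spacing spanning about 140 m), the snippet below computes nominal electrode depths for an array centred on an assumed injection-horizon depth; the centre depth of 670 m is a hypothetical placeholder, not a value taken from the completion reports.

```python
# Nominal VERA electrode depths: 15 electrodes, ~10 m apart, spanning ~140 m,
# centred on an assumed injection-horizon depth (670 m is a placeholder value).
n_electrodes, spacing, centre_depth = 15, 10.0, 670.0
half_span = spacing * (n_electrodes - 1) / 2.0          # 70 m, so the total span is 140 m
depths = [centre_depth - half_span + i * spacing for i in range(n_electrodes)]
print(depths)   # 600.0 ... 740.0 m in 10 m steps
```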
Within the 140-m zone where the VERA electrodes are placed, the steel casing was electrically insulated outside using a fiberglass coating. After an on-site installation test had been conducted, the installation of the DTS and VERA cables (Fig. 5) and electrodes in the Ktzi 200, 201, and 202 wells was carried out. The lower part of the Weser Formation and the entire Stuttgart reservoir section were cored with a specially designed CaCO3-water/polymer drilling mud (1.1 g cm-3). In the first well, a total of 100 m of core was drilled in thirty-nine core runs, and an average recovery of 97% was achieved. In the second well, 80 meters of core was retrieved in thirty-one runs (100% recovery). In the third well, only the top 18 m of the Stuttgart Formation was cored, with the same excellent performance. The 6 1/4" core hole section was then enlarged to 8 1/2", and the wells were finally deepened below the reservoir into the Grabfeld Formation. Stainless steel 5 1/2" production casings (Fig. 4) were installed and cemented in all wells with sensors and cables on the outside. The cables were terminated and fed pressure-tight at the wellhead to the outside through the drilling spool below the casing slips. The cement selected in all casing cementations was standard class-G with fresh water and no additives (SG = 1.98 kg L-1), with the exception of the plug cementation, for which a specially designed CO2-resistant class-G salt cement was selected. The CO2 injection well was completed with a gas-tight and internally coated production tubing, including a permanent production packer above the injection horizon, a fiber-optic pressure and temperature mandrel/gauge arrangement above the packer, and a wire-line-retrievable subsurface safety valve at 50 m depth below the wellhead. The optical cables and hydraulic safety valve actuation lines were clamped to the outside of the production tubing and fed pressure-tight to the outside at the tubing hanger adaptor below the Christmas tree gate valves. Permanent Downhole Sensors for Monitoring of CO2 Geophysical monitoring techniques are applied in CO2SINK to delineate the migration and saturation of injected CO2 (Fig. 2). The injection well and the two observation wells are equipped with state-of-the-art as well as newly developed geophysical sensors. The data from this permanent downhole monitoring will be interpreted in combination with data from periodic seismic monitoring (VSP, MSP, and cross-hole seismics) and periodic fluid sampling and well logging (Reservoir Saturation Tool). The following permanent components were installed in the boreholes for scientific monitoring: guided into the substructure of the drill rig, and the casing was cemented.
The DTS monitoring allowed online monitoring and control of the cementing operations and provided valuable information about the positions of the cemented sections during the setting of the cement. This information was verified by subsequent industry-standard cement-bond logs. The installation of monitoring tools was finished by feeding the cables into the casing spool at the wellhead, which was subsequently pressure-sealed using a stuffing box. Preliminary tests of VERA have shown that all electrodes and cables are fully functional.

Field Laboratory

The CO2SINK field laboratory comprised core-cleaning and core-sealing facilities, a full core imager, and a Geotek gamma-ray density core logger. The field lab was designed to record and describe a high core-run volume within a short handling time, to quickly generate the litholog for the drilled boreholes, and to identify the reservoir section. This procedure was necessary in order to proceed rapidly with decision making on the selection of the borehole intervals completed with filter casings, through which the CO2 would be injected into the formation or monitored.

In preparation for the unconsolidated sandstone in the Stuttgart Formation, coring was performed with PVC liners in 3-m liner intervals. At the drill rig, liners were cut after orientation marking into 1-m sections, and the cut surface was geologically described and sealed before being transferred to the field lab for analyses. Sections containing sandstone were shipped preserved in liners to a commercial laboratory for "hot-shot" poro-perm analysis.

Reservoir sandstone intervals (Fig. 6) with porosities on the order of 20%-25%, together with requirements for the permanent ERT sensor arrangement on the casing, guided the depths at which the wells were completed with filter screens for CO2 injection and monitoring.

The geological description of core started with the sections of well-cemented mudstone, after cleaning with synthetic formation water, reorientation, and unrolled scanning using an optical core scanner. Later, the "hot-shot" reservoir sections were included. From the geological core and cutting descriptions and interpreted petrophysical well logs, stratigraphic-lithologic logs (Fig. 3) were finally generated for all three CO2SINK wells to refine the geological model. For example, the stratigraphic-lithologic logs were used to calibrate the 3-D seismic time sections (Juhlin et al., 2007). Petrographical and mineralogical studies and geochemical analyses from reservoir and caprock were performed to characterize the Ketzin site on the micro-scale as a basis for fluid-rock-alteration modeling.

Outlook

CO2SINK is the first project that extensively uses behind-casing installations for a study of the CO2 injection and storage process in a geological medium. In this regard, CO2SINK differs from other scientific projects of CO2 test storage, such as the Frio experiment in Texas (Hovorka et al., 2006), the Nagaoka experiment in Japan (Kikuta et al., 2004), the field test in the West Pearl Queen Reservoir in New Mexico (Pawar et al., 2006), and the Otway Basin pilot project in Australia (Dodds et al., 2006).
It is envisaged that the extensive set of data generated by cross-correlation of seismic surface monitoring, well logging and monitoring, and simulations will allow for verification of a priori scenarios of storage/migration of fluids. Emphasis, for example, will be given to the observation of non-isothermal effects in the storage formation during injection, as described by Kopp et al. (2006). This type of effect can also occur during leakage from a storage reservoir along a fracture zone, as numerically investigated by Pruess (2005). Thus, the observations in progress will contribute to a sound understanding of the thermodynamic processes of CO2 injection at the well scale, as well as, in the short and longer term, the processes during CO2 storage at larger scale.

Photo Credits
Fig. 1. VNG - Verbundnetz Gas AG, Leipzig, Germany.

(Fig. 3) of Triassic (Middle Keuper) age, into the southern flank of a gently dipping double anticline. The 80-m-thick target formation rests at about 630-710 m depth at a temperature of about 38°C. The formation is made up of siltstones and sandstones interbedded by mudstones deposited in a fluvial environment. The reservoir is in sandstone channels as well as levee and crevasse splay deposits. These channel-(string)-facies rocks alternate with muddy

Figure 4. Drilling design and well completion of the Ktzi 201/2007 borehole. Yellow line indicates DTS and ERT cables with location of ERT electrodes (yellow pluses). Sandstone reservoir intervals are shown in green.
Figure 5. Centralizer attached to casing string with DTS (left) and VERA cables (right).
Figure 6. Core image of reservoir sandstone showing cross-bedding.
4,576.6
2008-07-01T00:00:00.000
[ "Geology" ]
Halogen-doped phosphorescent carbon dots for grayscale patterning Flexible organic materials that exhibit dynamic ultralong room temperature phosphorescence (DURTP) via photoactivation have attracted increasing research interest for their fascinating functions of reversibly writing-reading-erasing graphic information in the form of a long afterglow. However, due to the existence of a nonnegligible activation threshold for the initial exposure dose, the display mode of these materials has thus far been limited to binary patterns. By resorting to halogen element doping of carbon dots (CDs) to enhance intersystem crossing and reduce the activation threshold, we were able to produce, for the first time, a transparent, flexible, and fully programmable DURTP composite film with a reliable grayscale display capacity. Examples of promising applications in UV photography and highly confidential steganography were constructed, partially demonstrating the broad future applications of this material as a programmable platform with a high optical information density. Introduction Light plays an important role in the advanced manufacturing and processing of materials. By introducing photosensitive units and regulating the spatial distribution of the exposure dose, specific patterns and structures have been produced with unprecedented precision and efficiency [1][2][3] . The developing trend in both optics and materials science therefore has introduced requirements for photoresponsive materials with programmable dynamic performance, namely, the ability to record, reproduce, and erase optical information by phototriggering. To meet this need, materials with photoprogrammable absorption 1,4-7 , fluorescence 6-10 , and even phosphorescence [11][12][13][14] have recently emerged, demonstrating great potential in display, imaging, and optical encryption. One of these material systems that has drawn particular interest is the group with photoinduced DURTP [15][16][17][18][19] . Taking advantage of the spin-forbidden T 1 → S 0 transition, DURTP materials can easily achieve a long afterglow emission observable to the naked eye (τ > 50 ms), which not only entirely avoids the excitation background but also provides a large lifetime space for data storage and encryption. Better still, DURTP behaviors have been achieved in organic materials, including polymer composites 18,19 , which are important in terms of having lower toxicity, better machinability, cheaper cost, higher optical transparency, mechanical flexibility, etc. Nevertheless, very few phosphorescent materials with such desirable properties have been reported to date due to difficulties in material design. In previous work, we demonstrated for the first time that carbon dots (CDs), a class of emerging phosphorescent nanomaterials [20][21][22] , could act as reliable DURTP emitters when incorporated with the specific macromolecule material polyvinylpyrrolidone (PVP) 19 . In contrast to the conventional design of static organic phosphorescent materials that emphasized the shielding of environmental oxygen [23][24][25][26] , the DURTP in CDs/PVP composites was facilitated by an oxygen-regulating mechanism. Here, PVP acted both as a solid matrix and an oxygen reservoir, simultaneously providing hydrogen bonding fixation that facilitates the DURTP of CDs and triplet oxygen that suppresses its emission. CDs also played a dual role, both as an oxygen-sensitive phosphor and as an oxygen-consuming photosensitizer. 
An activation process via mask and photolithography could therefore regulate the regional "on/off" switch of DURTP, realizing the manipulation of erasable long afterglow patterns on a transparent flexible film (Fig. 1a). However, although the oxygen-regulated DURTP with CDs/PVP was more than qualified for a binary display (which distinguished only the "on" and "off" states), it became insufficient when considering a grayscale display requirement (which displayed a series of intensities with a gradient). Undoubtedly, this issue has hindered DURTP materials from reaching their full potential in further applications. The crux of the matter lies in the high threshold or "dead time" in DURTP activation. In other words, the response of DURTP intensity to photoactivation occurs with a considerable delay, causing the loss of shadow details in the grayscale pattern. The challenge commonly faced by almost all oxygen-regulated dynamic phosphorescent systems 12,18,19 is fundamentally a limitation imposed by the insufficient intersystem crossing (ISC) of metal-free dynamic phosphors. First, organic phosphors with low ISC rates are less tolerant to triplet oxygen, showing significant quenching even at a low oxygen concentration 27,28 . Second, since the phosphor also acts as an oxygen-consuming photosensitizer in DURTP, an insufficient yield of triplet excitons also limits the oxygen consumption rate for a given photon dose, which further contributes to the issue. To address this, a straightforward solution would be modifying the preexisting material systems by introducing molecular structures that facilitate ISC, such as lone pair electrons. To date, although certain functional groups, such as nitrogen heterocyclic rings and phosphonates with lone pair electrons, have been introduced to DURTP systems to improve their performance 29,30 , none of them have reported grayscale display capacities. In addition to introducing lone pair electrons, another strategy commonly used for improving the ISC efficiency is heavy atom substitution/doping, especially by halogen atoms 31,32 . Typically, doping with halogen atoms induces a series of changes in the phosphor known as the "heavy atom effect", enhancing spin-orbit coupling and consequently facilitating ISC and phosphorescent emission. However, halogen doping might also induce changes in polarity and hydrogen bonds, interfering with phosphor-polymer compatibility. Luckily, the surface of solvothermally synthesized CDs is highly functionalized with hydrophilic groups 33,34 , which is likely to compensate for the change in compatibility induced by halogen doping. Moreover, the possibility of halogen doping in CDs has been previously validated. For instance, Feng's group reported the synthesis of F-doped phosphorescent CDs by introducing hydrogen fluoride/fluorine in the bottom-up synthesis of CDs [35][36][37] . These successful precedents have encouraged us to further consider heavy halogen doping as an applicable strategy for enhancing ISC in DURTP systems. In this work, we propose a series of flexible DURTP polymer composites illuminated by halogen-doped CDs with fully programmable emissions. We show that the (Fig. 1b). 
Taking advantage of the high phosphorescence quantum yield, long afterglow emission, grayscale display capacity, and ultrafast UV response of this composite, the applications of DURTP materials in grayscale-based UV photography and steganography have been explored for the first time, demonstrating the broad possibilities for optical applications of this spectacular group of materials. Results Since the CD material was synthesized by a bottom-up strategy from molecular precursors, the halogen-doping of CDs could be easily achieved by using halogenated precursors in the synthesis (Fig. 2a). Notably, although Br or I atoms may provide a more significant heavy atom effect, their larger atom diameters and lower bonding energies with carbon also cause a higher leaving tendency during the formation of CDs. As a result, low (to zero) contents of these elements were found in the CDs made of iodinated and brominated precursors (Fig. S1). As the second-best option, we synthesized a series of CDs using chlorinated derivatives of p-benzoquinone under solvothermal conditions, which is similar to the previous method we adopted to synthesize halogen-free CDs 19 . The resultant halogen-doped CDs, ClCDs-1~3, gradually increased in chloride contents from 6.0 to 9.3%, depending on the number of substituents in their precursors ( Fig. 2b and Table 1). Compared to their halogen-free cousin CDs-0, the intensified signal at~1100 cm −1 in the FT-IR absorbance of ClCDs-1-3 indicated the presence of C-Cl ( Fig. 2c and Table S1), which was also validated by the increase in the C-Cl signal intensities in the C1s and Cl2p binding energies of the CDs ( Fig. 2d-g). Moreover, the deviation in the Cl2p binding energy suggested that Cl atoms were binding with both sp2-and sp3-hybridized C in the halogen-doped materials (Fig. S2). Despite the differences in the chemical compositions, all four CDs showed very similar morphologies with average diameters ranging from 2.6-2.8 nm (Fig. S3). Stripes with regular intervals of 0.24 nm (corresponding to the [1 0 0] lattice plane of the graphitic structure) were observed in the HRTEM results, suggesting the crystalline nature of these nanoparticles. The crystalline structure in CDs was also revealed by the X-ray diffraction (XRD) spectra of these materials. As shown in Fig. S4, broad peaks at 2θ = 23°and bumps at 2θ = 45°were detected, corresponding to the [0 0 2] and [1 0 0] lattice planes of nanographite 38,39 . In terms of their optical features, CDs-0 and the three halogen-doped CDs showed similar absorption and fluorescence emission curves in ethanol (Fig. S5), suggesting that the Cl doping did not cause severe changes in the excited state energy levels of the CDs. To further facilitate DURTP emission, the four CDs were embedded in solid PVP matrices through solution blending and film casting. The nonradiative relaxation of the triplet excited states was thus sufficiently suppressed through hydrogen bond fixation between the polymer and the hydrophilic functional groups of CDs. Under a common atmosphere environment, the CDs/PVP composite films showed no significant long afterglow when initially irradiated by a short pulse of UV excitation (400 nm, 10 mW cm −2 , 20 ms). This stage corresponds to the "off" state of the DURTP since the environmental oxygen molecules have permeated the polymer films and caused the quenching of triplet excitons of CDs. 
With a prolonged irradiation time, the orange-colored phosphorescent emission of the CDs/PVP films (denoted as composite 0-3 hereafter) becomes activated, showing seconds of afterglow when the excitation switches off (Fig. 3a). This stage corresponds to the "on" state of the DURTP, where the permeated oxygen has been efficiently removed through photochemical reactions, allowing phosphorescent emission from the triplet excited states of CDs (Fig. 1). From this point, the DURTP emission of the films could be readily evoked by short irradiation pulses before the environmental oxygen again permeated the film through molecular thermal motion. At room temperature (25°C) and under a common atmosphere, the DURTP intensities slowly decreased to nondetectable intensities in~2.5 h (Fig. S6). Naturally, the deactivation of DURTP by oxygen could be accelerated by heating the composite films under a common atmosphere: typically, the fully activated DURTP could be entirely deactivated by baking the samples at 120°C for 10 min. For all four CDs/ PVP composites, such activation and deactivation of DURTP have proven to be highly reversible, as ten cycles of on-off switching did not induce a significant change in their activated DURTP intensities (Fig. S7). The emission spectra and lifetimes of the four composites confirmed that the doping of Cl played an important role in promoting the ISC in excited states: as the doping of Cl in CDs increased, the percentage of the phosphorescent component in the activated steady-state photoluminescence emission increased markedly from 17.1% (composite 0) to 54.9% (composite 3) (Fig. 3b), while the lifetime of the activated DURTP gradually decreased from 472 to 110 ms (Figs. 3c and S8 and Table S2), in accordance with the typical heavy-atom effect. By measuring the fluorescence quantum yields of the deactivated composites and multiplying them with the I Phos. /I Fluo. ratio, we were able to calculate the phosphorescence quantum yields of these materials ( Fig. S9 and Table S3). Notably, the overall phosphorescence quantum yield (λ ex = 400 nm) of composite 3 was 2.93%, which was comparable to some molecular long afterglow phosphors in a similar wavelength range 32,40 . Moreover, the phosphorescence quantum yields of composites 0-2 were 0.94%, 1.61%, and 1.93%, respectively, clearly showing an increasing tendency as the halogen atom content increased. The enhancement of ISC in CDs should change their photoactivation behavior in two ways. First, the enhanced radiative transition of T 1 → S 0 increased the tolerance of phosphorescent emission to the environmental oxygen. In other words, the initial activation of DURTP occurred earlier with a higher oxygen content in the composite film, thus shortening the dead-time threshold. On the other hand, the increased population of triplet excited states led to higher photodynamic conversion capacities, increasing the speed of DURTP activation. Here, the singlet oxygen productivities of all four CDs were measured by electron spin resonance spectroscopy. As shown in Fig. 3d, the halogen-doped CDs, especially ClCDs-3, featured a highly improved photodynamic efficiency (~4.1 times that of CDs-0). As a result, much faster DURTP activation kinetics were observed in the halogen-doped composites (Fig. 3e). 
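As a concrete illustration of how such activation kinetics can be quantified, the short Python sketch below estimates the half-maximal activation time t_1/2 and the dead-time threshold (whose measured values and definitions are discussed in the following paragraph) from a recorded activation curve, and reports their ratio. This is a minimal sketch under stated assumptions: the synthetic curve, the 5-50% fitting window, and the linear extrapolation to the x-intercept are illustrative choices, not the authors' exact procedure.

import numpy as np

def activation_metrics(t, intensity):
    # t, intensity: 1-D numpy arrays describing a DURTP activation curve
    i_norm = (intensity - intensity.min()) / (intensity.max() - intensity.min())
    # t_1/2: first time at which the normalized intensity reaches half its maximum
    t_half = t[np.argmax(i_norm >= 0.5)]
    # Dead-time: x-intercept of a straight line fitted to the initial rising part,
    # analogous to the intercept of the activating curve on the x-axis
    rising = (i_norm > 0.05) & (i_norm < 0.5)
    slope, intercept = np.polyfit(t[rising], i_norm[rising], 1)
    dead_time = max(0.0, -intercept / slope)
    # Relative threshold: ratio of the dead-time to t_1/2
    return t_half, dead_time, dead_time / t_half

# Illustrative synthetic curve: ~40 s dead-time followed by a saturating rise
t = np.linspace(0, 600, 601)
i = np.clip(1.0 - np.exp(-(t - 40.0) / 120.0), 0.0, None)
print(activation_metrics(t, i))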
First, the values of the characteristic parameter t 1/2 (that is, the time required to reach the halfmaximal DURTP intensity under a certain power density) for composites 1-3 at 0.1 mW cm −2 were 175, 110, and 71 s, respectively, which were all significantly shorter than that for composite 0 (325 s). Second, the activation threshold (determined by the intercept of the activating curve on the X-axis, see Fig. S10) clearly decreased with an increasing Cl content in the materials. Here, we defined the relative threshold R th as the ratio of the deadtime to t 1/2 . The dramatic decreases in the deadtime from 123 to 6.7 s and in R th from 0.38 to 0.09 unambiguously confirmed that the halogen-doping strategy improved the performance of the CDs/PVP composites in terms of both the response speed and the threshold. Based on this knowledge, we further tested the applicability of the composite as a grayscale-based display medium. As the most promising candidate with the smallest threshold, composite 3 was studied at different stages of activation. Figure S11 depicts the gradually increasing phosphorescence intensities and lifetimes of this composite during activation. In summary, an 84-fold enhancement in intensity and ten-fold enhancement in lifetime (from 11 to 110 ms) occurred during the activation of DURTP. According to these results, we envisaged that a grayscale-based display of DURTP could be manipulated in the range of 0-20 mJ cm −2 . Here, we applied a transparent mask printed with a tailored ITE grayscale chart (Fig. S12) to regulate the light dose and create a phosphorescent grayscale step-chart on a flexible film of composite 3. The relationship between the normalized intensities/lifetimes and the UV light dose at different grayscale steps is given in Fig. 3f. After exposing the film to 20 mJ cm −2 UV, a series of grids with intensities increasing stepwise were obtained, demonstrating the potential of this material for grayscale display (Fig. 3f, inserted). In contrast, a composite 0 film displayed only part of the grayscale gradient due to its high activating threshold (Fig. S13). Discussion The grayscale display capacity and reversible DURTP functions of halogen-doped CDs composites have enabled a number of unique optical applications. Here, a repeatedly editable DURTP tag with designable print-on-demand grayscale patterns was developed by utilizing the photoactivation and thermal-deactivation behaviors of DURTP. As illustrated in Fig. 4a and Movie S1, the designed grayscale DURTP patterns could be created through a photolithography process and erased through heating for multiple cycles. With a proper mask, the resolution of such DURTP patterns could reach~35 μm, corresponding to over 724 dpi in the display (Fig. S14). After photowriting and removing the mask, the grayscale pattern could be readily reproduced by exposing the film to a short UV pulse. Judging from the excitation dynamics and activation threshold of composite 3 (Fig. S15), a total UV dose in the range of 0.02-0.65 mJ cm −2 (2-65 ms at 10 mW cm −2 ) is considered to be suitable for reproducing the patterns. As shown in Fig. 4b-e, the film of composite 3 provided an excellent grayscale distribution that enabled the display of an intensity gradient in both icons and portraits. Notably, the composite film itself was highly flexible and could be attached to a curved surface (Fig. 
S16), further broadening its potential application in print-on-demand temporary tags integrating optical anti-counterfeiting functions 18 . UV photography has long been recognized as a powerful tool for optical diagnosis and biological studies by providing imaging results beyond the visible range 41,42 . Considering the low activation dose and grayscale display capacity of composite 3, we envisaged that this material could also function as a UV photographic film to directly capture UV images from object reflections. To verify this idea, a simplified imaging system mimicking the structure of a film camera was built, as illustrated in Fig. 4f. The UV-reflecting object (a white porcelain mug) partially coated with a UV-absorbing sunscreen was placed in front of a convex lens against the UV-sensitive film (composite 3). With a large object distance (≫2 focal lengths) and a small image distance (≪2 focal lengths), a miniature and inverted image of the object formed on the film under UV irradiation. With sufficient exposure, an image of the object could be captured by the UV-sensitive film and then reproduced with pulsed excitation. In contrast to what was shown in the visible-light photo (Fig. 4f, inserted), the sunscreen appeared as dark shadows in the UV photography image (Fig. 4g), showing strong UV absorption in these areas. Many plants have developed UV-absorbing patterns on their flowers to attract pollinating insects with UV color vision 43 . With the same technique demonstrated above, we captured UV photographs of a daisy flower at 365 and 400 nm. Here, two grayscale images were reproduced from the photoinduced long afterglow pattern in the composite film after UV exposure (Fig. 4h, i). In contrast to the grayscale visible image, both UV photographs showed a region of high UV absorbance in the center of the flower, where the stamens were mostly located (Fig. 4i). Moreover, in the 365 nm photograph, a dark halo was observed surrounding the center, suggesting the existence of a concentric-circular structure with a UV color gradient. As discussed above, the DURTP of composite 3 gradually increased both in terms of its intensities and lifetimes upon activation. While the precise reading of emission intensity might be influenced by environmental noise and other issues, luminescence lifetimes are considered highly characteristic and conservative, providing reliable performance in optical encryption scenarios. Thus, lifetime-based steganography has long been an intriguing topic among the various applications of persistently luminescent materials. However, most previous works have focused on time-gating-based methods [20][21][22]40,44 , which usually require a large lifetime contrast between the cover message and the real message (for example, ~ns fluorescence versus ~ms phosphorescence) and hence have shown limited encryption capacity. In this work, taking advantage of the multiple merits of composite 3, including fast activation dynamics, a visible long afterglow, and, very importantly, a highly manipulatable emission lifetime, we have proposed a new strategy for highly confidential steganography by grayscale DURTP. Figure 5a schematically shows our design of a reusable dynamic steganographic device in practice: the film device physically consists of two layers, a static layer with apparent text and a dynamic layer with a latent DURTP pattern printed on demand. 
Due to the high transparency of the PVP polymer film, the text or patterns on the static layer could be clearly seen under ambient light (Fig. 5b), acting as the cover message in this sense. When performing optical encryption, a piece of the grayscale mask was first applied to write the lifetime-coded secret message on the dynamic layer with UV exposure (Fig. 5c). In this demonstration, the mask contained two types of text patterned with different transmittances, corresponding to two informational layers of graphics with different grayscale values and lifetimes. Then, the mask was removed, and a short UV pulse was used to read the hidden message in the form of afterglow (Fig. 5b, c). To further improve the confidentiality of this steganographic device, the sheet was then overexposed via continuous UV irradiation to exert a "burn after reading" function. Finally, simply by heating in a common atmosphere, the device could be reset to the initial state for the next use (Fig. 5c, d). To take full advantage of the lifetime gradient of this device, we introduced lifetime imaging instead of time gating in the "read" procedure to extract the encrypted graphical message. Figure S17 illustrates the fundamentals of the afterglow lifetime imaging setup. In short, a pulsed diode laser device was connected to an ICCD camera with an external trigger to control the image capture at different delayed times after the excitation pulse. As a result, the pixelwise decay profile was obtained and analyzed. After denoising and thresholding, the mapping results clearly showed distinguishable lifetimes (66 ms/77 ms) in the two parts of the DURTP graphics (Fig. 6a, b). Notably, the lifetime difference (11 ms) was only ~10% of the fully activated lifetime value of composite 3, which suggested high potential for increasing the encryption capacity with this lifetime-coded steganography design. Finally, to avoid the complex denoising process and further simplify the decryption process, we also introduced a phasor analysis method to resolve lifetime information in the frequency domain 45 . Here, the phasor plot diagram was drawn with a MATLAB program we previously developed 46 . As depicted in Fig. S18, the phasor plot results were clearly divided into two clusters corresponding to background noise and signal. The signal spots were distributed along the semicircle curve, corresponding to the PL signal emitted from patterned areas featuring quasi-monoexponential decay. Meanwhile, the background noise induced by intramembrane refraction, reflection, and scattering was distributed along a vector pointing from (0, 0) to the signal cluster, indicating the complexity of its origins. The signal cluster showed that the two layers of the encrypted graphic message could be well separated by selecting different regions along the curve. ROIs 1 and 2 correspond to pixels with different PL lifetimes, which were facilely resolved into two images (Fig. 6c). Importantly, since no curve fitting is required in the phasor analysis 47 , the amount of calculation required to resolve the encrypted message is greatly reduced, thus allowing near-real-time decryption of the message. Accordingly, we envision that the combination of grayscale-based lifetime encryption and phasor analysis has the potential to become a standard solution for DURTP-based steganography design in the future. In summary, in this work, we proposed a facile strategy to achieve programmable DURTP with a low activation threshold by enhancing its ISC through halogen doping in CDs. 
On that basis, we have further contrived the demonstration of grayscale-based patterning, UV photography and steganography in flexible films, showing the promising potential of this material for advanced optical applications. Overall, we believe that our findings in this work reveal a new direction for the applications of DURTP materials and contribute special insights into the rational synthesis, luminescence and optical applications of CDs.

p-Iodanil was synthesized according to a previous reference 48 with modification: briefly, 3 g (7 mmol) of powdered p-bromanil was treated with KI (2.3 g, 14 mmol) in ethanol (30 ml) and refluxed for 2 h. After being filtered, the solid was further treated with 2.1 g NaI (14 mmol) in ethanol (30 ml) and refluxed for another 2 h. The dark-brown product was isolated by filtration and recrystallized in ethyl acetate (mp 279-280°C).

Apparatus and characterization

The infrared absorbance spectra were measured on a Thermo Scientific Nicolet iS5 FT-IR spectrometer (Thermo, US), and the UV-vis absorption spectra were measured with a Cintra 2020 spectrometer (GBC, Australia). X-ray photoelectron spectroscopy of CDs was performed on a PHI 5000 Versa Probe machine (ULVAC-PHI, Japan). The transmission electron microscopy images of the CDs were captured on an F-200 device (JEOL, Japan), and the powder XRD patterns were measured on a MiniFlex600 device (Rigaku, Japan) with a Cu target (λ = 1.5405 Å). For photoluminescence characterization, the emission spectra and lifetimes were measured on an FLS-1000 spectrometer, while the quantum yields were measured with a C9920-02G absolute quantum yield measurement system (Hamamatsu Photonics, Japan). For lifetime imaging, a wide-field time-resolved system consisting of a pulsed laser, an ICCD camera, and optical components (camera lens and long-pass filter) was adopted. The emission from the sample was captured by an ICCD, model DH312T-18U-03, from Andor Technology.

Synthesis of the CDs

The synthesis of CDs was adapted from previous literature with some modification 19 . Two types of molecular precursors, the p-benzoquinone derivatives and ethylenediamine monohydrate, were used together to synthesize CDs through a one-pot solvothermal procedure. In general, 0.5 mmol of p-benzoquinone derivatives were dispersed in 100 ml of ethanol by bath ultrasonication. After adding 1 mmol of ethylenediamine monohydrate (81 μl), the dispersion quickly turned dark hazel. The mixture was sealed in a stainless-steel autoclave and heated to 160°C for 12 h before cooling to room temperature. The raw product was concentrated by rotary evaporation, dialyzed against water (Mw cutoff = 2000 D, 24 h), dried under reduced pressure and purified via silica column chromatography (eluent: 5-15% methanol in dichloromethane) for further use.

Fabrication of the CDs/PVP composite films

Typically, to fabricate a CDs/PVP composite film, 1 g of vacuum-dried PVP was dissolved in 40 ml of double-distilled water under mild agitation at 50°C to form a homogeneous solution. Then, 5 mg of CDs dissolved in 1 ml of methanol was quickly added by injection. The mixture was kept at 65°C under agitation for 60 min to remove methanol and prevent the formation of visible bubbles. Afterward, the viscous solution was poured onto a 20 cm × 20 cm square polystyrene petri dish and kept at 30°C in the atmosphere overnight to remove the solvent. (Note that the heating device should be leveled in advance to obtain a high-quality film with uniform thickness.) 
After shaping, the composite film was gently peeled from the dish and further dried at 120°C and 0.1 mbar for 2 h. The resultant film (50 μm in thickness) was then sealed with 50 μm PET via lamination for further use.

Lifetime imaging and phasor analysis

The lifetime imaging system was set up as illustrated in Fig. S15. A pulsed UV beam with a 20 ms width (2 mW cm −2 ) was triggered at a 1 Hz frequency to excite the activated sample with a designed DURTP pattern. Here, the ICCD was triggered through a BNC cable with a TTL signal synchronized with the laser beam. The emission signal was collected in the same direction as the excitation beam by a camera lens combined with a long-pass filter. The captured images were analyzed by both lifetime mapping and phasor analysis methods. For lifetime mapping, the sequential intensities of each pixel were extracted and fitted with an exponential decay model to give their lifetime values. For phasor analysis, the decay profile of each pixel was processed in the frequency domain according to the following calculations: G = (1/I) Σ_k C_k cos(ω k t_k) and S = (1/I) Σ_k C_k sin(ω k t_k), summed over k = 1 to N, where (G, S) are the phasor plot coordinates of a given pixel, ω is the angular frequency of the pulsed laser, C_k is the emission intensity at a certain time (channel k) after excitation, I is the summation of C_k, N is the total number of channels, and t_k is the width of channels in time.
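To make the per-pixel processing concrete, the following minimal Python sketch computes the phasor coordinates (G, S) from a stack of gated images using the standard discrete phasor transform and recovers a lifetime estimate for a quasi-monoexponential decay as tau = S/(omega*G). The array shapes, the use of channel mid-times, and the synthetic test data are assumptions of this sketch rather than the exact MATLAB implementation used by the authors.

import numpy as np

def phasor_coordinates(stack, channel_width, frequency_hz):
    # stack: gated decay images of shape (N, H, W); stack[k] holds C_k for every pixel
    n = stack.shape[0]
    omega = 2.0 * np.pi * frequency_hz
    # Mid-time of each channel after the excitation pulse (an assumption of this sketch)
    t_k = (np.arange(n) + 0.5) * channel_width
    cos_w = np.cos(omega * t_k)[:, None, None]
    sin_w = np.sin(omega * t_k)[:, None, None]
    total = stack.sum(axis=0)                      # I = sum over channels of C_k
    g = (stack * cos_w).sum(axis=0) / total
    s = (stack * sin_w).sum(axis=0) / total
    tau = s / (omega * g)                          # valid for quasi-monoexponential decay
    return g, s, tau

# Illustrative use: a synthetic 64 x 64 stack decaying with tau = 0.1 s,
# 20 channels of 50 ms each, excitation repeated at 1 Hz
t = (np.arange(20) + 0.5) * 0.05
stack = np.exp(-t / 0.1)[:, None, None] * np.ones((20, 64, 64))
g, s, tau = phasor_coordinates(stack, channel_width=0.05, frequency_hz=1.0)
print(round(float(tau.mean()), 3))   # close to 0.1 s for an ideal mono-exponential decay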
6,118
2022-05-30T00:00:00.000
[ "Materials Science" ]
Enhancing Monascus Pellet Formation for Improved Secondary Metabolite Production Filamentous fungi are well-known for their ability to form mycelial pellets during submerged cultures, a characteristic that has been extensively studied and applied. However, Monascus, a filamentous saprophytic fungus with a rich history of medicinal and culinary applications, has not been widely documented for pellet formation. This study aimed to investigate the factors influencing pellet formation in Monascus and their impact on citrinin production, a key secondary metabolite. Through systematic exploration, we identified pH and inoculum size as critical factors governing pellet formation. Monascus exhibited optimal pellet growth within the acidic pH range from 5 to 6, resulting in smaller, more homogeneous pellets with lower citrinin content. Additionally, we found that inoculum size played a vital role, with lower spore concentrations favoring the formation of small, uniformly distributed pellets. The choice of carbon and nitrogen sources also influenced pellet stability, with glucose, peptone, and fishmeal supporting stable pellet formation. Notably, citrinin content was closely linked to pellet diameter, with larger pellets exhibiting higher citrinin levels. Our findings shed light on optimizing Monascus pellet formation for enhanced citrinin production and provide valuable insights into the cultivation of this fungus for various industrial applications. Further research is warranted to elucidate the molecular mechanisms underlying these observations. Introduction Filamentous fungi play a pivotal role in various industrial processes, contributing significantly to the production of enzymes, organic acids, antibiotics, and cholesterollowering agents through fermentation [1].In submerged culture, these fungi exhibit two primary forms: mycelium aggregates forming pellets and uniformly dispersed suspended mycelium, fostering uniform growth [2].The process of filamentous fungal pellet formation has been extensively studied, revealing two distinct typologies: coagulation and noncondensing, as illustrated in Figure 1.Coagulation-type pellets result from the coalescence of numerous spores during the pre-fermentation phase, followed by spore germination and subsequent mycelial tip growth.In contrast, non-condensing pellets undergo spore germination preceding pellet formation [3].Fungi cultivated in pellet form offer several advantages, such as low fermentation broth viscosity, ease of biomass harvesting, and efficient oxygen diffusion [4]. The production of fungal products, whether target products or secondary metabolites, varies based on the morphological characteristics of the fungus [5].Metabolites produced in pellet form typically exhibit higher yields, while fungi growing in suspended mycelia are more conducive to enzyme production [6].However, the choice of form depends on the fungus species and external conditions.For example, Aspergillus terreus, characterized by small-diameter mycelial spheres, provides a favorable environment for lovastatin synthesis [7].Conversely, Aspergillus nidulans, existing in a dispersed mycelial form, demonstrates enhanced penicillin production [8].Research by Sai jin et al. 
indicates that small mycelial pellets formed by Aspergillus niger as dispersed mycelial fragments yield higher citric acid compared to undispersed pellets [9]. Various cultivation factors, including pH, inoculum, nutrients, and trace metals, also intricately affect mycelial morphology. For instance, certain Rhizobium species have a high probability of forming pellets at high inoculum spore concentrations (up to 3 × 10 9 spores/L) [10], Penicillium chrysogenum strains require high pH values for pellet formation [11], and carbon sources play a pivotal role in Aspergillus terreus pellet formation [12]. Consequently, the study of fungal pellets has been predominantly limited to individual fungal species.

While previously reported fungal species growing in granules include Penicillium [13], Aspergillus [14], and Rhizopus [15], with Aspergillus niger being predominant, there is a paucity of literature on Monascus growing in pellets. Monascus, a filamentous saprophytic fungus, is renowned for its historical applications in medicine and food [16]. Monascus is known for its significant polyketide secondary metabolites, including pigments, Monacolin K, and citrinin. Despite extensive research into gene manipulation to eliminate citrinin production due to its nephrotoxic nature [17], few studies have explored the relationship between citrinin production and mycelial morphology. Stirring rates, for example, significantly impact Monascus red-pigment production, with higher rates resulting in greater pigment yield and shorter mycelial branches [18]. Moreover, nonionic surfactants have demonstrated the capacity to modulate pigment production and mycelial morphology during Monascus fermentation [19]. 
The primary objective of this study is to investigate the external factors influencing Monascus pellet formation, with a focus on optimizing pellet fermentation conditions. Additionally, this study aims to elucidate the relationship between pellet morphology and the secondary metabolite citrinin. The findings of this investigation provide valuable insights into the interplay between Monascus pellet morphology and citrinin production, laying a robust foundation for future applications and advancements in this field.

Strains and Media

The wild strain M. purpureus RP2, maintained in the laboratory on potato dextrose agar medium at 30 °C, was selected as the experimental strain. For liquid fermentation, M. purpureus RP2 was initially cultivated on potato dextrose agar medium at 30 °C for 4 days. Subsequently, the mycelium was subjected to two rounds of washing with 5 mL of sterile water and then filtered through sterile gauze to obtain the spore suspension. Freshly prepared spore suspensions, each containing 10 7 spores/mL, were then inoculated into 30 mL of potato dextrose broth and adjusted to a pH of 5.5 (its natural state). The inoculated cultures were incubated at 28 °C for 5 days under vigorous shaking at 180 rpm. All fermentation experiments were conducted in 100 mL conical flasks, each containing 30 mL of sterile medium. 
Shaker Pelletizing: Screening Factors

Following preliminary assessments, the influence of various factors on pellet formation by Monascus was systematically screened. These factors encompassed pH, carbon source, nitrogen source, and spore concentration. A pH of 5.5 (the standardized optimal value) was employed as the baseline for all screened factors unless otherwise specified. The parameters under investigation included media pH at five levels (pH 4.0, 6.0, 7.0, 8.0, and 10.0) and spore concentration at five levels (1.5 × 10 3 , 1.5 × 10 4 , 1.5 × 10 5 , 1.5 × 10 6 , and 1.5 × 10 7 spores/mL). Additionally, various carbon sources, such as glucose, sucrose, mannitol, soluble starch, xylose, and citric acid (each at 15 g/L), were examined. Similarly, diverse nitrogen sources, including NH 4 Cl, peptone, yeast leavening, soybean meal, fish powder, and C 5 H 8 NNaO 4 (each at 10 g/L), were investigated to evaluate their impact on fungal pellet formation. The pH of the media was adjusted using either 2 M NaOH or 2 M HCl.

Analytical Methods

The initial spore concentration was determined using a hemocytometer and a light microscope (Olympus CX43, Tokyo, Japan). To measure spore concentration, the spore solution was diluted 1000-fold before quantification. Mycelial morphology in submerged cultures was assessed through visual observation over a 72 h period, enabling the differentiation between mycelium and pellets. Pellet biomass was quantified using the dry weight method. Specifically, a defined volume of fermentation broth was filtered through pre-weighed filter paper, and the mycelium was rinsed thrice with ultrapure water. Subsequently, the filter paper was dried to a constant weight at 60 °C to ascertain mycelium biomass (dry weight). The diameter of pellets was measured using a vernier caliper with a resolution of 0.01 mm, with the average diameter calculated from measurements of ten pellets.

Morphological Observation of M. purpureus Pellet

The morphology of the pellets was observed under an optical microscope. For more detailed observations, the micro-morphology of the pellets was examined using an SU8020 scanning electron microscope (SEM; Hitachi, Ltd., Tokyo, Japan). After 5 days of liquid fermentation, the pellets were fixed using a 2.5% glutaraldehyde solution, followed by a dehydration process involving sequential ethanol solutions of varying concentrations. This dehydration procedure, lasting 10 min for each concentration, was repeated twice. Subsequently, the morphology of the pellets, after gold sputter coating, was examined under SEM [20]. The internal mycelial structure of the pellets was observed by employing the paraffin sectioning technique. This method encompassed the fixation, staining, decolorization, re-staining, drying, and sealing of sections, culminating in the observation of pellet tissue under a microscope. 
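To illustrate the routine calculations behind the analytical methods above, the short Python sketch below converts a hemocytometer count into a spore concentration and summarizes caliper measurements of ten pellets as a mean and standard deviation. The chamber geometry (10^-4 mL per large counting square) and all numerical values are assumptions for illustration; only the 1000-fold dilution follows the procedure described above.

import statistics

def spore_concentration(mean_count_per_large_square, dilution_factor=1000):
    # Standard improved Neubauer chamber: one large square corresponds to 1e-4 mL,
    # so spores/mL = count per large square * 1e4 * dilution factor
    return mean_count_per_large_square * 1e4 * dilution_factor

def pellet_diameter_stats(diameters_mm):
    # Mean and standard deviation of caliper measurements (ten pellets in this study)
    return statistics.mean(diameters_mm), statistics.stdev(diameters_mm)

# Hypothetical readings: 1.5 spores per large square counted after the 1000-fold dilution
print(spore_concentration(1.5))   # 1.5e7 spores/mL, the highest inoculum level screened
print(pellet_diameter_stats([1.38, 1.45, 1.41, 1.36, 1.44, 1.39, 1.42, 1.40, 1.37, 1.43]))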
Measurement of Monascus Citrinin Production

To assess the citrinin content, the fermentation broth of Monascus was subjected to a pre-treatment procedure. One milliliter of the fermentation broth was mixed with 2 mL of chromatographic-grade methanol and then subjected to ultrasonic extraction, away from light, for 30 min. Subsequently, the mixture was placed in a water bath at 60 °C for 1 h and then allowed to cool to room temperature. The supernatant was obtained by centrifugation for 15 min and subsequently filtered through a 0.22 µm filter. Citrinin content was determined via high-performance liquid chromatography-mass spectrometry (HPLC/MS) under specified conditions. The analysis was conducted on an Agilent 1200 series HPLC system (Agilent, Santa Clara, CA, USA) coupled with a triple quadrupole mass spectrometer (Agilent 6460 system). The analytical column employed was an Agilent Eclipse Plus C18 (2.1 mm × 50 mm, 3.5 µm). Conditions for the analysis encompassed a mobile phase consisting of a 70:30 (v/v) mixture of 0.1% formic acid and acetonitrile, a flow rate of 0.4 mL/min, a detection temperature of 40 °C, an injection volume of 1 µL, an electrospray ionization source (ESI) in positive ion mode, a spraying voltage of 3000 V, auxiliary gas at 200 °C, a sheath gas flow rate of 11 mL/min, and a sheath gas temperature of 325 °C. The analysis was conducted in multiple reaction monitoring (MRM) mode, and the characteristic ion pair for citrinin was 233/251.2.

Statistical Analysis

All experiments were performed in triplicate, and numerical data are expressed as mean ± standard deviation. Statistical analysis was conducted using GraphPad Prism 9.0, employing analysis of variance (ANOVA). Statistically significant differences were identified with p-values < 0.05 and < 0.01.

Effect of Initial pH on Pellet Formation

Interestingly, our investigation revealed robust pellet growth across a pH range from 4.0 to 8.0, with a noticeable absence of pellet growth at an alkaline pH of 10.0. This observation can be attributed to the inherent negative surface charge typically exhibited by fungal spores, which discourages spore aggregation [21].

Upon closer examination, a consistent pattern emerged between pellet diameter and biomass in response to pH variations. Both parameters exhibited an inverse correlation with pH, as both pellet diameter and biomass decreased with increasing pH. For instance, compared to the substantial 25.64 ± 0.27 mm pellet diameter observed at an initial pH of 4.0 (Figure 2a), the pellet diameter was notably reduced by 38.61%, 49.10%, and 62.24% when adjusting the medium pH to 6.0, 7.0, and 8.0, respectively. This trend was mirrored in the biomass of the pellets, as depicted in Figure 2b, where decreases of 53.75%, 56.5%, and 87.62% were observed at pH values of 6.0, 7.0, and 8.0, respectively, compared to the biomass at pH 4.0. 
Remarkably, acidic conditions were found to be particularly conducive to the formation of well-defined pellets. Specifically, at pH 6.0, uniformly sized and well-segregated pellets were generated, characterized by an average diameter falling within the range from 1.4 to 1.6 mm. Conversely, at pH 4.0, although pellet size was relatively large, the pellets were not uniformly distributed, with an average diameter of 2.58 ± 0.19 mm. In contrast, at pH 8.0, pellet size was markedly smaller, with an average diameter of 0.97 ± 0.05 mm. 
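As a sketch of how the pH comparisons can be tested statistically (the study itself used GraphPad Prism 9.0, as noted in the Statistical Analysis section), the following Python example runs a one-way ANOVA on hypothetical triplicate pellet diameters; the numbers are illustrative only, not the measured data.

from scipy import stats

# Hypothetical triplicate pellet diameters (mm) at three initial pH values
diameters_ph4 = [2.41, 2.62, 2.71]
diameters_ph6 = [1.52, 1.46, 1.58]
diameters_ph8 = [0.92, 1.01, 0.98]

f_stat, p_value = stats.f_oneway(diameters_ph4, diameters_ph6, diameters_ph8)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 would indicate a significant pH effect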
Effect of Different Carbon and Nitrogen Sources on Pellet Formation

In our investigation of the impact of six diverse carbon sources on pellet formation under an initial medium pH of 5.5, we observed variations in pellet formation among these sources. Notably, sucrose, xylose, mannitol, and soluble starch led to reductions of 32.10%, 56.72%, 17.17%, and 4.49%, respectively, in biomass compared to the glucose group (Figure 3a). Conversely, citric acid as the sole carbon source resulted in a 58.51% increase in biomass relative to glucose. The choice of carbon source also influenced pellet diameter, with sucrose, mannitol, soluble starch, and citric acid contributing to increases of 2.40%, 10.51%, 17.15%, and 50.52%, respectively, in pellet diameter (Figure 3b). Xylose, however, led to a 37.48% decrease in pellet diameter. Interestingly, with soluble starch and xylose as carbon sources, a mixture of freely suspended mycelium and mycelial clumps was observed. Although citric acid yielded higher pellet biomass and diameter compared to the glucose group, visual examination revealed excessively large pellets with signs of autolysis in the central region. Consequently, our findings suggest that glucose is the preferred carbon source for optimal pellet formation. 
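The relative changes reported above are simple percentage differences against the glucose control; a minimal Python sketch of that calculation, with hypothetical biomass values, is given below.

def percent_change(value, control):
    # Percent change of a treatment mean relative to the control group
    return (value - control) / control * 100.0

# Hypothetical dry-weight biomass values (g/L); only the form of the calculation
# mirrors the reported figures (e.g., -32.10% for sucrose, +58.51% for citric acid)
glucose, sucrose, citric_acid = 5.00, 3.40, 7.93
print(round(percent_change(sucrose, glucose), 1))      # -32.0
print(round(percent_change(citric_acid, glucose), 1))  # 58.6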
For the effect of nitrogen sources on pellets, our study explored the impact of NH 4 Cl, peptone, yeast extract powder, soybean meal powder, fish meal, and C 5 H 8 NNaO 4 as the sole nitrogen sources. NH 4 Cl, yeast extract powder, soybean meal powder, and C 5 H 8 NNaO 4 significantly increased pellet biomass, resulting in respective enhancements of 49.14%, 52.17%, 48.39%, and 54.18% relative to peptone as the nitrogen source (Figure 4a). In terms of pellet diameter, NH 4 Cl, soybean meal powder, and C 5 H 8 NNaO 4 decreased pellet diameter by 33.40%, 24.65%, and 8.84%, respectively, compared to peptone (Figure 4b). Yeast extract powder, as the sole nitrogen source, resulted in mycelium with an uneven density distribution.

Relationship between Pellet Size and Citrinin

During the fermentation process, the intricate relationship between the production of secondary metabolites and the morphology of filamentous fungi becomes evident [23]. Varied pellet diameters seem to exert distinct influences on citrinin synthesis, with larger pellet diameters (average diameter of 2.04 ± 0.008 mm) correlating with higher citrinin content, while smaller pellet diameters (average diameter of 1.4 ± 0.07 mm) are associated with lower citrinin levels in the fermentation. This correlation is visually represented in Figure 6b, showcasing the differences in citrinin content across various pellet diameters, ranging from a few hundred micrometers to one mm (Figure 6c-e).

The internal structure of fungal pellets is characterized by a dense kernel of tightly packed hyphae [24]. While the conventional method for pellet analysis involves microscopic examination [25], this approach has limitations, including sample squeezing that can impact biomass size and the loss of three-dimensional pellet information. To comprehensively understand pellet morphology, we employed scanning electron microscopy (SEM) for both overall and internal pellet visualization, supplemented by paraffin sections to observe the internal mycelial structure. 
SEM images at 170× and 1000× magnifications (Figure 7a) unveiled the formation of pellets through the interweaving of mycelium, creating gaps between the mycelial elements. In comparison, control pellets with a mean diameter of 1.7 ± 0.08 mm exhibited a looser surface, while smaller-diameter pellets were tightly ensconced by mycelium. Notably, the mycelium structure within larger pellets appeared relatively less dense than that of their smaller counterparts, facilitating the expulsion of metabolites.

For a more detailed examination of the internal mycelial structure, paraffin sections were prepared for mycelial pellets cultured for 168 h (Figure 7b). Optical microscopy at 40× and 200× magnifications revealed robust overall growth and a uniform, dense distribution of mycelium within smaller pellets (average diameter of 1.4 ± 0.07 mm). In contrast, larger pellets (average diameter of 2.04 ± 0.008 mm) displayed sparse mycelium that appeared lighter in color following staining. This detailed analysis provides valuable insights into the complex interplay between pellet morphology and citrinin production.

Discussion

Industrial fermentation processes encounter challenges with filamentous fungi, and the adoption of a pelletized growth form may present solutions to some of these issues [26]. While prior studies have explored various filamentous fungal species like Rhizopus [10], Aspergillus [27], and Penicillium [13], all demonstrating pellet growth, it was surprising that, before this investigation, there were no reports of Monascus exhibiting pellet morphology. In our study, we successfully induced Monascus to adopt pellet form by intentionally manipulating four key factors: pH (ranging from 4.0 to 10.0), carbon source selection (including glucose, sucrose, mannitol, soluble starch, xylose, and citric acid), nitrogen source variation (NH4Cl, peptone, yeast leavening, soybean meal, fish powder, and C5H8NNaO4), and spore concentration (ranging from 1.5 × 10^3 to 1.5 × 10^7 spores/mL). The morphological transformations in the pellets were closely monitored over a 168 h fermentation period. Furthermore, our
investigation explored the correlation between citrinin production and pellet characteristics, revealing a noteworthy observation: pellets with smaller diameters exhibited lower citrinin levels.

In our study, we systematically examined the impact of various factors on Monascus pellet formation, with a particular focus on pH and inoculum size. The role of pH in shaping pellet morphology is well established, although its effects can vary among different fungal strains [11]. Our findings underscored the pivotal role of pH in Monascus pellet formation, with acidic conditions, specifically in the pH range from 5 to 6, proving to be particularly conducive. Previous research on Cordyceps sinensis Cs-Hk1 and Rhizopus oryzae has reported similar observations, where lower pH levels (around 3.3 to 2.6) promoted the development of small, uniformly distributed pellets [15,28]. In our study, Monascus pellets formed under acidic conditions exhibited distinct characteristics, including small diameters, uniform and densely packed internal distributions, robust pellet integrity, and lower citrinin levels. This aligns with the general trend observed in fungal spores, which are negatively charged and exhibit variations in surface charges and isoelectric points [21]. The negative charge of fungal spores, expressed through electrophoretic mobility or zeta potential, increases with higher pH values, inhibiting spore aggregation [29,30]. This was substantiated by experimental demonstrations showing that increased pH hindered spore adherence to negatively charged surfaces [31]. Additionally, pH significantly influences the hydrophobicity of proteins, particularly those with hydrophobic characteristics that strongly impact adhesion [32].

Conversely, higher pH levels (7 to 8) in our study led to reduced biomass production and uneven pellet distribution. This observation in Monascus aligns with the notion that, in general, an increase in pH impedes spore aggregation due to the heightened negative charge on the conidial cell wall surface. The robust association we observed between initial pH levels and well-defined pellet formation in Monascus, particularly favoring acidic conditions, adds valuable insights to the understanding of fungal pelletization dynamics. Our investigation highlights the intricate relationship between initial pH conditions and Monascus pellet morphology, emphasizing the favorable conditions for pellet development under acidic pH values (pH 5-6).

In shaping the morphology of filamentous fermentation, critical factors such as the quantity, type, and age of the inoculum play pivotal roles. Our study focused on elucidating the impact of spore concentration, revealing noteworthy insights into Monascus pellet formation. Higher spore concentrations, specifically at 1.5 × 10^7 spores/mL, were associated with the development of small, uniformly distributed pellets. These findings align with the observed inverse relationship between spore quantity and pellet size in various Aspergillus species [33]. Similar trends, indicating a decrease in spore inoculum leading to the formation of larger pellets, were documented in the study conducted by Posch and Herwig [34]. Notably, there exists a discrepancy in the literature regarding the preferred inoculum concentrations for optimal pellet formation [35].
To reconcile these disparities, our study integrated insights from the morphological development of Aspergillus niger with spore inoculum analyses [36]. The observed trends may be attributed to the interplay between NH4+ uptake, glucosamine concentration, and dissolved oxygen levels. Higher inoculum concentrations appear to enhance mycelial aggregation through an accelerated NH4+-uptake rate, subsequently leading to the faster conversion and release of glucosamine. This released glucosamine acts as a cohesive substance, promoting mycelial aggregation and pellet formation. As the inoculum concentration continued to increase, we observed a reduction in pellet diameter, accompanied by significant changes in roughness and densification. Assigning specific factors responsible for mycelial aggregation and disaggregation proves challenging, as discussed in the review by Braun et al. [37]. Our results underscore the significance of the inoculum amount in determining mycelial morphology by influencing the internal environment of the fermentation broth and the overall fermentation rate.

Monascus, like other filamentous fungi, relies on organic carbon and nitrogen sources to support its growth, providing essential energy for fungal development. Our investigation explored the impact of various carbon and nitrogen sources on pellet formation, revealing diverse effects on Monascus morphology. Organic compounds crucial for fungal growth, including sugars like D-glucose, D-fructose, and D-sucrose, are readily assimilated and utilized by Monascus. Additionally, Monascus exhibits the capacity to utilize a spectrum of other compounds, such as polysaccharides, organic acids, amino acids, alcohols, and hydrocarbons [38]. In our experiment, different carbon sources, excluding citric acid, did not significantly decrease pellet size but influenced both biomass and pellet dimensions. All studied carbon sources yielded pellet sizes ranging from 1 mm to 3 mm, showcasing well-developed mycelium on the pellet surface, and stable-sized pellets were consistently observed. Notably, the medium supplemented with glucose as the carbon source exhibited the highest biomass, consistent with findings from other studies highlighting glucose as a preferred carbon source for biomass production [39]. Mycelia, recognized as pivotal sites for growth and branching [40], play a crucial role in the expansion of pellets. While the exact mechanism behind the influence of glucose on pellet size and biomass remains unclear, it is evident that glucose contributes to increased pellet size by directly providing energy and carbon.

Changes in nitrogen sources led to variations in both biomass and pellet diameter. Pellet diameters for all nitrogen sources, except yeast extracts, fell within the 1 mm-3 mm range and exhibited robust development. However, NH4Cl and C5H8NNaO4 resulted in a significant decrease in pellet size, aligning with similar findings in the work of Jonsbu et al.
[41]. These salts were observed to induce a growth lag, effectively controlling pellet growth. Earlier reports emphasizing the promotion of pellet formation by organic nitrogen sources, due to their nutrient-rich composition supporting cell growth, were corroborated in our study [42]. Our investigation highlighted the profound impact of carbon- and nitrogen-source selection on Monascus pellet formation. These findings underscore the pivotal role of nutrient sources in shaping fungal morphology. Glucose emerged as the preferred carbon source for optimal pellet formation, while peptone and fish meal demonstrated favorability as nitrogen sources for promoting pelleting.

While the optimal pellet morphology for maximizing secondary metabolite production in Monascus remains undetermined, studies have demonstrated a positive correlation between productivity and specific features, such as short, swollen branches [43]. Generally, a less dense internal structure within pellets is associated with higher productivity, a trend observed in the highly productive lactic acid formation from the pellet morphology of Rhizopus oryzae [44]. Despite this, smaller pellets are often considered more compact and stable. The overall connection between pellet size and productivity underscores the importance of oxygen and substrate availability, highlighting the need for pellets with open channels for optimal productivity. In summary, the relationship between pellet morphology and productivity in Monascus fermentation is intricate, with attention to features like short, swollen branches and internal structure density playing a key role in secondary metabolite production.

Conclusions

In conclusion, the production of vital secondary metabolites by Monascus, including pigments, γ-aminobutyric acid, Monacolin K, and citrinin, is significantly influenced by changes in mycelial morphology. Our findings established a correlation between citrinin content and pellet diameter, indicating that larger pellets tend to exhibit higher citrinin content. This correlation may be attributed to the autolysis tendency of large-diameter pellets, leading to the elimination of metabolites due to their sparse distribution within the pellet structure.

To explore the growth of Monascus pellets, our study emphasized the pivotal roles of pH and inoculum size in pellet formation, highlighting the pH range from 5 to 6 as optimal for fostering pellet growth. Additionally, we uncovered a relationship between pellet diameter and citrinin content, demonstrating that smaller pellets harbor lower citrinin levels. Nevertheless, further research is imperative to unravel the underlying molecular mechanisms and key molecules that drive pellet formation in filamentous fungi.

Figure 1. Description of the process of coagulative and non-coagulative types of pellets.
Figure 2. Effect of M. purpureus fermentation broth pH on diameter and biomass of pellets. (a) Diameter, (b) dry weight. Data labeled with different lowercase letters indicate significant differences (p < 0.05).
Figure 3. Effect of carbon source in M. purpureus fermentation broth on diameter and biomass of pellets. (a) Dry weight, (b) diameter. Data labeled with different lowercase letters indicate significant differences (p < 0.05).
Figure 4. Effect of nitrogen source in M. purpureus fermentation broth on diameter and biomass of pellets. (a) Dry weight, (b) diameter. Data labeled with different lowercase letters indicate significant differences (p < 0.05).
Figure 5. Effect of M. purpureus spore addition on diameter and biomass of pellets. (a) Dry weight, (b) diameter. Data labeled with different lowercase letters indicate significant differences (p < 0.05).
Figure 6. Determination of citrinin in the fermentation broth of M. purpureus. (a) Total ion flow diagram in fermentation broths, (b) content of citrinin in different pellet diameters. (c-e) Pellets of different diameters: (c) average diameter of 1.7 ± 0.08 mm, (d) average diameter of 1.4 ± 0.07 mm, (e) average diameter of 2.04 ± 0.008 mm.
Figure 7. Comparison of the morphology of pellets of different diameter sizes. (a) Scanning electron microscope observation of the microstructure of pellets at ×170 (left) and ×1000 (right). (b) Paraffin sections observed under a light microscope at ×40 (left) and ×200 (right).
7,978.6
2023-11-01T00:00:00.000
[ "Environmental Science", "Biology" ]
A constraint handling technique using compound distance for solving constrained multi-objective optimization problems

Guiding the working population to evenly explore the valuable areas which are not dominated by feasible solutions is important in the process of dealing with constrained multi-objective optimization problems (CMOPs). To this end, based on the angular distance and the ℓp-norm, this paper introduces a new compound distance to measure an individual's search diameter in the objective space. After that, we propose a constraint handling technique using the compound distance and embed it in an evolutionary algorithm for solving CMOPs. In the proposed algorithm, the individuals with large search diameters in the valuable areas are given priority to be preserved. This prevents the working population from getting stuck in local areas and thus helps find the optimal solutions for CMOPs more effectively. A series of numerical experiments show that the proposed algorithm has better performance and robustness than several existing state-of-the-art constrained multi-objective evolutionary algorithms in dealing with different CMOPs.

Introduction

This paper focuses on constrained multi-objective optimization problems (CMOPs), which exist widely in the real world [1][2][3][4][5][6][7] and are generally formulated as follows [8][9][10]:

min F(x) = (f_1(x), f_2(x), ..., f_m(x))^T,
s.t. g_i(x) ≤ 0, i = 1, ..., r,
     h_j(x) = 0, j = r + 1, ..., q,
     x = (x_1, ..., x_n)^T ∈ Ω ⊂ R^n,    (1.1)

where f_1(x), f_2(x), ..., f_m(x) are m real-valued objective functions, g_i(x) ≤ 0 and h_j(x) = 0 are inequality and equality constraints, respectively, and Ω is the search space. In order to facilitate the analysis, the constraint violation value of an individual x is usually calculated as

c(x) = Σ_{i=1}^{r} max(0, g_i(x)) + Σ_{j=r+1}^{q} max(0, |h_j(x)| − ε),    (1.2)

where ε (e.g., ε = 10^−4) is a sufficiently small positive number used to relax the equality constraints h_j(x) = 0, j = r + 1, ..., q. Obviously, if and only if c(x) = 0, the individual x satisfies all the constraints and is called a feasible solution. When c(x) > 0, x is recorded as an infeasible solution. For any two solutions x_1, x_2 ∈ Ω, if F(x_1) ≠ F(x_2) and f_i(x_1) ≤ f_i(x_2) for all i = 1, ..., m, then x_1 is said to dominate x_2, denoted as x_1 ≺ x_2. A solution that is not dominated by other solutions in Ω is called an unconstrained Pareto optimal solution, and the geometry formed by all unconstrained Pareto optimal solutions in the objective space is defined as the unconstrained Pareto front (UPF). Similarly, the constrained Pareto front (CPF) is formed by all the feasible solutions that are not dominated by other feasible ones in Ω. Therefore, the essence of dealing with a CMOP is to find a set of feasible solutions which are uniformly distributed on its CPF.

In the research community, a variety of constrained multi-objective evolutionary algorithms (CMEAs) have been proposed to deal with CMOPs [11][12][13][14][15][16]. Therein, many different constraint handling techniques (CHTs) have been used to balance objectives and constraints statically or dynamically. In penalty-based CHTs [17][18][19], the constraint violation value of an individual is transformed into a penalty on its objectives by dynamic and adaptive penalty functions. Feasibility-led CHTs, such as the constraint dominance principle (CDP) [20][21][22][23][24] and the epsilon constraint-handling method (EC) [25][26][27], prefer individuals with smaller constraint violation values.
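As a concrete illustration of these definitions, the short Python sketch below evaluates the overall constraint violation in the summation form shown in Eq (1.2) and checks Pareto dominance between two objective vectors. The function names and the toy constraint values are ours, not taken from the paper.

```python
import numpy as np

def constraint_violation(g_values, h_values, eps=1e-4):
    """Overall constraint violation c(x); zero iff all constraints are satisfied.

    g_values: inequality-constraint values g_i(x), feasible when g_i(x) <= 0.
    h_values: equality-constraint values h_j(x), relaxed by the tolerance eps.
    """
    g = np.asarray(g_values, dtype=float)
    h = np.asarray(h_values, dtype=float)
    return float(np.maximum(0.0, g).sum() + np.maximum(0.0, np.abs(h) - eps).sum())

def dominates(f1, f2):
    """Pareto dominance for minimization: f1 is no worse in every objective
    and strictly better in at least one."""
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

# Toy values: two inequality constraints and one relaxed equality constraint.
print(constraint_violation([0.2, -1.0], [0.05]))   # positive, hence infeasible
print(constraint_violation([-0.3, -1.0], [0.0]))   # 0.0, hence feasible
print(dominates([1.0, 2.0], [1.5, 2.0]))           # True
```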
Considering that the working populations guided by these two types of CHTs tend to get stuck in local areas and then fail to find the CPFs [24,28], several new CHTs tend to ignore feasibility and optimize only the objectives in order to improve their global search ability. For example, the push and pull search (PPS) [29,30] first pushes the working populations toward UPFs without considering constraints in its push stage, and then pulls them back to CPFs by an improved EC in the pull stage. In C-TAEA [31] and CCMO [32], one population that prefers feasible solutions is used to provide search pressure toward CPFs, while another population that ignores feasibility and optimizes only the objectives is used to force the working population to jump out of local regions. Many numerical experiments show that these new CHTs do improve the ability of the working population to cross the complex infeasible regions in front of the CPFs. Nevertheless, as they overly emphasize the importance of optimizing constraints and objectives, these CHTs may mislead the population to jump over the feasible regions containing constrained Pareto optimal solutions. This would lead to missing some or even all of the fragments of the CPF. So, it is still a valuable research topic to design effective CHTs for dealing with CMOPs.

To deal with CMOPs effectively, this paper proposes a compound distance-based CHT, which aims to guide the population to explore the valuable areas that are not dominated by feasible solutions and to avoid insufficient search in some areas. Specifically, based on the angle and the ℓp-norm, we first introduce a compound distance into the objective space to measure an individual's search diameter. Then, the individuals with large search diameters in the valuable areas are given priority to be preserved. This effectively guides the working population to distribute evenly in the valuable areas and avoids missing the feasible regions containing constrained Pareto optimal solutions. In other words, the compound distance-based CHT is effective in guiding the working population to find CPFs. Accordingly, we propose a CMEA with the compound distance-based CHT, denoted as CD-CMEA, for solving CMOPs. Similar to the existing CMEAs, the proposed CD-CMEA also uses a feasibility-oriented archive to record the best solutions for CMOPs. Besides, to balance local search and global search in the evolutionary process, we use a dynamic mixing strategy to select individuals from the archive and from the current population, evenly scattered in the valuable areas, to generate offspring. A series of numerical experiments on several benchmark problems are carried out to test the performance of the proposed CD-CMEA, and the experimental results show that CD-CMEA performs better than the state-of-the-art CMEAs in dealing with different CMOPs.

The main contributions of this paper can be summarized as follows:
1. We propose a novel compound distance-based CHT that guides the population to explore the valuable areas evenly. It can effectively prevent the population from being stuck in local areas.
2. We introduce a dynamic mixing strategy to guide the generation of offspring in each generation. This helps to make full use of the information of the current population and the archive to balance local search and global search in the evolution process.
3. We carry out a series of numerical experiments to verify the effectiveness of the proposed algorithm.

The rest of this paper is organized as follows.
Section 2 describes the compound distance-based CHT in detail. In Section 3, we embed the compound distance-based CHT into an evolutionary algorithm for dealing with CMOPs. Section 4 presents a series of numerical experiments comparing the performance of the proposed algorithm with existing CMEAs. Section 5 concludes this paper.

Compound distance-based constraint handling technique

In this section, we describe in detail the proposed compound distance-based CHT, which aims to guide the population to explore the valuable areas that are not dominated by feasible solutions and is quite different from the existing CHTs. Let the objective vector of any individual x_q in Ω be F_q = (f_1(x_q), ..., f_m(x_q))^T, and let f_i^min (i = 1, ..., m) be the minimum value that has been found on the i-th objective. To facilitate the analysis, we shift the objective space to R^m_+ according to the transformations in Eqs. (2.1)-(2.2) and introduce two auxiliary symbols. Besides, before introducing the proposed compound distance-based CHT, we first give a lemma, which is helpful for proving the properties of the compound distance involved; its proof proceeds through inequalities (2.3)-(2.10) and relies on the Sequence Inequality.

Compound distance

When dealing with unconstrained multi-objective optimization problems, the angle is widely used as a diversity measure and has shown its effectiveness in guiding the population to evenly search for different segments of the PF [33][34][35][36]. However, it cannot accurately reflect the relative distance between different individuals. For example, point A = (1, 1) is close to point B = (2, 2) and far from point C = (100, 100), but the angular distance between A and B is equal to that between A and C. This is not conducive to guiding the population to search different regions evenly when dealing with CMOPs. In order to take advantage of the angle in guiding the population to search for different segments of the CPF, and to overcome its deficiency in measuring the relative distance between individuals, we introduce a new compound distance in the objective space R^m_+ of CMOPs, which takes into account both the angular distance and the relative position between the considered individuals and defines the distance d_p(F_1, F_2) between any two points F_1, F_2 ∈ R^m_+ as given in Eq (2.11), where ⟨∗, ∗⟩ represents the angle between the two vectors and k is a constant greater than 0. For the three points A, B, C mentioned above, it is easy to verify that d_p(A, B) < d_p(A, C). This shows that the compound distance d_p(∗, ∗) can effectively evaluate the crowding degree of individuals.
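The limitation of the purely angular measure for the points A, B and C mentioned above can be checked numerically. The sketch below is our own illustration, not code from the paper: the angle between A and B equals the angle between A and C, while their Euclidean separations differ greatly. The compound distance of Eq (2.11) is designed to remove exactly this ambiguity, although its exact expression is not reproduced here.

```python
import numpy as np

def angle(u, v):
    """Angle (radians) between two objective vectors in R^m_+."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

A, B, C = np.array([1.0, 1.0]), np.array([2.0, 2.0]), np.array([100.0, 100.0])

# B is near A and C is far from A, yet all three lie on the same ray from the
# origin, so the angular distance alone cannot separate them.
print(angle(A, B), angle(A, C))                      # both 0.0
print(np.linalg.norm(A - B), np.linalg.norm(A - C))  # ~1.41 vs ~140.0
```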
As a distance function in the objective space, d_p(∗, ∗) needs to satisfy four necessary properties, which are proved below.

Proof. For all F_1, F_2 ∈ R^m_+, according to the definition of d_p(∗, ∗) (see Eq (2.11)), we can easily get that d_p(F_1, F_2) ≥ 0.

Proof. For any F_1, F_2, F_3 ∈ R^m_+, it is obvious that inequality (2.15) holds. Besides, according to the Minkowski Inequality we obtain Eq (2.16), which, by using Lemma 1, can be further transformed into Eq (2.17). So, it can be inferred from Eq (2.15) and Eq (2.17) that (2.18) holds.

Compound distance-based CHT

Using the compound distance d_p(∗, ∗), we define the search diameter R_p(F_i|F) of a point F_i in F ⊂ R^m_+ as given in Eq (2.19). Since Properties 1 to 4 have shown that d_p(∗, ∗) is a measure in the space R^m_+, a larger value of d_p(F_i, F_j) means that the distance between the two individuals F_i, F_j is larger. So, a point F_i with a large search diameter R_p(F_i|F) makes a great contribution to F in searching different areas. Consider the point set F = {A, B, C} ⊂ R^2_+ mentioned in Subsection 2.1 as an example. The search diameters of the three points are easy to compute, and their ordering is consistent with the distribution of points A, B, C in the space R^2_+. Therefore, in the process of reducing the size of the considered population, the proposed compound distance-based CHT preferentially removes the individuals with the smallest search diameters and retains the individuals with larger search diameters.

Suppose that the size of the considered population is N*, which exceeds its limited size N. The pseudo code for pruning it using the proposed compound distance-based CHT is shown in Algorithm 1. Specifically, it first calculates the search diameter of each solution by Eq (2.19) and determines the number of deleted solutions according to DN = N* − N. After that, the DN worst solutions with the smallest search diameters are removed from the considered population one by one (Lines 3 to 7 in Algorithm 1). Considering that the removal of the worst individual may affect the search diameters of some remaining individuals, and would then make the determination of the next worst solution inaccurate, we update the search diameters of all remaining individuals in Line 6. Finally, the N retained individuals in the considered population are output as the result.

Algorithm 1: Pruning steps using the proposed compound distance-based CHT
Input: The considered population X of size N* and the limited size N;
Output: The refined population X with size N;
1 Calculate the search diameter of each solution x in X by Eq (2.19);
2 Determine the number of solutions that need to be deleted: DN = N* − N;
3 for i = 1 to DN do
4   Determine the worst solution x in X with the smallest search diameter;
5   Remove x from X;
6   Update the search diameters of the remaining individuals in X by Eq (2.19);
7 end
8 return Population X.

It is easy to see that, in the process of determining the best individuals in the considered population, the proposed compound distance-based CHT does not take into account their performance in optimizing constraints and objectives. This is the main difference between the proposed compound distance-based CHT and the existing CHTs. In this way, the proposed compound distance-based CHT can avoid being lured into local areas by deceptive constraints and can guide the working population to search for CPFs more effectively.
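A minimal sketch of the pruning loop of Algorithm 1 is given below. Since Eq (2.19) and the exact compound distance d_p are not reproduced in the text, the sketch assumes a nearest-neighbour search diameter and uses the Euclidean distance as a stand-in; both are placeholders for the paper's definitions.

```python
import numpy as np

def prune(objs, n_keep, dist):
    """Algorithm 1 in sketch form: repeatedly drop the point whose search
    diameter (here: distance to its nearest neighbour under `dist`) is the
    smallest, recomputing the diameters after every removal."""
    pts = [np.asarray(p, float) for p in objs]
    idx = list(range(len(pts)))
    while len(idx) > n_keep:
        diameters = [min(dist(pts[i], pts[j]) for j in idx if j != i) for i in idx]
        idx.pop(int(np.argmin(diameters)))   # remove the most crowded point
    return idx  # indices of the retained individuals

# Illustrative run with the Euclidean distance standing in for d_p: the
# crowded points near (1, 1) are removed first.
objs = [[1.0, 1.0], [1.05, 1.0], [2.0, 2.0], [5.0, 0.5], [0.5, 5.0]]
print(prune(objs, 3, dist=lambda a, b: float(np.linalg.norm(a - b))))
```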
Proposed algorithm framework

In this section, we embed the compound distance-based CHT into an evolutionary algorithm and propose a new CMEA with the compound distance-based CHT (denoted as CD-CMEA) to solve CMOPs. The details of the proposed CD-CMEA are described below.

Update mechanism of the current population

Due to the emergence of new individuals, the size N* of the working population X_t in the t-th generation will exceed the preset size N. In order to effectively guide X_t to search for CPFs, we use the compound distance-based CHT to select the N individuals of X_t that perform best in uniformly searching the valuable areas to form the current population X_{t+1} of the next generation. Specifically, we first identify the set V_Xt of individuals located in the valuable areas by Eq (3.5). If the size of V_Xt is smaller than N, i.e., |V_Xt| < N, we initialize the current population of the next generation X_{t+1} as V_Xt and then select the (N − |V_Xt|) remaining individuals closest to the valuable areas determined by V_Xt to fill X_{t+1}. Otherwise, we use Algorithm 1 to select the N individuals of V_Xt that perform best in uniformly searching the valuable areas to form X_{t+1}. We summarize the above steps in pseudo-code form in Algorithm 2.

Algorithm 2: Update mechanism of the current population
Input: The current working population X_t and preset size N;
Output: The population X_{t+1} for the next generation;
1 Identify the set V_Xt of valuable individuals by Eq (3.5);
2 if |V_Xt| < N then
3   Let X_{t+1} = V_Xt;
4   Select the (N − |V_Xt|) remaining individuals closest to the valuable areas;
5   Add them to X_{t+1};
6 else
7   Use Algorithm 1 to select N individuals of V_Xt to form X_{t+1};
8 end
9 return X_{t+1}.

Archive

In the proposed CD-CMEA, the best solutions found for the CMOP are recorded in an archive, which prefers feasible solutions. The pseudo code for forming the archive is shown in Algorithm 3. First, the feasible solutions of the working population are used to initialize the archive. Then, if the number of individuals in the archive is smaller than the preset size N, the infeasible solutions with the smallest constraint violation values are selected to fill the archive; otherwise, we use the environmental selection of PREA [37,38], which has shown excellent performance in dealing with different unconstrained MOPs and won the championship in the IEEE WCCI 2018 Competition of Many-Objective Optimization, to prune the archive.

Algorithm 3: Update mechanism of the archive
Input: The current working population and preset size N;
Output: The archive Λ;
1 Initialize Λ with the feasible solutions of the working population;
2 if |Λ| < N then
3   Fill Λ with the infeasible solutions having the smallest constraint violation values;
4 else
5   Reduce the size of Λ to N by the environmental selection of PREA [37];
6 end
7 return Archive Λ.

The specific steps of the environmental selection of PREA are as follows:
1. Shift the archive Λ to R^m_+, where f_i^Fmin is the minimum value found by feasible solutions on the objective f_i.
2. Calculate the fitness value Fitness(x_i|Λ) of each individual x_i ∈ Λ.
3. Determine the set Λ' of nondominated solutions in Λ. If |Λ'| ≤ N, terminate the calculation and let Λ retain only the N individuals with the best values of Fitness(x|Λ). Otherwise, let Λ = Λ' and mark the N solutions with the best fitness values; these solutions are then used to form a promising region and to identify the valuable candidates Λ_C. After that, the individuals with the minimum crowding distance in Λ are eliminated one by one until the size of Λ is reduced to N. In each iteration, the one with the smaller fitness value of the two nearest individuals is removed from Λ. Therein, a parallel distance is used to measure the distance between any two solutions in Λ.

Offspring production

To balance local search and global search in the evolutionary process, we introduce a dynamic mixing strategy to randomly select individuals from the current population and the archive to form the mating pool of size N. Therein, the number N_t of individuals selected from the current population of the t-th generation is calculated by Eq (3.9), where T is the maximum number of evolutionary generations and the floor function ⌊∗⌋ returns the largest integer not exceeding its argument.
The number of individuals of the mating pool selected from the archive is (N − N_t). According to Eq (3.9), it is easy to see that, as the optimization process deepens, that is, as the value of t increases, the number N_t of individuals from the current population in the mating pool gradually decreases from the initial N to 0. This helps the CMEA to make full use of the information of individuals evenly scattered in the valuable areas for global search in the early evolutionary stage, and to focus on local search using the archive in the late evolutionary stage. For each individual x in the mating pool, its two spouses have a 0.7 probability of being the individuals with the smallest angles to x in the objective space, and a 0.3 probability of being randomly selected from the mating pool. After that, we apply the differential evolution operator (DE) [39] and polynomial mutation (PM) [40] to these matched individuals to produce offspring (a small sketch of this step is given at the end of this section). The specific steps for generating offspring are shown in Algorithm 4.

Framework of the proposed CD-CMEA

The flow chart of the proposed CD-CMEA is given in Figure 1, and its pseudo-code is shown in Algorithm 5. Specifically, N individuals are randomly generated to initialize the current population X_0 and the archive Λ. After that, the following steps are repeated until the stopping criteria are met: using Algorithm 4 to produce offspring Y, updating the archive Λ by Algorithm 3, and using Algorithm 2 to select the best N solutions from X_t, Λ and Y to form the next-generation population X_{t+1}. Finally, the archive Λ is output as the result.

Algorithm 5: Framework of the proposed CD-CMEA
Output: The archive Λ;
1 Randomly generate the initial population X_0 of size N and let Λ = X_0, t = 0;
2 while the stopping criteria are not met do
3   Use Algorithm 4 to produce offspring Y;
4   Merge X_t, Λ and Y to form a big working population Z_t;
5   Select N solutions from Z_t to update Λ by Algorithm 3;
6   Determine the current population X_{t+1} of size N for the next generation by Algorithm 2;
7   Let t = t + 1;
8 end
9 return The archive Λ.

For simplicity, in the proposed CD-CMEA, we set the parameters k and p of the compound distance (see Eq (2.11)) to π/(2·m^(1/p)) and 1, respectively. According to the framework of the proposed CD-CMEA, it can easily be found that the complexity of CD-CMEA mainly lies in two processes: 1) calculating the search diameter of each individual by Eq (2.19); and 2) updating the archive by the environmental selection of PREA. As the complexity of calculating the search diameter of an individual is O(mN log N), the complexity of calculating all individuals in the current population is O(mN^2 log N). Also, the number of objectives m is generally smaller than the population size N, and the complexity of PREA is O(N^3) [37]. Therefore, the complexity of the proposed CD-CMEA is O(N^3).
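The offspring-generation step described above can be sketched as follows. The linear schedule for N_t and the helper names are our assumptions (the text only states that N_t decreases from N to 0 according to Eq (3.9)); the 0.7/0.3 spouse-selection probabilities follow the description given above.

```python
import numpy as np

rng = np.random.default_rng(0)

def mating_pool(population, archive, t, T, N):
    """Dynamic mixing: N_t parents come from the current population and
    N - N_t from the archive. A linear schedule N_t = floor(N*(1 - t/T)) is
    assumed here as one simple schedule decreasing from N to 0."""
    n_t = int(np.floor(N * (1.0 - t / T)))
    pool = [population[i] for i in rng.integers(0, len(population), n_t)]
    pool += [archive[i] for i in rng.integers(0, len(archive), N - n_t)]
    return pool

def spouses(i, pool_objs):
    """Pick two mates for individual i: with probability 0.7 the two pool
    members with the smallest angles to i in objective space, otherwise two
    random pool members."""
    F = np.asarray(pool_objs, float)
    others = [j for j in range(len(F)) if j != i]
    if rng.random() < 0.7:
        cos = F[others] @ F[i] / (np.linalg.norm(F[others], axis=1)
                                  * np.linalg.norm(F[i]) + 1e-12)
        order = np.argsort(-cos)          # largest cosine = smallest angle
        return [others[k] for k in order[:2]]
    return list(rng.choice(others, size=2, replace=False))

pool = mating_pool(list(range(100)), list(range(100, 200)), t=25, T=100, N=8)
print(len(pool), pool)
print(spouses(0, rng.random((8, 2))))
```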
Numerical experiments and analysis

In this section, we have carried out a series of numerical experiments to investigate the performance of the proposed algorithm CD-CMEA, which is compared with that of other state-of-the-art CMEAs. The compared algorithms are briefly described as follows.

1. CMOEA/D-DE-CDP [22] is a decomposition-based differential evolution algorithm. It uses the CDP to determine the best solutions in the current population. Specifically, in the process of evaluating the quality of candidate solutions, the solutions with smaller constraint violation values are considered to be better, and when the constraint violation values of the candidate solutions are all the same, the solutions with the best fitness values under the Tchebycheff evaluation are regarded as the optimal solutions.

2. CMOEA/D-DE-SR [22] is an improved version of CMOEA/D-DE-CDP and uses a stochastic ranking approach (SR) to balance the constraints and objectives. Specifically, when the candidate solutions are both infeasible and the randomly generated number is smaller than the preset small probability value p_f, it ignores the constraints and determines the optimal solutions according to the performance of these candidate individuals in optimizing the objectives; otherwise, it also uses the CDP to select the best solutions.

3. C-TAEA [31] is a two-archive CMEA that simultaneously maintains two collaborative archives, i.e., the convergence-oriented archive CA and the diversity-oriented archive DA. Therein, CA prefers feasible individuals and provides pressure to push the population to search for the CPF, while DA, evaluated by optimizing only the objectives, focuses on exploring the areas that have not been exploited by CA.

4. PPS [29] is a CMEA that divides the search process into two different stages: the push stage and the pull stage. In the push stage, constraints are ignored, and the population is evolved by optimizing only the objectives. In the pull stage, it uses an improved ε-CDP to pull the population to search for the CPF.

5. ToP [44] is a two-phase CMEA for CMOPs. In the first phase, it transforms the original CMOP into a constrained single-objective optimization problem, which is then solved by a constrained single-objective evolutionary algorithm. In the second phase, it focuses on the promising area discovered in the first phase and uses NSGA-II with the CDP to find the CPF.

6. AnD [23] is a CMEA that uses an angle-based selection strategy, a shift-based density estimation strategy, and the CDP. In the process of comparing individual quality, those with smaller constraint violations are selected for the next generation.

Considering that the released versions of these compared algorithms are likely their best forms, we set their private parameters to be consistent with those in their published papers. For the sake of fairness, in terms of public parameters, the population size and the maximum number of function evaluations of all algorithms on each test problem are set to 100 and 2 × 10^5, respectively. The number of independent runs of each algorithm on each test problem is set to 20.

Two widely used metrics, i.e., the inverted generational distance (IGD) [45] and the hypervolume (HV) [46], are adopted in this paper to evaluate the performance of the seven considered algorithms on each test problem. These two metrics are defined as follows.

IGD-metric: Let P be a set of uniformly distributed points on the CPF, and let S be the final set of feasible solutions obtained by an algorithm in the objective space; then the IGD of S to P is calculated as

IGD(S|P) = (1/|P|) Σ_{p∈P} min_{s∈S} ||p − s||.

Obviously, the smaller IGD(S|P) is, the better S approximates the CPF. For each test instance in this paper, we use the method of Das and Dennis [47] to generate the uniformly distributed point set P with a size of 10,000.

HV-metric: With a reference point z dominated by all solutions in S, HV(S|z) = VOL(∪_{x∈S} [f_1(x), z_1] × ··· × [f_m(x), z_m]), where VOL(∗) is the Lebesgue measure. It is easy to see that a large value of HV(S|z) means good quality of S in the objective space.
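A small sketch of the IGD computation, as defined above, is given below; the toy reference set is ours, and the HV computation is omitted since it is more involved.

```python
import numpy as np

def igd(S, P):
    """Inverted generational distance: for each reference point in P (on the
    CPF), take the distance to the closest obtained solution in S; average."""
    S, P = np.asarray(S, float), np.asarray(P, float)
    d = np.linalg.norm(P[:, None, :] - S[None, :, :], axis=2)   # |P| x |S|
    return float(d.min(axis=1).mean())

# Toy check on the linear front f1 + f2 = 1: a set lying on the front scores
# lower (better) than the same set shifted away from it.
P = np.array([[x, 1.0 - x] for x in np.linspace(0.0, 1.0, 101)])
on_front = np.array([[x, 1.0 - x] for x in np.linspace(0.0, 1.0, 11)])
shifted = on_front + 0.1
print(igd(on_front, P), igd(shifted, P))
```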
To assess the statistical significance of the comparisons, we use the Wilcoxon Rank Sum Test with a significance level of 0.05 to analyze the performance differences between the proposed CD-CMEA and the other six existing CMEAs on each test problem. Therein, three symbols "+", "=" and "-" are introduced, which indicate that the final results obtained by the compared algorithms are significantly better than, equal to, and worse than those of the proposed CD-CMEA, respectively.

Experimental results and analysis

The comparison results among the seven considered CMEAs on each test instance under the IGD and HV metrics are listed in Tables 1 and 2. In these tables, we highlight the best algorithm for each test problem in bold with a gray background. Besides, in order to visually show the excellent performance of the proposed CD-CMEA, we take the DASCMOP series as examples and plot the feasible solutions obtained by the seven compared algorithms in Figures 2 and 3. As can be seen from Tables 1 and 2, compared with the other six CMEAs, the proposed CD-CMEA performs better on the test instances and has better robustness in dealing with different CMOPs. This is to be expected. Next, we analyze the reasons for this in detail.

When dealing with CTP1-5,7, C2-DTLZ2, C3-DTLZ4 and DC1-DTLZ1, we can see that there is little difference in the performance of the seven algorithms. The main reason for this is that the constraints of these instances are very simple and actually do not pose any challenge for the seven algorithms in finding the CPFs. So, the proposed CD-CMEA and the other six algorithms can easily find the CPFs of these test instances, and the slight difference in performance among them is caused by their diversity maintenance mechanisms. Since these problems have irregular PFs, which are captured more accurately by the PREA adopted in CD-CMEA than by the reference point-based and angle-based diversity maintenance mechanisms of the other algorithms [37], it is not surprising that the proposed CD-CMEA outperforms CMOEA/D-DE-CDP, CMOEA/D-DE-SR, C-TAEA, PPS, and AnD on them. Besides, Tables 1 and 2 also show that ToP performs better than the proposed CD-CMEA in solving CTP1-5,7. That is not surprising, because the update mechanism of ToP is NSGA-II, which has excellent performance in capturing irregular CPFs in a two-dimensional objective space. As NSGA-II performs poorly in selecting individuals with good diversity in spaces with more than two dimensions, the performance of ToP on the three-objective instances C2-DTLZ2, C3-DTLZ4 and DC1-DTLZ1 is poor.

When it comes to CTP6,8, whose CPFs lie below the infeasible regions in the objective space, the feasibility-led CMEAs, i.e., CMOEA/D-DE-CDP, CMOEA/D-DE-SR, ToP and AnD, are easily trapped in the local areas and then fail to find the entire CPFs. As the individuals of the working population in the proposed CD-CMEA are forced to distribute evenly in the valuable areas according to their search diameters, they can easily cross the infeasible regions of CTP6,8 and find the CPFs. So, it is natural that the performance of CD-CMEA is better than that of CMOEA/D-DE-CDP, CMOEA/D-DE-SR, ToP and AnD on CTP6,8. Due to the objective-led CHT that ignores the constraints and optimizes only the objectives, C-TAEA and PPS can also effectively force the working population to cross the infeasible region in front of the CPFs and find the CPFs of CTP6,8.
Nevertheless, because the diversity maintenance mechanism of CD-CMEA performs better than that of C-TAEA and PPS in capturing the irregular CPFs of CTP6,8, CD-CMEA is better than C-TAEA and PPS in dealing with CTP6,8.

For DASCMOP1-9, MW1-14, DC2-DTLZ1 and DC3-DTLZ1, it can be seen from Tables 1 and 2 that the proposed CD-CMEA is superior to CMOEA/D-DE-CDP, CMOEA/D-DE-SR, ToP and AnD. The reason for this is straightforward. As there are complex infeasible regions in front of the CPFs of DASCMOP1-9, MW1-14, DC2-DTLZ1 and DC3-DTLZ1 in the objective space, the feasibility-led CHT in CMOEA/D-DE-CDP, CMOEA/D-DE-SR, ToP and AnD fails to guide the working population to cross these infeasible regions, and this leads to the poor performance of CMOEA/D-DE-CDP, CMOEA/D-DE-SR, ToP and AnD on DASCMOP1-9, MW1-14, DC2-DTLZ1 and DC3-DTLZ1. The three algorithms CD-CMEA, C-TAEA and PPS can effectively cross the complex infeasible regions of these instances and find the CPFs. However, because some instances of the DASCMOP series are unbalanced problems, that is, the probability of populations appearing in different regions of the objective space differs greatly, the objective-led CHT in C-TAEA and PPS can easily mislead the working population toward local feasible regions and miss other feasible regions containing CPF fragments. So, C-TAEA and PPS fail to find the entire CPFs of some instances of DASCMOP1-9. As the individuals of the working population in the proposed CD-CMEA are forced to distribute evenly in the valuable areas according to their search diameters, they can easily find the entire CPFs of DASCMOP1-9. In addition, the CPFs of DASCMOP1-9, MW1-14, DC2-DTLZ1 and DC3-DTLZ1 are irregular, and CD-CMEA performs better than C-TAEA and PPS in capturing them. Therefore, it is not surprising that CD-CMEA is superior to C-TAEA and PPS on DASCMOP1-9, MW1-14, DC2-DTLZ1 and DC3-DTLZ1.

In summary, compared with the six state-of-the-art CMEAs, i.e., CMOEA/D-DE-CDP, CMOEA/D-DE-SR, C-TAEA, PPS, ToP and AnD, the proposed CD-CMEA performs better and has better robustness in dealing with different CMOPs. In other words, the proposed CD-CMEA is a promising algorithm for CMOPs.

Further investigation

In order to investigate the effectiveness of the proposed compound distance, we take DC2-DTLZ1 as an example and compare the archives obtained by the proposed algorithm using the compound distance and the Euclidean distance, respectively. The comparison results are shown in Figure 4. It can be seen from Figure 4 that the Euclidean distance prefers solutions far from the origin in the objective space and fails to guide the population to cross the infeasible regions in front of the CPF. On the contrary, the proposed compound distance can effectively guide the population to search different areas of the objective space evenly, whether they are close to or far from the origin. This allows the population guided by the proposed compound distance to easily cross the infeasible regions in front of the CPF and then find the CPF of DC2-DTLZ1. Therefore, compared with the Euclidean distance, the proposed compound distance is more beneficial for guiding the population to explore the objective space evenly when solving CMOPs.

Conclusions and future work

In this paper, we introduce a compound distance into the objective space to measure an individual's search diameter according to the angle and the ℓp-norm.
After that, a compound distance-based CHT, in which the individuals with larger search diameter in the valuable areas are given priority to be preserved, is proposed to force the working population to fully explore the valuable areas that are not dominated by feasible solutions. Embedding the compound distance-based CHT in the evolutionary algorithm, we proposed a new method named CD-CMEA to deal with CMOPs. Numerical experimental results show that the proposed CD-CMEA performs better than the other six existing state-of-the-art CMEAs in dealing with different CMOPs. So, CD-CMEA is a promising method for solving CMOPs. Considering that the main characteristic of the proposed CD-CMEA is to force the working population to explore the valuable areas uniformly, we believe that the CHTs with similar properties should also perform well in dealing with CMOPs. Thus, in the future, we will further study the other forms of CHTs that can force the working population to explore the valuable areas evenly.
7,505.4
2021-01-01T00:00:00.000
[ "Computer Science" ]
Active Vibration Control of a Fluid-Conveying Functionally Graded Cylindrical Shell using Piezoelectric Material

Active vibration control of a smart FG (functionally graded) cylindrical shell conveying fluid in a thermal environment is studied theoretically by using a laminated piezoelectric actuator. A velocity feedback control law is implemented to activate the piezoelectric actuator. Considering the electric-thermo-fluid-structure interaction effect, a nonlinear dynamic model of the smart fluid-conveying FG cylindrical shell is developed based on Hamilton's principle and the von Karman-type geometrically nonlinear relationship. The inviscid, incompressible, isentropic and irrotational fluid is coupled into the governing equations using the linearized potential theory. The Galerkin method is used to obtain the nonlinear governing equations of motion of the coupled system. The multiple time scales approach is applied to solve the resulting governing equations for analysing the nonlinear dynamic characteristics of the coupled system. The influence of the fluid flow velocity, the feedback control gains of the piezoelectric voltage, the external excitation and the material properties of the FGM on the frequency-response curves of the system is investigated. The results indicate that the piezoelectric voltage is an effective controlling parameter for vibration control of the system, and the flow velocity can significantly affect the vibration amplitude and nonlinearity of the coupled system.

Introduction

Cylindrical shells conveying fluid have wide applications in various engineering fields, particularly in aerospace, the marine industry and biomechanical applications. These structures are prone to undergoing undesirable vibration and noise due to the coupled effect of fluid-structure interactions, which not only degrade the system performance but also influence the structural integrity and reliability. In order to ensure the safety of such systems and obtain accurate control, piezoelectric materials are applied in fluid-conveying cylindrical shells to control the coupled vibration. Functionally graded materials (FGM) have received considerable attention in engineering communities due to their ability to withstand severe high temperatures while maintaining structural integrity. Piezoelectric FGM structures combine the advantages of FGMs and piezoelectric materials. A number of works have studied the active control of nonlinear vibration of single structures such as beams, plates and shells without considering fluid effects. Li et al. [1] investigated the active control of a beam subjected to a harmonic excitation using piezoelectric material. Vedat [2] studied the active control of the nonlinear vibration of an FGM plate under random excitation. Using numerical simulations and experiments, Zhang et al. [3] studied the active vibration control of a cylindrical shell with a laminated PVDF actuator. Considering fluid effects, some attention has been paid to exploring the active control of coupled vibration due to fluid-structure interaction by numerical, experimental and finite element methods. Shigeki et al. [4] employed a numerical method to study active control using piezoelectric materials for fluid-structure interaction problems. Woo et al. [5] experimentally investigated the active vibration control performance and modal characteristics of a cylindrical shell in air and water.
Ray and Reddy [6] analysed the active control performance of a fluid-conveying cylindrical shell by using the finite element method. In the current study, considering the electric-thermo-fluid-structure interaction effect, the nonlinear vibration of a smart FG cylindrical shell conveying fluid in a thermal environment is studied theoretically, and the active vibration control of the coupled system is investigated by using a laminated piezoelectric actuator.

Model description and nonlinear dynamic modeling

A smart FG cylindrical shell conveying fluid with length L, thickness h and mean radius R is considered herein, as shown in Fig. 1. The piezoelectric actuator layer with thickness h_p is perfectly bonded onto the outer surface of the FG cylindrical shell and is used to control the coupled vibration of the fluid-conveying FG shell. It is polarized along the thickness direction. The fluid, with density ρ_f, flows in the x-direction of the cylindrical shell with a uniform flow velocity U_f. The FGM shell is subjected to a radial harmonic excitation q(x, θ, t) with excitation amplitude q_0 and excitation frequency Ω. The system is defined in a coordinate system (x, θ, z) as shown in Fig. 1, where x, θ and z are the axial, circumferential and radial coordinates, respectively. The corresponding displacement components are denoted by u, v and w, respectively.

Theoretical formulations

The FGM cylindrical shell is composed of ceramics and metals. In a thermal environment, the effective material properties of the FGM cylindrical shell, P_eff, depend on both the position through the shell thickness and the temperature, as expressed by Eq. (1), where η denotes the volume fraction exponent and P_m(T) and P_c(T) denote the properties of the metal and ceramic constituents of the FGM shell, respectively. Each constituent property can be defined as a function of temperature as

P(T) = P_0 (P_-1 T^-1 + 1 + P_1 T + P_2 T^2 + P_3 T^3),    (2)

where P_0, P_-1, P_1, P_2, P_3 are the temperature coefficients, which are unique to the constituent materials. The temperature distribution of the FGM cylindrical shell is assumed to vary linearly along the thickness.

The FGM cylindrical shell is modeled based on Donnell's shell theory, taking the von Karman geometrical nonlinearity into account. The strain-displacement relations and the stress-strain relations of the FGM cylindrical shell, which include temperature effects through the thermal expansion coefficients, can then be written out; therein, the effective mass density ρ_eff, the effective Young's modulus E_eff, the effective Poisson's ratio υ_eff and the thermal conductivity α_eff vary according to Eqs. (1)-(2).

For the piezoelectric actuator layer, the constitutive equations coupling the elastic, thermal and electric fields in the piezoelectric medium can be expressed with the layer subjected to a temperature variation ΔT_2 = T_p(z) − T_0 (T_0 = 300 K). The temperature distribution of the piezoelectric layer, T_p(z), varies linearly along the thickness. E_i and D_i are the electric field intensities and electric displacements. C_ij, e_ij and ∈_ij represent the elastic, piezoelectric and dielectric constants, respectively. α_11e and α_22e are the thermal expansion coefficients of the piezoelectric layer in the x and θ directions. A layerwise quadratic distribution of the electric potential φ is considered, where z_p is the local thickness coordinate with respect to the piezoelectric layer mid-plane, V_E is the electric voltage applied to the piezoelectric layer, and ψ is the electric potential induced by elastic deformation in the piezoelectric element.
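To make the effective-property description above concrete, the sketch below evaluates a temperature-dependent constituent property and a through-thickness effective property in the standard power-law forms typically used for such FGM models; the grading form for P_eff and all numerical coefficients are our assumptions and are not the values used in the paper.

```python
import numpy as np

def constituent_property(T, P0, Pm1, P1, P2, P3):
    """Temperature-dependent constituent property in the usual form
    P(T) = P0*(P_-1/T + 1 + P1*T + P2*T**2 + P3*T**3)."""
    return P0 * (Pm1 / T + 1.0 + P1 * T + P2 * T**2 + P3 * T**3)

def effective_property(Pc, Pm, z, h, eta):
    """Power-law rule of mixtures through the thickness, assuming the common
    form P_eff(z) = (Pc - Pm)*(z/h + 1/2)**eta + Pm with z in [-h/2, h/2]."""
    Vc = (z / h + 0.5) ** eta
    return (Pc - Pm) * Vc + Pm

# Hypothetical coefficients (not the paper's data): Young's moduli at T = 300 K,
# graded through a shell of thickness h = 0.003 with exponent eta = 2.
Ec = constituent_property(300.0, 168e9, 0.0, -1.0e-4, 0.0, 0.0)  # "ceramic"
Em = constituent_property(300.0, 106e9, 0.0, -2.0e-4, 0.0, 0.0)  # "metal"
for z in np.linspace(-0.0015, 0.0015, 5):
    E = effective_property(Ec, Em, z, 0.003, 2.0)
    print(f"z = {z:+.4f}  E_eff = {E:.3e}")
```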
The electric field intensity through the thickness follows from the potential as E_z = −∂φ/∂z.

Equations of motion of the smart FG cylindrical shell conveying fluid

Hamilton's principle is utilized to develop the governing equations of motion of the smart FG cylindrical shell conveying fluid subjected to an external radial harmonic excitation, where T and U denote the total kinetic energy and potential energy of the system, W_exc is the work done by the external excitation, and W_f is the work done by the fluid perturbation pressure. The fluid in the FG cylindrical shell is considered to be inviscid, incompressible, isentropic and irrotational. The equations of the smart piezoelectric FG cylindrical shell-fluid system are coupled via the work done by the fluid perturbation pressure, which is related to the shell motions and acts in the radial direction of the FGM cylindrical shell. The fluid-structure interaction can be described by the linear potential flow theory. The velocity potential, Ψ, can be defined as the sum of two parts: a steady part due to the mean flow velocity U_f in the x direction and an unsteady part Φ related to the shell motion.

Active control of nonlinear vibration for the coupled system

An analytical approach is utilized to investigate the nonlinear vibrations of the coupled system. The simply supported boundary condition is considered at the two ends of the cylindrical shell. The shell displacements are discretized by using trigonometric expansions, Eq. (14). Substituting Eq. (14) into Eq. (13) and then employing the Galerkin method, the nonlinear ordinary differential equations can be obtained. By omitting the in-plane inertias, the transverse motion of the coupled system, Eq. (15), can be obtained, where f(V_E) is the control force of the piezoelectric actuator. The voltage V_E supplied to the piezoelectric actuator layer is considered to be negatively proportional to the velocity of a point on the outer surface of the shell, as expressed by Eq. (16), where G_v is the control gain and C is the transform vector used to transfer the coordinates of the feedback measurement point (x_p, θ_p) from modal coordinates to cylindrical coordinates. Substituting Eq. (16) into Eq. (15), the transverse motion of the coupled system can be rewritten accordingly.

Results and discussions

In order to verify the validity of the present method, the natural frequencies of a homogeneous piezoelectric cylindrical shell made of PZT-4 without conveying fluid are compared with the existing analytical results [7]. The comparison results are shown in Table 1, and good agreement can be observed. The effects of various parameters on the frequency-response curves of the primary resonance of the smart fluid-conveying FG cylindrical shell, bonded with a laminated piezoelectric actuator and subjected to the harmonic external excitation, are shown in Fig. 2. The FGM cylindrical shell layer is ZrO2 (ceramic material)/Ti-6Al-4V (metal material). The outer piezoelectric layer is PZT-4. The parameters used are L = 0.3, R = 0.6, h = 0.003, h_p = h/3, η = 2. The feedback measurement point is selected as (x_p, θ_p) = (L/2, 0). The effect of the external excitation amplitude on the amplitude-frequency response of the system is plotted in Fig. 2(a). It can be seen that the resonant region and the system nonlinearity increase with increasing excitation amplitude, and the vibration amplitude increases as the excitation amplitude increases. Fig. 2(b) shows the effect of the feedback control gains of the piezoelectric voltage on the amplitude-frequency response of the system.
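The effect of the velocity feedback law can be illustrated on a single-mode toy model. The sketch below integrates a Duffing-type oscillator with an added control force proportional to the negative velocity; it is only a qualitative stand-in for the reduced equation of motion, with all parameter values chosen arbitrarily, and it shows the steady-state amplitude dropping as the gain grows.

```python
import numpy as np

def steady_amplitude(gain, omega=1.0, zeta=0.01, gamma=0.1, q0=0.05,
                     Omega=1.0, dt=1e-3, t_end=400.0):
    """Duffing-type single-mode oscillator with velocity feedback -gain*w'.
    Returns the largest |w| observed after the transients have decayed."""
    w, wdot, amp = 0.0, 0.0, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        acc = (q0 * np.cos(Omega * t) - (2.0 * zeta * omega + gain) * wdot
               - omega**2 * w - gamma * w**3)
        wdot += acc * dt          # semi-implicit Euler step
        w += wdot * dt
        if t > 0.75 * t_end:
            amp = max(amp, abs(w))
    return amp

for g in (0.0, 0.05, 0.1):
    print(f"gain = {g:.2f}  steady-state amplitude = {steady_amplitude(g):.3f}")
```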
It can be observed that the vibration amplitudes decrease with increasing feedback control gains. This indicates that larger control gains applied to the piezoelectric actuator can better suppress the system vibration. Fig. 2(c) shows the effect of the fluid velocity on the amplitude-frequency response of the system. It can be seen that the vibration amplitude increases as the flow velocity increases. The reason for this change is that the fluid perturbation pressure P_p at the shell wall increases as the flow velocity increases. This leads to a decrease in the system stiffness and an increase in the vibration amplitude. Fig. 2(d) describes the effect of the material properties of the FGM cylindrical shell on the amplitude-frequency response of the system. As the volume fraction exponent increases, the resonance amplitudes decrease and the hardening nonlinearity of the coupled system increases. This is because the ceramic content (ZrO2) in the functionally graded material increases as the volume fraction exponent increases, and the elastic modulus of the ceramic (ZrO2) is larger than that of its metal counterpart (Ti-6Al-4V).

Conclusions

Active vibration control of a smart fluid-conveying FG cylindrical shell bonded with a laminated piezoelectric actuator in a thermal environment is studied theoretically. A velocity feedback control law is implemented to activate the piezoelectric actuator. Considering the electric-thermo-fluid-structure interaction effect, a nonlinear dynamic model of the system is developed. The effects of various parameters on the nonlinear dynamic behaviour of the system are discussed. The results indicate that the piezoelectric voltage is an effective controlling parameter for vibration control of the system, and that the fluid flow velocity can significantly affect the vibration amplitude and the nonlinearity of the coupled system. The theoretical framework and numerical results presented in this paper are helpful for the application of smart structures subjected to internal fluid flow under harmonic excitation.
2,661.4
2020-01-01T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
On applications of Mathematica Package"FAPT"in QCD We consider computational problems in the framework of nonpower Analityc Perturbation Theory and Fractional Analytic Perturbation Theory that are the generalization of the standard QCD perturbation theory. The singularity-free, finite couplings ${\cal A}_{\nu}(Q^2),{\mathfrak A}_{\nu}(s)$ appear in these approaches as analytic images of the standard QCD coupling powers $\alpha_s^{\nu}(Q^2)$ in the Euclidean and Minkowski domains, respectively. We provide a package"FAPT"based on the system Mathematica for QCD calculations of the images ${\mathcal A}_{\nu}(Q^2)$, ${\mathfrak A}_{\nu}(s)$ up to N$^3$LO of renormalization group evolution. Application of these approaches to Bjorken sum rule analysis and $Q^2$-evolution of higher twist $\mu_4^{p-n}$ is considered. Introduction The QCD perturbation theory (PT) in the region of space-like momentum transfer Q 2 = −q 2 > 0 is based on expansions in a series in powers of the running coupling α s (µ 2 = Q 2 ) which in the one-loop approximation is given by α (1) s (Q 2 ) = (4π/b 0 )/L with b 0 being the first coefficient of the QCD beta function, L = ln(Q 2 /Λ 2 ), and Λ is the QCD scale. The one-loop solution α (1) s (Q 2 ) has a pole singularity at L = 0 called the Landau pole. The ℓ-loop solution α (ℓ) s (Q 2 ) of the renormalization group (RG) equation has an ℓ-root singularity of the type L −1/ℓ at L = 0, which produces the pole as well in the ℓ-order term d ℓ α ℓ s (Q 2 ). This prevents the application of perturbative QCD in the low-momentum space-like regime, Q 2 ∼ Λ 2 , with the effect that hadronic quantities, calculated at the partonic level in terms of a power-series expansion in α s (Q 2 ), are not everywhere well defined. In 1997, Shirkov and Solovtsov discovered couplings A 1 (Q 2 ) free of unphysical singularities in the Euclidean region [1], and Milton and Solovtsov discovered couplings A 1 (s) in the Minkowski region [2]. Due to the absence of singularities of these couplings, it is suggested to use this systematic approach, called Analytic Perturbation Theory (APT), for all Q 2 and s. The APT yields a sensible description of hadronic quantities in QCD (see reviews [3][4][5]), though there are alternative approaches to the singularity of effective charge in QCD -in particular, with respect to the deep infrared region Q 2 < Λ 2 . One of the main advantages of the APT analysis is much faster convergence of the APT nonpower series as compared with the standard PT power series (see [6]). Recently, the analytic and numerical methods, necessary to perform calculations in two-and three-loop approximations, were developed [7][8][9]. The APT approach was applied to calculate properties of a number of hadronic processes, including the width of the inclusive τ lepton decay to hadrons [10][11][12][13][14], the scheme and renormalization-scale dependencies in the Bjorken [15,16] and Gross-Llewellyn Smith [17] sum rules, the width of Υ meson decay to hadrons [18], meson spectrum [19], etc. The generalization of APT for the fractional powers of an effective charge was done in [20,21] and called the Fractional Analytic Perturbation Theory (FAPT). The important advantage of FAPT in this case is that the perturbative results start to be less dependent on the factorization scale. 
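The one-loop running coupling quoted above is straightforward to evaluate numerically. The following minimal sketch (written independently of the "FAPT" package itself) reproduces α_s^(1)(Q²) = (4π/b₀)/L and makes the Landau pole at Q² = Λ² visible; the Λ value is an illustrative choice, not the paper's fit.

```python
import math

def b0(nf):
    """Leading QCD beta-function coefficient: b0 = 11 - 2*nf/3."""
    return 11.0 - 2.0 * nf / 3.0

def alpha_s_one_loop(Q2, Lam=0.35, nf=3):
    """One-loop running coupling (4*pi/b0)/L with L = ln(Q^2/Lambda^2).
    Diverges as Q^2 -> Lambda^2 (Landau pole) and turns negative below it."""
    L = math.log(Q2 / Lam**2)
    return 4.0 * math.pi / (b0(nf) * L)

for Q in (0.5, 1.0, 2.0, 5.0, 91.19):            # GeV
    print(f"Q = {Q:6.2f} GeV   alpha_s^(1) = {alpha_s_one_loop(Q**2):8.4f}")
```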
This reduced scale dependence is reminiscent of the results obtained with the APT applied to the analysis of the pion form factor in the O(α_s²) approximation, where the results also almost cease to depend on the choice of the renormalization scheme and its scale (for a detailed review see [22] and references therein). The process of the Higgs boson decay into a bb̄ pair of quarks was studied within a FAPT-type framework in the Minkowski region at the one-loop level in [23] and within the FAPT at the three-loop level in [21]. The results on the resummation of nonpower-series expansions of the Adler function for the scalar (D_S) and vector (D_V) correlators within the FAPT were presented in [24]. The interplay between higher orders of the perturbative QCD expansion and higher-twist contributions in the analysis of recent Jefferson Lab data on the lowest moment of the spin-dependent proton structure function, Γ_1^p(Q²), was studied in [25] using both the standard PT and APT/FAPT. The FAPT technique was also applied to analyse the behaviour of the structure function F_2(x) at small values of x [26,27] and to calculate binding energies and masses of quarkonia [28]. All these successful applications of APT/FAPT call for a reliable mathematical tool to extend the scope of these approaches. In this paper, we present the theoretical background which is necessary for the running of A_ν[L] and 𝔄_ν[L] in the framework of APT and its fractional generalization, FAPT, and which is collected in the easy-to-use Mathematica package "FAPT" [29]. This task has been partially realized for APT as the Maple package QCDMAPT in [30] and as the Fortran package QCDMAPT_F in [31]. We have organized "FAPT" in the same manner as the well-known package "RunDec" [32]. A few examples of APT and FAPT applications are given.

Theoretical framework

Let us start with the standard definitions used in "FAPT" for standard PT calculations. The QCD running coupling, α_s(µ²) = α_s[L] with L = ln[µ²/Λ²], is defined through the renormalization-group equation (1), where n_f is the number of active flavours. The β-function coefficients b_k(n_f) are given by the standard expressions (see [33]), in which ζ is Riemann's zeta function. Introducing a conveniently normalized coupling, Eq. (1) in the ℓ-loop approximation can be rewritten in terms of it. In the one-loop (ℓ = 1) approximation the normalized solution is simply 1/L, with the Landau pole singularity at L → 0. In the two-loop (ℓ = 2) approximation the solution is expressed through the Lambert function, where W_{-1}[z] is the appropriate branch. The three-loop (ℓ = 3) and higher-loop solutions a^(ℓ)[L; n_f] can be expanded in powers of the two-loop one, a^(2)[L; n_f], as has been suggested and investigated in [8,9,14]. The coefficients C_n^(ℓ) can be evaluated recursively. As has been shown in [9], this expansion has a finite radius of convergence, which appears to be sufficiently large for all values of n_f of practical interest. Note here that this method of expressing the higher-ℓ-loop coupling in powers of the two-loop one is equivalent to the 't Hooft scheme, where one puts by hand all coefficients of the β-function except b_0 and b_1 equal to zero (c_k(n_f) = b_k(n_f) = 0 for all k ≥ 3) and effectively takes into account all higher coefficients b_i by redefining the perturbative coefficients d_i (see [35] for more detail). The basic objects of the Analytic approach are the analytic couplings A_ν[L] in the Euclidean domain and 𝔄_ν[L] in the Minkowski domain. It is convenient to use the following representation for the spectral functions. In the one-loop approximation the corresponding functions have the simplest form, whereas at the two-loop order they have a more complicated form, with W_1[z] being the appropriate branch of the Lambert function.
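At the one-loop level the Euclidean analytic coupling has the well-known closed form A₁(Q²) = (4π/b₀)[1/L − 1/(e^L − 1)]: the Landau pole is removed and the coupling tends to the finite value 4π/b₀ in the deep infrared. The short sketch below compares it with the standard one-loop coupling; it is an illustration written independently of the "FAPT" package, and the Λ value is only an assumption.

```python
import math

def a_pt(L):
    """Standard one-loop coupling in units of 4*pi/b0: simply 1/L."""
    return 1.0 / L

def a_apt(L):
    """One-loop analytic (APT) coupling in the same units:
    1/L - 1/(e^L - 1); finite at L = 0 and -> 1 in the deep infrared."""
    return 1.0 / L - 1.0 / math.expm1(L)

b0, Lam = 9.0, 0.4                        # nf = 3; Lambda ~ 0.4 GeV (illustrative)
for Q in (0.2, 0.3, 0.6, 1.0, 2.0, 10.0): # GeV
    L = math.log(Q**2 / Lam**2)
    print(f"Q = {Q:5.2f} GeV   alpha_PT = {4*math.pi/b0*a_pt(L):9.4f}"
          f"   alpha_APT = {4*math.pi/b0*a_apt(L):9.4f}")
```

Below Q ≈ Λ the standard coupling becomes negative and meaningless, while the analytic one stays bounded, which is the property exploited throughout the applications that follow.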
In the three-loop (ℓ = 3) and four-loop (ℓ = 4) approximations we use Eq. (7) and then obtain the corresponding expressions. Here we do not show explicitly the n_f dependence of the corresponding quantities -- it enters through R^(2). The package "FAPT" performs the calculations of the basic required objects: the running coupling and its analytic images in the Euclidean and Minkowski domains.

APT and FAPT applications

As an example of the APT application, we present the Bjorken sum rule (BSR) analysis (see [39] for more details). The BSR states that the difference between the proton and neutron structure functions, integrated over all possible values of the Bjorken variable x, in the limit of large momentum squared of the exchanged virtual photon, Q² → ∞, is equal to g_A/6, where the nucleon axial charge is g_A = 1.2701 ± 0.0025 [33]. Commonly, one represents the Bjorken integral in Eq. (16) as a sum of perturbative and higher-twist contributions, Eq. (17). The perturbative QCD correction ∆_Bj(Q²) has the form of a power series in the QCD running coupling α_s(Q²). At the up-to-date four-loop level in the massless case in the modified minimal subtraction (MS) scheme, for three active flavours, n_f = 3, it takes the form of Eq. (18) [36]. The perturbative representation (18) violates the analytic properties due to the unphysical singularities of α_s(Q²). To resolve the issue, we apply APT. In particular, the four-loop APT expansion for the perturbative part ∆_Bj^PT(Q²) is given by the formal replacement of the powers of α_s(Q²) by their analytic images, Eq. (19). Clearly, at low Q² the value of α_s is quite large, questioning the convergence of the perturbative QCD series (18). The qualitative resemblance of the coefficient pattern to factorial growth did not escape our attention, although more definite statements, if possible, would require much more effort. This observation allows one to estimate the value α_s ∼ 1/3 at which the three- and four-loop contributions to the BSR become of similar magnitude. To test that, we present in Fig. 1 the relative contributions of the individual terms of the PT series: at low Q² the dominant contribution to the pQCD correction ∆_Bj(Q²) comes from the four-loop term ∼ α_s⁴. Moreover, its relative contribution increases with decreasing Q². In the region Q² > 2 GeV² the situation changes -- the major contribution comes from the one- and two-loop orders there. Analogous curves for the APT series given by Eq. (19) are presented in Fig. 2. Figures 1 and 2 demonstrate the essential difference between the PT and APT cases, namely, the APT expansion displays much better convergence than the PT one. In the APT case, the higher-order contributions are stable at all Q² values: the one-loop contribution gives about 70%, the two-loop about 20%, the three-loop does not exceed 5%, and the four-loop amounts to up to 1%. One can see that the four-loop PT correction becomes equal to the three-loop one at Q² = 2 GeV² and noticeably overestimates it for Q² ∼ 1 GeV² (note that the slopes of these contributions are quite close in a relatively wide Q² region), which may be considered as an extra argument supporting the asymptotic character of the PT series in this region. In the APT case, the contribution of the higher-loop corrections is not as large as in the PT one. The four-loop order in APT can be important, in principle, if theoretical accuracy better than 1% is required. Now we briefly discuss how the APT application affects the values of the higher-twist coefficients µ_{2i}^{p−n} in Eq. (17) extracted from the JLab data. Previously, a detailed higher-twist analysis of the four-loop expansions in powers of α_s was performed in [39]. In Figs. 3 and 4 we present the results of one- and three-parameter fits in various orders of the PT and APT.
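The replacement in Eq. (19) can be illustrated numerically at the one-loop level, where each power (α_s/π)^n is traded for its analytic image. In the sketch below the images A_n are built from the closed-form A_1 via the one-loop relation A_{n+1} = −(1/n) dA_n/dL; the series coefficients are rough values quoted from the literature purely for illustration, and Λ and everything else are assumptions rather than the paper's inputs.

```python
import math

def A1(L):
    """Analytic (APT) image of 1/L at one loop: 1/L - 1/(e^L - 1)."""
    return 1.0 / L - 1.0 / math.expm1(L)

def A(L, n, h=1e-3):
    """Analytic image of 1/L**n at one loop, via A_k = -(1/(k-1)) dA_{k-1}/dL,
    evaluated with nested central differences (illustration only)."""
    if n == 1:
        return A1(L)
    return -(A(L + h, n - 1, h) - A(L - h, n - 1, h)) / (2.0 * h * (n - 1))

b0, Lam2 = 9.0, 0.4**2        # nf = 3 and Lambda ~ 0.4 GeV, illustrative only
# Coefficients of Delta_Bj = sum_n d_n (alpha_s/pi)^n; rough literature values,
# quoted here only to illustrate the PT -> APT replacement of Eq. (19).
d = [1.0, 3.58, 20.21, 175.7]

def delta_pt(Q2):
    a = 4.0 / (b0 * math.log(Q2 / Lam2))          # alpha_s/pi at one loop
    return sum(dn * a ** (k + 1) for k, dn in enumerate(d))

def delta_apt(Q2):
    L = math.log(Q2 / Lam2)
    return sum(dn * (4.0 / b0) ** (k + 1) * A(L, k + 1) for k, dn in enumerate(d))

for Q2 in (0.5, 1.0, 2.0, 5.0):
    print(f"Q^2 = {Q2:4.1f} GeV^2   PT: {delta_pt(Q2):8.3f}   APT: {delta_apt(Q2):8.3f}")
```

Even in this crude setting the PT partial sums blow up at low Q² while the APT sums remain tame, mirroring the behaviour described for Figs. 1 and 2.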
The corresponding fit results for the higher-twist terms µ_{2i}^{p−n}, extracted in different orders of the PT and APT, are given in Table 1 (all numerical results are normalized to the corresponding powers of the nucleon mass M). From these figures and Table 1 one can see that APT allows one to extend the description of the experimental data down to Q² ∼ 0.1 GeV² [39]. At the same time, in the framework of the standard PT the lower border shifts up to higher Q² scales when the order of the PT expansion is increased. This is caused by the extra unphysical singularities in the higher-loop strong coupling. It should be noted that the magnitude of µ_4^{p−n}/M² decreases with the order of the PT and becomes compatible with zero at the four-loop level. It is interesting to mention that a similar decreasing effect has been found in the analysis of the experimental data for the neutrino-nucleon DIS structure function xF_3 [37] and for the charged lepton-nucleon DIS structure function F_2 [38]. Consider now the application of the FAPT approach to the example of the RG evolution of the non-singlet higher twist µ_4^{p−n}(Q²) in Eq. (17). The evolution of the higher-twist terms µ_{6,8,...}^{p−n} is still unknown. The RG evolution of µ_4^{p−n}(Q²) in the standard PT is governed by a fractional power of the running coupling; in the framework of FAPT the corresponding expression is obtained by replacing that power with its analytic image. We present in Table 2 the best fits for µ_4^{p−n}(Q_0²), taking into account the corresponding RG evolution with Q_0² = 1 GeV² as a normalization point, and without the RG evolution (Table 2: results of the higher-twist extraction from the JLab data on the BSR with and without inclusion of the RG evolution of µ_4^{p−n}(Q²), normalized at Q_0² = 1 GeV²). We do not include this evolution for the standard PT calculations and compare with FAPT, since the only effect of that would be the enhancement of the Landau singularities by extra divergences at Q² ∼ Λ², whereas at higher Q² ∼ 1 GeV² the evolution is negligible with respect to other uncertainties. We see from Table 2 that the fit results become more stable with respect to Q_min variations, which reduces the theoretical uncertainty of the BSR analysis.

Summary

To summarize, APT and FAPT are closed theoretical schemes without unphysical singularities or additional phenomenological parameters, which allow one to combine RG invariance, Q²-analyticity, and compatibility with linear integral transformations, and which essentially incorporate nonperturbative structures. The APT endows the coupling constant and related quantities with attractive properties: a universal loop-independent infrared limit and a weak dependence on the number of loops. At the same time, FAPT provides an effective tool to apply the Analytic approach to RG-improved perturbative amplitudes. These approaches are used in many applications. In particular, in this paper we considered the application of APT and FAPT to the RG evolution of non-singlet structure functions and to the Bjorken sum rule higher-twist analysis at scales Q² ∼ Λ². The singularity-free, finite couplings A_ν(Q²), 𝔄_ν(s) appear in APT/FAPT as analytic images of the standard QCD coupling powers α_s^ν(Q²) in the Euclidean and Minkowski domains, respectively. In this paper, we presented the theoretical background used in the package "FAPT" [29], based on the system Mathematica, for QCD calculations in the framework of APT/FAPT, which is needed to compute these couplings up to N³LO of the RG running. We hope that this will expand the use of these approaches.
3,416
2013-10-22T00:00:00.000
[ "Mathematics", "Computer Science", "Physics" ]
Solar wind rotation rate and shear at coronal hole boundaries, possible consequences for magnetic field inversions In-situ measurements by several spacecraft have revealed that the solar wind is frequently perturbed by transient structures (magnetic folds, jets, waves, flux-ropes) that propagate rapidly away from the Sun over large distances. Parker Solar Probe has detected frequent rotations of the magnetic field vector at small heliocentric distances, accompanied by surprisingly large solar wind rotation rates. The physical origin of such magnetic field bends, the conditions for their survival across the interplanetary space, and their relation to solar wind rotation are yet to be clearly understood. We traced measured solar wind flows from the spacecraft position down to the surface of the Sun to identify their potential source regions and used a global MHD model of the corona and solar wind to relate them to the rotational state of the low solar corona. We identified regions of the solar corona for which solar wind speed and rotational shear are important and long-lived, that can be favourable to the development of magnetic deflections and to their propagation across extended heights in the solar wind. We show that coronal rotation is highly structured and that enhanced flow shear develops near the boundaries between coronal holes and streamers, around and above pseudo-streamers, even when such boundaries are aligned with the direction of solar rotation. A large fraction of the switchbacks identified by PSP map back to these regions, both in terms of instantaneous magnetic field connectivity and of the trajectories of wind streams that reach the spacecraft. These regions of strong shears are likely to leave an imprint on the solar wind over large distances and to increase the transverse speed variability in the slow solar wind. The simulations and connectivity analysis suggest they can be a source of the switchbacks and spikes observed by Parker Solar Probe. Introduction Measurements made by Parker Solar Probe (PSP; Fox et al. 2016) during its first set of orbits have revealed several intriguing properties of the solar wind at heliocentric distances that had never been probed before. PSP has shown that strong magnetic perturbations in the form of localised magnetic field-line bends, the most intense of which are termed switchbacks (SB), are omnipresent in the pristine solar wind measured near the Sun ). PSP observations have also shown that the magnitude of the transverse (i.e, rotational) speeds of the solar wind increases rapidly in the corona with increasing proximity to the solar surface, up to amplitudes that were not expected considering its general radial trend higher up in the heliosphere (Kasper et al. 2019). However, current knowledge of the exact way that the solar rotation propagates into and establishes in the highly magnetised solar corona is insufficient to interpret adequately the rotation state of the observed solar wind flows. Physical links between these coincidental phenomena are yet to be identified, but possibly relate to the structure of the solar wind and magnetic field at large scales, and to how their interplay generates regions with contrasting magnetic field directions and flows. Interchange reconnection has been pointed out as a potential candidate mechanism (Fisk & Kasper 2020), as it links large-scale rotation to magnetic reconnection. 
Interchange can occur at a variety of scales in the solar corona, from polar plumes above small magnetic bipoles inside coronal holes (Wang et al. 2004(Wang et al. , 2012Owens & Forsyth 2013) to streamer/coronal hole boundaries. Sudden reversals of the magnetic field have been observed for several decades. Early examples can be spotted in observations from the Helios mission (Behannon & Burlaga 1981). These reversals have since been studied at various distances from the Sun and related to different photospheric, coronal or heliospheric phenomena (Kahler et al. 1996; Ballegooijen et al. 1998;Balogh et al. 1999;Yamauchi et al. 2004b,a;Velli et al. 2011;Neugebauer 2012;Wang et al. 2004Wang et al. , 2012Owens & Forsyth 2013;Neugebauer & Goldstein 2013;Matteini et al. 2014;Borovsky 2016;Horbury et al. 2018;Sterling & Moore 2020). SB's observed by PSP display a high degree of alfvenicity (cross-helicity), indicating that they propagate outwards mostly in the form of incompressible MHD waves , that they correspond to a rotation of the magnetic field vector without change to its absolute value (Mozer et al. 2020), and that equipartition between the transverse velocity (δv ⊥ ) and magnetic field (δb ⊥ / √ ρ) perturbations is observed. The physical origin and evolution of magnetic SBs across vast heliocentric distances remains elusive to this date. Currently debated hypothesis interpret them as products of surface and coronal dynamics (reconnection, jets, plumes) or, inversely, claim them to be formed insitu in the heliosphere (turbulence, large-scale wind shear). Numerical simulations of alfvénic perturbations by Tenerani et al. (2020) suggest that switchbacks originating in the lower corona may survive out to PSP distances on their own, but only as long as they propagate across a sufficiently unperturbed background solar wind (free of significant density fluctuations, flow and magnetic shears that could destabilize them, e.g., the parametric instability). Owens et al. (2020), however, argue that solar wind speed shear is an essential factor for the survival of heliospheric magnetic field inversions (switchbacks) produced close to the Sun up to 1 AU, as these should not last long enough without being amplified by solar wind speed shear along their propagation path. Macneil et al. (2020) have furthermore shown that these magnetic inversions grow in amplitude and in frequency with altitude in HELIOS data, favouring the idea that they are either created or amplified by favourable wind shear in the heliophere. From a different perspective, Squire et al. (2020) propose that magnetic SBs are a natural result of solar wind turbulence, and should therefore be produced in loco throughout the heliosphere. Ruffolo et al. (2020) furthermore suggest that shear-driven MHD turbulence is capable of producing magnetic SBs, especially on the regions just above the Alfvén surface, where the observed solar wind flow density transitions from a striated to a flocculated pattern (cf. DeForest et al. 2016). Large-scale solar wind shear builds up naturally from two different components: gradients in wind speed in the meridional plane (transitions between slow and fast wind streams, between open and closed field regions), and gradients in azimuthal wind speed (rotation). If the role of the former is reasonably easy to identify on large scales, that of the latter is much less well understood. 
The solar photosphere is known to rotate with a welldefined differential latitudinal profile (with the equator having a higher rotation rate than the poles). The corona above, albeit magnetically rooted in the photosphere, seems to exhibit a different rotation pattern, with regions that often appear to be rotating rigidly (Antonucci & Svalgaard 1974;Fisher & Sime 1984). It has been suggested that this rigidity could be the result of the interplay between emerging magnetic flux and the global field, involving sustained magnetic reconnection Nash et al. 1988). Overall, the coronal plasma seems to rotate with a more solid-body like pattern than the photosphere, and the more so at mid and high coronal altitudes (Insley et al. 1995;Bagashvili et al. 2017). The rotation profile of the corona also seems to evolve in time, and to be linked to the solar cycle phase or, at least, to the specific coronal magnetic field configuration at a given moment (Badalyan et al. 2006;Badalyan 2010). Observations by Giordano & Mancuso (2008) using SoHO/UVCS have shown that a number of features superpose to the average large-scale latitudinal trends of the coronal rotation during so-lar minimum. They observed zones displaying particularly low rotation rates (or high rotation periods) that are likely to be located near the coronal hole/streamer boundaries (cf. their Figs. 5 and 6). Similar observations performed during solar maximum (Mancuso & Giordano 2011) indicate a flatter (more solid-body like) rotation profile, albeit with sub-structures that are harder to link clearly to the coronal topology.s In any case, a detailed picture of how the solar rotation establishes throughout the corona, including coronal holes and streamers, and whether it can produce sustained wind shears is still lacking. Closed magnetic loops will tend toward uniform rotation rates all along them (thus opposing rotation-induced shearing; Grappin et al. 2008), while open field lines will develop a wind flow that will see its azimuthal speed decreasing with distance from the Sun (in order to conserve angular momentum, as long as the magnetic tension exerted by the background field becomes weak enough). Streamers are systems of closed magnetic loops, and as such should acquire a shape and rotation pattern that depend on the specific range of solar latitudes that they are magnetically rooted at. Large streamers can encompass a large range of magnetic loop sizes and footpoint rotational speeds, and may develop an internal differential rotation structure. Open field lines will follow either a vertical path or one with strong inflexions around streamers depending on where they are rooted at the surface of the Sun, thus having an impact on the transport of angular momentum from the surface up to the high corona. Thus, rotation shearing layers can develop at specific places of the solar corona, such as at the interfaces between coronal holes and streamers. Such shear layers can be off importance to the formation of magnetic field reversals, if ever they can develop MHD instabilities that allow for the transport of mass, vorticy and helicity across different topological regions (via shearing, resistive or Kelvin-Helmhöltz instabilities; cf. e.g. Dahlburg & Einaudi 2003;Ruffolo et al. 2020). Additionally, the shear patterns formed can be of importance to the transport and amplification of such magnetic structures (or at least a fraction of them). The first four PSP perihelia occurred during solar minimum, between about November 2018 and February 2020. 
During these close passes, the spacecraft remained within 5 degrees of the solar equator during that time, and also close to the heliospheric current sheet (HCS). Solar minimum conditions translated into a corona displaying a large equatorial streamer and two polar coronal holes, essentially in an equator-symmetric configuration, except for the occurrence of a small low-latitude CH (visible during the first and second orbits) and a few equatorward polar CH extensions and small equatorial CHs. This manuscript focuses on investigating the response of the coronal magnetic field and solar wind flows to photospheric rotation, and on pointing out consequences to the interpretation of recent measurements made by PSP. We used an MHD numerical model of the solar wind and corona, and estimations of the sunto-spacecraft connectivity using the IRAP's connectivity tool to determine the coronal context of the solar wind flows probed by PSP during its first few solar encounters (focusing on the first, second and fourth encounters, due to the lack of continued good quality solar wind data for the third encounter). We suggest that the global dynamics of the rotating solar corona can impact the conditions for the formation of magnetic disturbances such as SBs (among others). Coronal magnetic field, wind velocity and rotation rate at three illustrative moments of the solar cycle (from left to right: solar minimum, maximum and decay phase, corresponding to instants t = 0, 3.8, and 4. yr in Fig. 3 of (Pinto et al. 2011)). The left halves of the panels show meridional slices of the wind speed (sonic Mach number, from 0 in solid blue to 2 in solid yellow, and with the yellow/blue boundary representing the sonic surface), and the right halves show slices of the rotation rate of the solar corona (from 0 in black to 14 • /day in light orange). Top and bottom rows correspond to cases with a standard differential rotation and to solid-body rotation at the lower boundary (the lower boundary of the corona is coloured with the same Ω colour scale as the slices on the right side of the images). Blue lines represent magnetic field lines. The frame of reference is inertial (not comoving with the Sun), hence Ω is positive everywhere. Numerical code and setup We used the numerical code DIP to model a 2.5 D axi-symmetric solar corona, setup in a similar way as in Pinto et al. (2016) and Pinto et al. (2011), although with a higher spatial resolution (768 × 768 grid, non-uniform in radius and uniform in latitude) and including rotation. The code solves a system of MHD equations that describes a one-fluid, isothermal, fully ionised and compressible plasma: The model assumes the corona and the solar wind to be isothermal with a uniform coronal temperature T 0 = 1.3 MK and a specific heat ratio γ = 1. The magnetic field B separates into a timeindependent external component B 0 (a potential field resulting from the internal dynamics of the Sun) and into an induced field b. We adopted several configurations for B 0 in order to simulate different moments of the solar activity cycle. The equations are integrated using a high-order compact finite difference scheme (Lele 1992) with third-order Runge-Kutta time-stepping (cf. Grappin et al. 2000). The diffusive terms are adapted to the local grid scale (∆l), that is non-uniform in the radial direction (∆l is minimal close to the lower boundary). The kine-matic viscosity is defined as ν = ν 0 (∆l/∆l 0 ) 2 , typically with ν 0 = 2 × 10 14 cm 2 · s −1 and 0.01 (∆l/∆l 0 ) 2 10. 
The magnetic diffusivity η is scaled similarly. The boundary conditions are formulated in terms of the MHD characteristics by imposing the amplitudes of the characteristics propagating into the numerical domain (the outgoing ones being already completely determined by the dynamics of the system). The upper boundary is placed at r = 15 R and is fully transparent (open to flows and transparent to waves). The lower boundary is placed at r = 1.01 R, is semi-reflective with respect to the Alfvén mode (but transparent with respect to all others), and is also open to flows. We treat the chromosphere and the transition region layers as an interface (or rather a discontinuity), and define the chromospheric reflectivity in terms of the ratio of the Alfvén wave speeds above and below the interface, with C_A representing the Alfvén speed B/(µ_0 ρ)^{1/2}. This approximation is valid for perturbations whose characteristic wavelength is much larger than the thickness of the chromosphere, which is the case for the quasi null-frequency (non-oscillating) rotational forcing that we apply here at the lower boundary. In order to establish coronal rotation, we first let a non-rotating solar wind solution fully develop in the whole numerical domain, and then apply a torque at the lower boundary which accelerates it progressively to the following rotation rate profile:

Ω(θ) = Ω_a + Ω_b sin²θ + Ω_c sin⁴θ, (6)

with θ being the latitude, Ω_a = 14.713°/day, Ω_b = −2.396°/day and Ω_c = −1.787°/day, following Snodgrass & Ulrich (1990).

Fig. 2 (caption): Rotation period as a function of latitude for the same runs as those represented in Fig. 1 (from left to right: solar minimum, maximum and decay phase; differential surface rotation on the top row, solid-body rotation on the bottom row). The red curves indicate the rotation period imposed at the surface (eq. 6, with Ω_b and Ω_c equal to 0 in the solid-body rotation case). Rotation periods peak just outside CH/streamer boundaries, as in the UVCS observations by Giordano & Mancuso (2008). Global (resonant) oscillations of closed loops within the streamers are visible within these main peaks, especially in the case with differential rotation. Maximum rotational shearing occurs at mid-altitudes (below the maximum streamer height). Wind shear is transmitted upwards, well above the streamer heights, in the vicinity of HCSs. At greater heights, the corona tends to progressively approach solid-body rotation within coronal holes (note that the case in the third column has a polar pseudo-streamer).

We also tested solid-body solar rotation profiles, which were achieved by setting Ω_b and Ω_c to 0 in eq. (6). The duration of the initial acceleration period was defined to be ∼ 1/4 of the average (final) rotation period at the surface, and we let the system relax for at least 10 Alfvén crossing times of the whole domain (lower to upper boundary). The initial transient propagates upwards (after crossing the idealised chromospheric interface) predominantly as an Alfvénic wavefront (with little power in the non-Alfvénic characteristics), accelerating the open-field plasma in the azimuthal direction and exciting a few global oscillations in the closed-field regions. After a few Alfvén transit times, and for small enough values of this reflectivity, the corona and wind settle down into a quasi-steady state. For values ∼ 1, the highly transparent surface spins down very quickly, as this configuration corresponds to a lower boundary that cannot oppose the braking torque that results from the net outward angular momentum flux carried away by the solar wind.
For ∼ 0, the lower coronal boundary maintains the imposed rotation, and the magnetic field is line-tied to it. Smaller closed loops keep oscillating resonantly for a long time and the larger ones are sheared indefinitely (as their foot-points suffer a larger range of azimuthal speeds due to differential surface rotation). Intermediate (more solar-like) values of allow for the surface rotation to be maintained in the open-flux regions while the minimal required amount of footpoint leakage is allowed for the closed-flux regions to stabilise (cf. Grappin et al. 2008). The streamers gradually evolve towards a nearly solid-body rotation profile, with a rotation rate determined by that of its magnetic foot-points. Large streamers encompass a wide latitudinal range, and therefore a wide range of footpoint rotation rates. As a result, such streamers develop a more distinguishable differential rotation pattern than the smaller ones. The open field regions (coronal holes) develop permanent azimuthal velocity and magnetic field components, with a reasonably complex spatial distribution in the low corona converging into what can be thought of as the beginning of the Parker spiral on the outer part of the domain. The finite magnetic resistivity affects the width of these regions, but has a negligible effect on the overall rotation rates. We note that in order to achieve long-lasting (i.e, stable) coronal rotation profiles we had to resort to values of of about 10 −3 , rather than to a more realistic = 10 −2 . The reason for this is that the convective dynamics of the surface and sub-surface layers of the Sun are absent from our model, and therefore so are the resulting torques that would counter-balance the weak (but finite) braking torque exerted on the lower boundary by the rotating solar wind. s Rotating corona and solar wind: overview We ran a series of numerical MHD simulations of the solar corona and wind on which axisymmetric streamers and coronal holes are set under rotation following the methods described in Sect. 2. Figure 1 shows three-dimensional renderings of different simulation runs, corresponding to different moments of the solar cycle (from left to right: activity minimum, maximum and decay phase) and to different surface rotation profiles (from top to bottom: differential and solid-body). The blue lines are magnetic field lines. The left halves of images display meridional cuts of the solar wind speed (in units of sonic Mach number, from 0 in solid blue to 2 in solid yellow), while the right side halves show the rotation rate Ω (from 0 in black to 14 • /day in light orange, defined with respect to the inertial reference frame). The solar surface is coloured with the same colour scheme as the right side of the images, and hence show the rotation pattern at the lower coronal boundary. It is immediately clear from the figure that the solar corona assumes a rotation state that is highly structured and that reflects the large-scale topology of the magnetic field. As described above, the streamers tend to set themselves into solid body rotation, with the largest ones developing a more complex internal rotation structure (with some of its larger inner loops undergoing global resonant transverse oscillations for a long period of time). 
Open field lines that pass well within coronal holes (far away from the CH/streamer boundaries) progressively develop a backward bend (opposed to the direction of rotation). The open field lines that pass close to the coronal hole boundaries suffer stronger expansions and deviations from the radial direction, drive the slowest wind flows, and acquire the lowest rotation speeds at mid coronal heights (not very far from the streamer tips). The rotation rates nevertheless remain high at and right above the streamer tips (close to the HCS/HPS), as these tend to assume a rotation profile more closely linked with that of the surface (and of the corresponding streamers). As a result, the strongest spatial contrasts in wind speed and rotation rate Ω are usually found across these regions of the corona.

Varying surface rotation profiles and cycle phase

Different surface rotation profiles lead to qualitatively similar results, with the differences between the differentially-rotating and the solid-body cases being more pronounced in large coronal loop systems. Streamers that are fully rooted at mid or high latitudes (i.e., not equator-symmetric) are subject to footpoint shearing (albeit shallow in amplitude), and acquire an orientation oblique with respect to the meridional plane (which does not happen with solid-body rotation). The positions and amplitudes of the fast and slow wind streams do not change when switching between the two types of rotation. The positions of the regions of higher and lower rotation rates are also maintained, but their amplitudes and substructure can differ noticeably.

Fig. 4 (caption): Zoomed-in view of the absolute magnitudes of the gradients of flow speed (top, |∇u|) and of the rotation rate (bottom, |∇Ω|) close to the equatorial streamer in the solar minimum (left) and maximum (right) configurations. For simplicity, only the differentially-rotating cases are shown, and both quantities are displayed in normalised units (6.27 × 10⁻⁴ s⁻¹ and 9.0 × 10⁻¹³ rad s⁻¹ m⁻¹, respectively). The axes indicate distances in solar radii. The green lines are magnetic field-lines. The coronal hole/streamer boundaries systematically develop flow shears due both to gradients in the solar wind flow along the magnetic field and to gradients in the rotation pattern (flow across the field). These shearing regions reach well into the coronal holes, and into the wind flow well past the streamer top height.

Figure 2 shows the rotation period (in days) as a function of latitude at different radii (r = 1.03, 2.0, 4.0, 8.0 and 16 R, from darker to lighter green lines) for the same runs as those represented in Fig. 1 (from left to right: solar minimum, maximum and decay phase; differential surface rotation on the top row, solid-body rotation on the bottom row). The red curves indicate the rotation period imposed at the surface (eq. 6, with Ω_b and Ω_c equal to 0 in the solid-body rotation case).
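The surface rotation law of eq. (6), with the Snodgrass & Ulrich (1990) coefficients quoted earlier, is simple to tabulate; the minimal sketch below converts it into the rotation period as a function of latitude, i.e. the quantity traced by the red curves of Fig. 2.

```python
import numpy as np

# Eq. (6): Omega(theta) = Omega_a + Omega_b*sin(theta)**2 + Omega_c*sin(theta)**4
OM_A, OM_B, OM_C = 14.713, -2.396, -1.787     # deg/day (Snodgrass & Ulrich 1990)

def omega_deg_per_day(lat_deg, solid_body=False):
    """Surface rotation rate; solid_body=True sets Omega_b = Omega_c = 0."""
    s2 = np.sin(np.radians(lat_deg)) ** 2
    if solid_body:
        return np.full_like(s2, OM_A)
    return OM_A + OM_B * s2 + OM_C * s2 ** 2

latitudes = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
for lat, om in zip(latitudes, omega_deg_per_day(latitudes)):
    print(f"latitude {lat:4.0f} deg:  Omega = {om:6.3f} deg/day,  period = {360.0/om:5.2f} days")
```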
The direct imprint of the imposed surface profile is only observed within the first few solar radii, with the overall latitudinal trend of the rotation period (equator to poles) even reversing in the high corona in some cases (e.g., in the first and third columns of the figure). Remarkably, rotation periods peak strongly just outside CH/streamer boundaries. These peaks are present along the whole extension of the boundaries (note how they get closer together with increasing altitude, especially for the cases with large streamers), and even beyond. They correspond spatially to the slowing down of the rotation rate visible as dark patches in the right halves of the panels in Fig. 1. These features are particularly prominent in the solar minimum configuration, with a very large equatorial streamer, and strikingly consistent with the solar coronal rotation periods measured below 2 R by Giordano & Mancuso (2008) using SoHO/UVCS data (during activity minimum, May 1996 to May 1997). Their observations highlight that, as in our simulation, the larger gradients of rotation rate are found at the boundaries between open and closed magnetic field lines. Our simulations suggest furthermore that this behaviour is universal across the activity cycle, although less easy to dis-tinguish during solar maximum (cf. second column in Fig. 2), due to a more intricate mixture of smaller streamers/pseudostreamers and of thinner coronal holes. This solar minimum to maximum variation is also consistent with the results of Mancuso & Giordano (2011), who performed a similar analysis with UVCS data during solar maximum (March 1999to December 2002. Global (resonant) oscillations of closed loops within the streamers are visible in the interval between these main peaks in rotation period (that is, within streamers), especially in the cases with differential rotation. While latitudinal gradients in rotation rate tend to be maximal at mid-altitudes (below maximum streamer height), they remain significant much beyond the height of the largest streamers, at the vicinities of HCSs. The gradients in rotation period are overall steeper in the cases under differential surface rotation than on the cases with solid-body surface rotation. Shear flow morphology In order to better show the form and amplitude of the shearing flows imposed by the global coronal rotation, Figure 3 displays a series of velocity streamlines (or flowlines) corresponding to flows passing above and below the CH/streamer interface (from streamer mid-height to top). The plots to the left show, for comparison, a non-rotating solar wind solution, while the plots to the right show the solar minimum configuration with solar-like differential rotation. Three perspectives are presented: a front view, a side view, and a pole-on view (from the north pole), all in the inertial (not rotating) reference frame. All streamlines are inte-u φ to u (r,θ) ratio B φ to B (r,θ) ratio Fig. 5. Ratio of azimuthal to meridional vector components of the velocity (top) and magnetic field (bottom) for the solar minimum (left) and solar maximum (right) configurations. Distances are in solar radii, and green lines are magnetic field-lines. For simplicity, only differentially-rotating cases are shown. The core of closed-field regions display high u φ /u (r,θ) ratios and B φ /B (r,θ) ≈ 0 due to them being in solid-body rotation. Open-field regions develop a u φ /u (r,θ) profile that decays with altitude while B φ /B (r,θ) grows. 
Coronal hole/streamer boundaries show stark contrasts in velocity and magnetic field pitch angles, and also extend coronal regions above the mid-latitude pseudo-streamers on the right panels. grated for the same physical time interval, such that the length of each streamline indicates the total displacement of a given fluid element during that period of time. Streamlines that correspond to plasma lying well within the streamer make an arc of a circle around the Sun (CCW on the pole-on view), while those connected to solar wind streams extend outward. Among the latter, those that are farther away from the CH/streamer boundary cross the domain faster and follow a straighter path, while those that pass closer to it also accelerate slower and suffer a larger azimuthal deviation before they join the bulk of the wind flow above. Some flows transition between the two regions, especially those passing very close to the interface for which diffusive processes favour that transition. The side-view (second row) also shows how coronal rotation makes the equatorial streamer slightly taller and with a sharper transition to the neighboring coronal holes, on the region that develops the strongest magnetic and flow shear. Wind shear near the coronal hole boundaries and on the extended corona The CH/streamer interface region combines two mechanisms that generate shearing flows: one that acts on the meridional plane (due to the transition from the inner streamer stagnant flow, to the slow wind region and finally to the fast wind), and another one that translates into changes in the azimuthal component of the flow velocity (due to the different rotation regimes on each side of the boundary). Figure 4 shows the absolute values of the gradients of the wind speed (top) and Ω (bottom) in the low corona during solar minimum (left) and solar max-imum (right), focusing on the low latitude regions. In absolute terms, flow shear is maximum on broad regions around the CH/streamer interfaces, that extend partially into the close-field regions and into the coronal holes. These broad regions of strong shear clearly extend upwards, surrounding the heliospheric currents sheets (HCSs). The closed-field side of the shear region encloses the transition from the solidly-rotating but windless part of the domain to the slow wind zone, while the open-field side covers the transition from slow to fast solar wind flow (with decreasing rotation rates). These shearing layers are longer and thicker on the solar minimum configuration, which comprises a large equatorial streamer. The contribution of coronal rotation to the overall shear is more clearly visible on the bottom row of the figure. Strong gradients of Ω are found to be more tightly concentrated around the interface layers. However, these gradients extend sideways well into the coronal holes, and upwards asides HCSs and also along pseudo-streamer axis (cf. the two mid-latitude pseudo-streamer structures on the panel to the right of the figure). In addition, current density (∼ ∇×B) also accumulates at the coronal hole/streamer boundaries, at the streamer tip and along the HCS. Plasma β increases with height along the interface, eventually becoming larger than 1 close to the streamer tip (and base of the HCS). Angular deviations of magnetic field lines relative to the meridional plane (due to rotation) are small at these heights, and therefore hard to visualize directly. 
But they are present nonetheless, and some of the plotted field lines can be seen to go in and out of the meridional planes represented in Fig. 1. Open field lines bend backwards, as expected, and more strongly in the slow wind parts, while the closed loops on the other side of the interface are close to solid-body rotation. From the high corona upward, the transverse (azimuthal) component of the magnetic field grows in amplitude with growing distance from the Sun in response to solar rotation, as expected from classic wind theory. The azimuthal component of the velocity field, conversely, can be significant in the low corona, but decreases asymptotically in the high corona. Fig. 5 shows the spatial distribution of the ratios between the azimuthal and the poloidal components of the velocity and of the magnetic field vectors (u_φ/u_(r,θ) and B_φ/B_(r,θ), in absolute value). High values correspond to high pitch angles, defined as the angle between the vector and the meridional plane, or the angular deviation with respect to a purely poloidal flow (in r and θ). A non-rotating corona displays null values everywhere. A rotating corona will contain closed-field regions in (quasi) solid-body rotation, with very high u_φ/u_(r,θ) and B_φ/B_(r,θ) ≈ 0, and open-field regions for which u_φ/u_(r,θ) decreases and B_φ/B_(r,θ) increases with altitude. Magnetic field-lines are represented as green lines. Some magnetic features, such as the pseudo-streamers in the case represented in the right panels, generate localized enhancements of the transverse (azimuthal) fields that are felt across the radial extent of the numerical domain. The combination of flow shears in the meridional and azimuthal directions acts as a persistent source of flow vorticity (defined as ∇ × u), also with multiple components. Pure wind speed shear, due to variations in the radial and latitudinal components of the wind speed gradients, translates into an azimuthal vorticity vector that represents vortical motions contained within the (r, θ) plane. On the other hand, spatial variations in the azimuthal speed (due to rotation) produce a vorticity vector with only radial and latitudinal components (poloidal vorticity), corresponding to vortical flows that develop in the (θ, φ) or in the (r, φ) planes.

Fig. 6 (caption): Flow vorticity |∇ × u| (top) and ratio of poloidal (rotation-induced) to azimuthal (parallel-flow-induced) vorticity (bottom). The shearing regions of Fig. 4 are a source of vorticity that extends along the CH/streamer boundaries and above. The yellow regions in the bottom plots indicate zones where the poloidal component (the only one that gives rise to field-aligned vorticity, due to gradients of v_φ produced by coronal rotation) is predominant. Pink areas are dominated by wind speed shear (rather than rotational shear). Streamers have a strong rotational signature. Pseudo-streamer axes display elongated rotational shearing zones.

Figure 6 shows the spatial distributions of the absolute values of the flow vorticity (top) and the ratio of the poloidal to azimuthal vorticity (i.e., the ratio of rotation-induced to wind-speed-induced vorticities; bottom plots). The left and right columns show the differentially-rotating cases at solar minimum and at solar maximum, as in the previous figures. Flow vorticity is maximal at the interfaces between closed and open magnetic field, as can be guessed from Figs. 4 and 5, and extends outward around the streamer and pseudo-streamer stalks.
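For an axisymmetric flow u(r, θ) this split follows directly from the curl in spherical coordinates: the azimuthal component ω_φ is generated by shear of the meridional (wind) flow, while the poloidal components (ω_r, ω_θ) come entirely from gradients of the rotational speed u_φ. The sketch below evaluates both parts on an (r, θ) grid with finite differences; it is a generic diagnostic written for illustration, not the DIP code's own implementation, and the toy velocity profile at the end is arbitrary.

```python
import numpy as np

def axisym_vorticity(r, th, u_r, u_th, u_phi):
    """Curl of an axisymmetric vector field in spherical coordinates (d/dphi = 0).
    r: radii (nr,), th: colatitudes (nth,); velocity components shaped (nr, nth).
    Returns (w_r, w_th, w_phi)."""
    R, TH = np.meshgrid(r, th, indexing="ij")
    d_dr = lambda f: np.gradient(f, r, axis=0)
    d_dth = lambda f: np.gradient(f, th, axis=1)
    w_r = d_dth(u_phi * np.sin(TH)) / (R * np.sin(TH))    # poloidal, rotation-induced
    w_th = -d_dr(R * u_phi) / R                           # poloidal, rotation-induced
    w_phi = (d_dr(R * u_th) - d_dth(u_r)) / R             # azimuthal, wind-speed shear
    return w_r, w_th, w_phi

def poloidal_to_azimuthal(w_r, w_th, w_phi, eps=1e-30):
    """Ratio of rotation-induced to wind-shear-induced vorticity magnitudes
    (the diagnostic shown in the bottom panels of Fig. 6)."""
    return np.sqrt(w_r**2 + w_th**2) / (np.abs(w_phi) + eps)

# Tiny usage example on a toy sheared wind with decaying rotation:
r = np.linspace(1.0, 15.0, 64)
th = np.linspace(0.1, np.pi - 0.1, 65)
R, TH = np.meshgrid(r, th, indexing="ij")
u_r = 300.0 * (1.0 - np.exp(-(R - 1.0))) * (1.0 + 0.3 * np.cos(TH) ** 2)
u_th = np.zeros_like(R)
u_phi = 2.0 * np.sin(TH) / R
ratio = poloidal_to_azimuthal(*axisym_vorticity(r, th, u_r, u_th, u_phi))
print("median poloidal/azimuthal vorticity ratio:", np.median(ratio))
```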
The yellow regions in the bottom plots indicate zones where the poloidal component -the one that is rotation-induced and can give rise to field-aligned vorticity -is predominant. The pink areas are, conversely, dominated by wind speed shear caused by the spatial distributions of slow and fast wind flows and of wind-less closed-field regions. Streamers (and generally closed-field regions) have a strong rotational signature that is present up to their boundaries. The accelerating wind flows that surround them rapidly reach speeds high enough to overcome the azimuthal (rotational) speeds, and most of the coronal hole vorticity becomes dominated by wind speed shear. However, some coronal regions develop a strong and extended rotation-induced vorticity signature. The most remarkable examples are pseudo-streamers located at mid-latitudes, which develop elongated rotational shearing zones along their magnetic axis, as shown in the bottom right panel of Fig. 6. These structures separate regions of the corona filled with magnetic field rooted at very different latitudes (polar and quasi-equatorial CHs) with rather different surface rotation rates. Flow shear remains predominantly forced by global rotation through large distances on these layers, and thus developing a mainly field-aligned vorticity (corresponding to vortical motions orthogonal to the main magnetic field orientation). Sun -Parker Solar Probe connectivity context In order to establish links between the results of our numerical simulations and measurements made by Parer Solar Probe, we attempted to determine the source regions of the wind streams detected in-situ, together with their trajectories across the solar corona. This let us verify the applicability of the model to the coronal context at play, and hence to determine the physical conditions most likely experienced by those solar wind flows. The Connectivity Tool We have used the IRAP's Connectivity Tool (http:// connect-tool.irap.omp.eu/) to determine the most probable paths that the solar wind streams took all the way from the solar surface to the position of PSP during its first four perihelia. The connectivity tool calculates continuously the magnetic and solar wind connectivity between the Sun and different spacecraft (or Earth), so as to establish physical links between them all along their orbits (Rouillard et al. 2020b). The tool offers different possible sources of input magnetogram data, extrapolation methods and wind propagation models, and lets the user assess uncertainties and inconsistencies related to the measurements and models used. We chose to setup the connectivity tool with standard Potential Field Source-Surface (PFSS) extrapolations of Air Force Data Assimilative Photospheric Flux Transport (ADAPT) magnetograms (Arge & Pizzo 2000;Arge et al. 2003). It also provides an evaluation of the Potential Field Source-Surface (PFSS) reconstructions by comparing the topology of the neutral line with that of streamers (Poirier et al. 2021, submitted). The choice of the ADAPT magnetograms was based on this evaluation procedure. Propagation paths and temporal delays were determined by adjusting a solar wind profile to the wind speeds measured at the position of PSP at each moment of its trajectory. The wind velocities were obtained from the SWEAP instrument suite (Kasper et al. 2016), and particularly from the plasma moments from the Solar Probe Cup (SPC; Case et al. 2020). 
The solar wind mapping from the spacecraft position to the low corona, and from there to the surface, can be affected by different sources of error. Uncertainties related to the exact wind acceleration profile can lead to different solar wind propagation paths, and hence to deviations in longitude in the high corona, and total wind travel time. PFSS extrapolations from magnetic fields measured at the surface of the Sun are furthermore occasionally affected by positional errors that translate into few-degree deviations of the coronal structures in latitude. In order to cope with these issues, the Connectivity Tool determines the points at the surface of the Sun that connect to an uncertainty ellipse around the orbital position of the target spacecraft (covering the expected latitudinal and longitudinal uncertainties). For any given time, the tool hence provides a list of surface foot-points with different associated probabilities, rather than unique positions. The time period covered by this manuscript corresponds to solar minimum configuration, with a close to axi-symmetric corona and a rather flat HCS. As a result, the dispersion of the foot point positions predicted for any given date and time leads to region very elongated in the azimuthal direction, with the corresponding errors rarely leading to topologically different regions of the Sun. This holds also for moments when the spacecraft is in close proximity to the HCS and the expected footpoints split into both solar hemispheres (northern and southern end-regions are topologically similar). We also made extra runs with lower and higher solar wind speeds for the duration of the time-periods analysed, and found that the properties of the connected regions do not change significantly. Our aim here is to identify the types of solar wind source regions (in the topological sense) rather than exact surface footpoint coordinates. Therefore the error sources described above do not translate into variations of the dynamical and topological properties of the regions crossed by the solar wind, and therefore should not affect our analysis. -Feb. 2020 (bottom). Pale blue and red layers cover the base of coronal holes with positive and negative polarity, respectively. The colored symbols represent the positions of the solar wind plasma that reached PSP computed using the IRAP's connectivity tool, with in-situ measurement dates being labelled in corresponding colours. Stars indicate solar wind plasma position at the PFSS source-surface altitude (only the centroid of the uncertainty ellipse is shown, for simplicity), and circles indicate the most likely wind source positions at the solar surface. The dotted lines that connect them are visual aids and, for simplicity, do not trace the actual field-line trajectories. Following the description in Sect. 3.1, each star corresponds to several circles that indicate the scatter due to mapping uncertainties. The red dashed-line represents the neutral line (base of the HCS) obtained from PFSS extrapolation of ADAPT/GONG maps. We have excluded the third PSP encounter due to the unavailability of continued good quality solar wind data during the corresponding time interval. Sun to spacecraft connectivity, solar wind trajectories The surface footpoint charts overlaid on the EUV 193 Å synoptic maps indicate that PSP spent a significant fraction of its first passages sampling solar wind streams that developed at the vicinity of predominantly azimuthaly-aligned CHstreamer boundaries. 
These regions correspond topologically to the boundary shear layers illustrated in the top-left panel of Fig. 4 (for our simulated solar minimum case). The mappings at 5 R on the right panels (white-light maps) show that the streams detected by PSP propagated through the heliospheric plasma sheet (in close proximity to the HCS), and through the bright streamer-belt region. These regions correspond to the equatorial regions in the high corona in our solar minimum simulations, just above the tip of the large equatorial streamer. Solar wind connectivity across this range of heliocentric distances (1 to 5 R) is better shown in Figure 8 for one selected instant (during the second PSP encounter). The figure shows a three-dimensional rendering of the global magnetic field structure of the corona on March 23rd 2019 (cf. Fig. 7) and the magnetic field lines along which the wind plasma that reached PSP at the represented instant is likely to have escaped. The two panels show side and top views, as in Fig. 3 (middle and bottom panels) for our solar minimum MHD simulations. The ADAPT/GONG magnetogram used in our coronal field reconstruction is plotted over the surface of the Sun, with green and red shades representing positive and negative polarities. The transparent grey surface represents the boundary between the large equatorial streamer and the polar coronal holes, and the violet ribbon indicates the base of the HCS (polarity inversion line). Light yellow lines represent a series of magnetic field lines that sample the whole uncertainty ellipse taken into account by the Connectivity Tool. A fraction of these lines map towards the base of the coronal hole on the northern hemisphere, but are discarded because they correspond to a magnetic polarity inverse of that measured in-situ by PSP. The dark red lines indicate the connectivity paths ranked with the highest connectivity probability. The blue lines trace a few additional open field lines rooted inside coronal holes.

Fig. 7 caption (fragment): Connectivity is computed by projecting the solar wind speed measured at PSP backwards to the surface of the Sun, considering the most likely Parker spiral, accounting for solar wind travel time and connecting to coronal field reconstructions based on magnetograms corresponding to the expected solar wind release times. The left panels display Carrington maps of EUV emission (SDO/AIA 193 Å) overlaid with the most probable footpoints (solar wind source positions, coloured circles) at the surface of the Sun for the dates indicated, and the centroid of the connectivity probability distribution at 5 R (coloured stars). The cyan line indicates the trajectory of the 2.5 R connectivity point. The coloured dotted lines are visual aids to connect source-surface to surface positions (but do not trace the actual field-line trajectories). Date labels indicate in-situ solar wind measurement times. The right panels show similar maps of white-light emission at 5 R (SoHO/LASCO C3), overlaid with similar markers (at 5 R). The red dashed line represents the neutral line (base of the HCS) obtained from PFSS extrapolation of ADAPT/GONG maps. PSP spent a significant fraction of its first passages sampling solar wind streams that developed in the vicinity of predominantly azimuthally aligned CH-streamer boundaries, and that propagated through the heliospheric plasma sheet (in close proximity to the HCS, within the bright streamer-belt region).
It is clear from the figure that the wind streams sampled by PSP at this instant developed along paths that closely delineate azimuthally aligned CH-streamer boundaries, in concordance with the MHD simulations discussed in Sect. 2. In view of the temporal sequence of Sun-to-PSP solar wind connectivity displayed in Fig. 7, this configuration was the most common one throughout the periods analysed. The most probable Sun-spacecraft solar wind propagation paths lie, for the most part, right along the boundary between the large equatorial streamer and the polar coronal holes. As PSP proceeded on its orbit, the source regions at the surface scanned this boundary continuously, except during brief periods of connection to low-latitude coronal holes or to deep equatorward polar CH extensions. PSP spent a large fraction of its first few encounters probing solar wind streams that formed in the vicinity of azimuthally aligned CH/streamer boundaries, especially so during the 2nd and 4th encounters, with occasional connections to small low-latitude coronal holes or equatorward polar coronal hole extensions (Griton et al. 2020, in press). The solar corona retained a high degree of axial symmetry throughout the time periods involved in this analysis. The HCS (and HPS) maintained a predominant E-W orientation in the solar regions connected to PSP. These reasons justify the use of the MHD simulations represented in Figs. 1 to 6. This is especially true for the solar minimum case (with an axisymmetric large equatorial streamer). Deviations from this configuration, such as the coronal hole extensions and equatorial coronal holes visible near Carrington longitudes 280 and 80 in Fig. 7, are in principle associated with pseudo-streamers such as those in our simulations for the cycle decay phase (right panels of Figs. 4 to 6, at mid-latitudes), although perhaps with different sizes and orientations.

Fig. 8. Three-dimensional rendering of the Sun-to-PSP connectivity paths on Mar 23rd 2019 at 00:00 UT, during PSP's second encounter (cf. middle panels of Fig. 7). The two panels show side and top views, as for the simulations in Fig. 3 (middle and bottom panels). The radial component of the surface magnetic field is represented in green (positive polarity) and red (negative polarity) tones, retrieved from the same ADAPT/GONG magnetograms as in the previous figure. The transparent grey surface represents the boundary between the large equatorial streamer and the polar coronal holes, and the violet ribbon indicates the base of the HCS. The light yellow lines cover the whole uncertainty ellipse taken into account by the Connectivity Tool. A fraction of these lines map towards the northern hemisphere, but can be discarded because they correspond to a magnetic polarity inverse of what PSP measured in-situ (although they fall into topologically and dynamically equivalent regions of the corona). The dark red lines indicate the connectivity paths ranked with the highest connectivity probability. The blue lines trace a few additional open field lines rooted inside coronal holes. PSP is connected to field lines (and wind streams) that closely delineate azimuthally aligned CH-streamer boundaries, in concordance with the MHD simulations discussed in Sect. 2. Note how the lines (that sample the whole uncertainty ellipse) clearly fall into compact regions that correspond to the broad shearing zones in the top-left panels of Figs. 4 and 6.
We consider that the regions rooted at about ±60 deg in our simulations for the solar minimum case (cf. Figs. 4 to 6, left) are the most representative of the source regions of the wind flows detected by PSP. These are coronal hole boundary regions oriented in the east-west direction (parallel to the direction of solar rotation). Solar wind streams originating from these regions are accelerated through an environment with significant and spatially extended solar wind speed and rotation shear, which also corresponds to the peaks in rotation period in the left panels of Fig. 2. The dynamical properties of these zones of the corona should have an impact on the properties of the wind measured in-situ, if not be directly responsible for some of their characteristics. In PSP data, transitions from streamer (i.e., boundary layer) to non-streamer (coronal hole core) wind flows were accompanied by a clear decrease in the variability of the wind (Rouillard et al. 2020a), both in the frequency and amplitude of magnetic SBs and in the occurrence of strong density fluctuations. This suggests that the physical conditions associated with coronal hole boundaries are favourable to the development of such perturbations. Interchange reconnection, often invoked as a possible SB generation mechanism (Fisk & Kasper 2020), relies on the forcing of these boundary regions by the large-scale rotation of the corona. However, it is expected to be enhanced (or more efficiently driven) at CH/streamer boundaries that are orthogonal or inclined with respect to the direction of rotation as, e.g., when a streamer pushes into neighbouring CHs (see, e.g., Lionello et al. 2005), and to be reduced at azimuthally aligned CH/streamer boundaries. This is in contrast with the general spatial orientation of the equatorial streamer observed during this period and that of our simulations, in which streamer/CH boundaries are parallel to the direction of rotation. While less favourable to interchange reconnection, this spatial orientation does not hamper the formation of the shear flows in the solar wind shown in Figs. 3 to 6. Furthermore, these shears remain visible up to large heliocentric distances, which could favour the propagation (or even amplification) of magnetic perturbations formed in the low corona, allowing them to survive more easily up to the altitude of detection (Macneil et al. 2020). Beyond this phenomenology, the boundaries of polar coronal holes are also known to be highly dynamic and undergo significant reconfiguration over the roughly 24-hour timescale of supergranules (Wang et al. 2010). These effects are not modelled in this paper, but could induce additional variability that could be felt by PSP. The existence of neighbouring solar wind streams with different rotation rates (such as those in the streamer stalk regions in our simulations) should furthermore contribute to increasing the variability of the transverse (rotational) velocities measured by PSP, especially as the HCS is slightly warped (cf. Fig. 7).

Summary

We investigate the development of spatially extended solar wind shear regions induced by solar rotation and by variations in solar wind speed in the light of recent Parker Solar Probe findings. Our analysis combines simulations made using an MHD numerical model of the solar wind and corona, and estimations of the Sun-to-spacecraft connectivity during the first four PSP solar encounters to aid in associating model results with spacecraft data. Our main findings are that: 1.
Solar wind flows that develop in the vicinity of coronal hole boundaries are subject to persistent and spatially extended shearing. There are two components to this shearing: a wind speed shear due to the transition from closed-field (no-wind) to the slow and fast wind regions, and a rotational shear due to the way coronal rotation settles in response to the rotation of the solar surface. The most significant shearing occurs in thin layers that lie along CH/streamer (or pseudo-streamer) boundaries, and that stretch outwards in the vicinity of heliospheric current sheets and pseudo-streamer stalks. Wind speed shear generates a spatially broader shearing signature associated with a large-scale vorticity vector oriented in the azimuthal direction. Rotational shearing produces shearing patterns with field-aligned vorticity that become predominant in elongated regions above pseudo-streamer stalks and that extend to great distances from the Sun. 2. The solar corona acquires a complex rotation pattern that differs significantly from that of the surface rotation that drives it. Closed-field regions (streamers, pseudo-streamers) tend to set themselves into solid body rotation, with a rate consistent with that of the surface regions in which they are rooted. Open field lines show a variety of rotation rates, with those that pass near coronal hole boundaries acquiring the lowest rotation rates at mid-coronal heights, in stark contrast with the closed-field regions across the boundary. This results in clear increases of rotation period adjacent to streamers (especially visible in large streamers), in agreement with SoHO/UVCS observations. Streamer stalks (and the vicinity of the HCS/HPS) can contain a mixture of wind streams at different rotation rates (slowly rotating flows coming from the CH boundaries, faster wind flows coming from the streamer tips). 3. Solar wind flows probed by Parker Solar Probe during its first four orbits form and propagate away from the Sun through regions of enhanced wind speed and rotational shear. Our Sun-to-spacecraft connectivity analysis shows that such solar wind flows originated mostly at the boundaries of quasi-axisymmetric polar coronal holes, with occasional crossings of low-latitude coronal holes. The measured wind flows showed a strong and complex rotational signature permeated by pervasive magnetic perturbations such as switchbacks (among others). Our results suggest that the slow wind flows detected by PSP should experience persistent shears across their formation and acceleration regions, supporting the idea that these shears should have an impact on the formation of localised magnetic field reversals and be favourable to their survival across the heliosphere.

Discussion and perspectives

Unlike that of its photospheric counterpart, current knowledge of the rotational state of the solar corona is very limited. Different observation campaigns (relying on different measurement methods) have suggested that the corona rotates in a manner that is not in direct correspondence with the surface rotation, and that it depends to some degree on the solar cycle phase (that is, on the global magnetic topology). More recently, Parker Solar Probe unveiled the prevalence of strong rotational flows that increased rapidly in amplitude as the spacecraft approached the Sun, at least in the regions probed during its first few close passes (Kasper et al. 2019).
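The first finding above distinguishes the two shear contributions by the orientation of the associated vorticity vector relative to the magnetic field. The block below is a minimal sketch of that decomposition at a single point of a simulation output; the input vectors are illustrative placeholders, and computing the curl itself on the model's spherical grid is not shown.

```python
# Minimal sketch of splitting a local vorticity vector into field-aligned and
# transverse parts. Inputs are assumed to be sampled from an MHD solution at one
# grid point; the numerical values below are illustrative only.
import numpy as np

def decompose_vorticity(omega, b):
    """omega, b: 3-vectors (e.g. r, theta, phi components) at one grid point.
    Returns (signed field-aligned vorticity, magnitude of the transverse part)."""
    b_hat = b / np.linalg.norm(b)
    omega_par = np.dot(omega, b_hat)          # rotation-induced, field-aligned part
    omega_perp = omega - omega_par * b_hat    # part orthogonal to B (wind-speed shear)
    return omega_par, np.linalg.norm(omega_perp)

# Example: mostly azimuthal vorticity (wind-speed shear) with a weaker
# field-aligned contribution from rotational shear, on a nearly radial open field.
omega = np.array([0.0, 1.0e-6, 5.0e-6])   # rad/s
b = np.array([4.0e-9, 0.0, -1.0e-9])      # T
print(decompose_vorticity(omega, b))
```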
Our MHD simulations show that surface rotation is transmitted to the corona in a complex manner that depends intrinsically on the organisation of the large-scale magnetic field at any given moment. Coronal rotation is highly structured at low coronal altitudes, with a clear signature of slowly rotating flows that follow the CH/streamer boundaries, in agreement with the observations by Giordano & Mancuso (2008). Some of these strong gradients in rotation rate produce an imprint that extends far into the high corona (up to the upper boundary of the numerical model). These non-uniformities in rotation rate translate into solar flow shear with a vorticity component oriented along the magnetic field and solar wind propagation direction, which adds to the shear caused by the spatial distribution of fast and slow wind flows (which can only generate an orthogonal vorticity component). The MHD model setup relies upon a number of simplifications to the full physical problem. It uses a polytropic description of the plasma thermodynamics, meaning that the heating and cooling mechanisms are not modelled in detail, but that the main dynamical and geometrical features of the solar wind are retained. This approach furthermore leads to a solar wind with speed variations that are much weaker than those found in the real solar atmosphere (a smaller contrast between typical fast and slow wind speeds, and broader transitions). We settled on the limiting case in which the rotating solar corona and solar wind are perfectly axisymmetric (with CH/streamer boundaries perfectly parallel to the direction of rotation). Rotation-induced interchange reconnection is completely inhibited in this configuration, as is the development of shear instabilities; the formation of full vortical flows in the (θ, φ) plane and the injection of helicity into the wind flow are likewise inhibited. This choice of problem symmetry has, however, a number of advantages with respect to the full 3D equivalent, namely that it allows running many more variations of parameters (different phases of the cycle, different solar surface rotation profiles), and that the runs can be easily made at a higher spatial resolution. As a consequence, the simulations develop sharper CH/streamer boundaries than their full 3D counterparts, which helps make the rotation shears more apparent. These boundaries should, notwithstanding, be much sharper in the real solar corona. For all these reasons, the amplitudes of the solar wind speed and rotational shearing layers should be higher on the real Sun than those that the MHD simulations are capable of producing. We also used an idealised solar dynamo model to constrain the large-scale magnetic field topology at each moment of the solar cycle, meaning that our MHD simulations do not aim at modelling a specific event, but rather at letting us understand the dynamics of the regions of interest (the model produces a full set of typical solar coronal structures - streamers, pseudo-streamers and coronal holes - placed at different latitudes according to solar activity). As we have shown with the help of IRAP's connectivity tool (http://connect-tool.irap.omp.eu/), the solar wind streams that reached Parker Solar Probe during its first few encounters most often traversed the vicinity of the boundaries between a large equatorial streamer and the adjacent coronal holes.
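As a simple point of reference for the accelerating wind profiles (and hence the speed contrast) discussed above, the following sketch solves the classic isothermal Parker wind equation. It is not the polytropic MHD model used in this work, and the coronal temperature is an assumed, illustrative value.

```python
# Minimal sketch of the isothermal Parker wind solution on the transonic branch,
# shown only to illustrate how an accelerating v(r) profile arises. This is not
# the polytropic MHD model of the paper; T is an assumed value.
import numpy as np
from scipy.optimize import brentq

G, M_SUN, RSUN = 6.674e-11, 1.989e30, 6.957e8
K_B, M_P = 1.381e-23, 1.673e-27

def parker_speed(r_rs, T=1.2e6):
    """Wind speed [km/s] at heliocentric distance r (in solar radii)."""
    cs = np.sqrt(2.0 * K_B * T / M_P)          # isothermal sound speed (H plasma)
    rc = G * M_SUN / (2.0 * cs**2)             # critical (sonic) radius [m]
    r = r_rs * RSUN
    rhs = 4.0 * np.log(r / rc) + 4.0 * rc / r - 3.0
    f = lambda m: m**2 - np.log(m**2) - rhs
    # Transonic branch: subsonic root below the critical radius, supersonic above.
    mach = brentq(f, 1e-8, 1.0) if r < rc else brentq(f, 1.0, 50.0)
    return mach * cs / 1e3

for r in (2.5, 5.0, 20.0, 40.0):
    print(f"r = {r:5.1f} Rs  ->  v ~ {parker_speed(r):6.1f} km/s")
```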
The dynamics at play in such regions suggest that they are potential hosts for the development of shearing instabilities (among others) that are continuously driven by the large-scale wind shears at these boundaries. Such processes can potentially introduce Alfvénic perturbations into the solar wind and/or give rise to discrete helical (and pressure-balanced) MHD structures (depending on the aforementioned instability thresholds, growth rates and time-scales of ejection into the wind). Due to the limitations intrinsic to the global MHD approach expressed above, the simulations do not let us directly identify the onset of shear instabilities (such as Kelvin-Helmholtz). The velocity and magnetic field gradients remain, in practice, much smoother than those that the real solar wind can have, and the global flow remains rather laminar (without fully developed turbulence). This issue will be addressed in future work. Interestingly, however, the solar wind shear - due both to the transition from slow to fast wind and to the rotation rate gradients - extends well beyond the height of the highest streamers in our simulations, meaning that the driving mechanism could be effective over a rather large range of heights. One significant feature revealed by our simulations is the formation of a very long column with a strong field-aligned vorticity signature atop a pseudo-streamer (bottom right panel in Fig. 6). Coronal rotation forces a strong shear across the pseudo-streamer axis in the plane perpendicular both to the wind flow and to the magnetic field direction, which acts on the solar wind over large distances. This provides, perhaps, a more efficient mechanism to inject helicity (and/or perturbations in the transverse direction, such as those that characterize switchbacks) than the milder (and effective over shorter distances) shearing present at the boundaries of large equatorial streamers. During its first solar encounter, Parker Solar Probe switched connectivity temporarily from an equatorial streamer boundary region to a low-latitude pseudo-streamer that formed near the small equatorial coronal hole visible in the top left panel of Fig. 8. During this period, PSP magnetic field measurements showed a transition from a period of strong-amplitude and very frequent switchbacks to another with lower amplitude and more widely spaced events, hence supporting the hypothesis above. Our results therefore give support to a hybrid view of the origin of magnetic switchbacks, by providing both a generation mechanism acting in the low corona, and a way to sustain (if not amplify) at least part of them on their way through the high corona and heliosphere. A more detailed investigation of these hypothetical mechanisms is yet to be undertaken, and will require substantial modelling efforts. We believe, however, that the scenario we describe in this manuscript can shed light on how different characteristics of SBs (generation, propagation) come together, and also provides new insights into the mechanisms related to plasma transport between closed and open field regions of the solar corona. We look forward to Solar Orbiter (Müller et al. 2020) and to its combined in-situ and remote sensing campaigns, which will be made from increasingly higher latitudes and at different phases of the solar cycle, hence providing a unique opportunity to detect wind flows that are formed and accelerated in a larger range of coronal contexts.
Parker Solar Probe will keep reducing its perihelion distance, and will certainly provide a closer look into the effects of solar wind shear and rotation on increasingly more pristine wind flows.
13,679.4
2021-04-16T00:00:00.000
[ "Physics", "Environmental Science" ]
An Investigation of the Digital Teaching Book Compared to Traditional Books in Distance Education of Teacher Education Programs

The present study aimed to investigate the efficiency of digital teaching materials for higher education programs. The study used a mixed research methodology to gather rich, in-depth data. Twenty participants were chosen from a distance education program of the Pedagogical Formation (2014-2015) at Near East University in Nicosia, North Cyprus. The sample was selected using the purposeful sampling method. Of the participants who took the course (Instructional Technology and Material Design), half used electronic sources and the other half used traditional sources during the distance education. The participants' answers were categorized according to whether they took distance education with digital materials or with non-digital materials. The results indicate that the participants were aware of the facilities offered by e-books and were content with them. The participants' views show that using an electronic book helps them to be successful and interactive in their education. At the same time, using an electronic book gives students the chance to reach multiple sources. The participants' answers in the qualitative data also indicate that using multimedia is necessary to increase students' motivation in their studies, and that electronic books and electronic sources provide unlimited learning platforms for students. Thus, students' level of attention and permanent learning are increased.

INTRODUCTION AND BACKGROUND

In the 21st century, new information and communication technologies have improved rapidly and have influenced systems all around the world. Internet technology has become an inevitable part of our lives and plays an important role in every system we use, including education systems (Lissitsa & Svetlana, 2016). Educational systems worldwide have been influenced by new information and communication technologies. They are growing rapidly and have been required to use these technologies to teach students the knowledge and skills they will need in this age. Thus, the development of Internet technology is clearly visible in the field of education, because it has taken an enormous place in the contemporary education system and Internet technology is a main resource for the current generation's
students (Wang, Hsu, Campbell, Coster & Longhurst, 2014). For example, computer-mediated communication brings a new dimension to educational systems, and computer-mediated communication in the educational system is one of the well-known developments of Internet platforms for teachers and students, as well as for distance education. In this context, improved distance education in developing universities provides significant and meaningful advantages for both undergraduate and graduate students. For this reason, the types of teaching materials have a significant meaning for the successful education of students in the distance education system as well as in the traditional education system. With the help of the Internet and Internet technology, the facilities and types of teaching materials have changed rapidly and the alternatives available have increased. This is because technology has a great influence on people who are looking for alternatives to meet their needs in every phase of their lives.

As mentioned above, computer-mediated communication brings a new dimension to education systems. Indeed, the history of computer-mediated communication is not new, but it becomes clearer in the light of developments in the field of communication, new communication technologies and their use in education systems. In the first place, computer-mediated communication provides the chance for students to be interactive, rather than passive as in traditional education. Students no longer resemble the older generations of students who were taught in traditional ways. Today's students have the chance to reach information in a variety of ways, and in the field of education computer-mediated communication is being used as a new teaching and learning platform and resource (Chang, 2007). Thus, computer-mediated communication has become a starting point for distance education, and it has recently become more popular and has attracted large numbers of people with the development of Internet technology (Riva & Galimberti, 1997). By using computer-mediated communication, students initially became interactive in their relationships, but now the rapid development of computer-mediated communication has led them to communicate simultaneously (Wood, 2015). This development shifts to education systems; as mentioned above, distance education is a well-known form of computer-mediated education. In this sense, Internet-based technologies and the World Wide Web are the main foundation for learning platforms, especially online learning platforms. Internet technology has become an inevitable part of the field of education because it provides more choices and greater alternatives for flexibility in the process of teaching and learning. Indeed, ways of learning and platforms of learning are changing with the development of Internet technology, since education has been reformed with the birth of the Internet (Conley & Udry, 2010). Thus, learning is now not merely based on traditional ways of teaching.

State of the literature
• The literature asserts that the Internet and new developments in technology have become an inevitable part of education systems. Therefore, every level of the education process needs to be updated in line with new developments in technology.
• The literature points out that learning and teaching are influenced by new technological developments. For example, the learning platforms are changing from offline platforms to online platforms. The importance of distance education is also highlighted.
• The literature shows that distance education platforms carry an important meaning and are available for everyone who wants to study, because online education and online sources provide alternatives to conventional teaching and learning.

Contribution of this paper to the literature
• This study contributes to the place of online platforms for education and highlights the role of Moodle, YouTube and BigBlueButton in distance education.
• The study points out the role of multiple platforms for interactive learning experiences in increasing the level of self-learning and the motivation to achieve the expected success with respect to electronic resources.
• The study asserts that using electronic resources instead of traditional sources tends to increase the level of success and motivation. Therefore, InDesign software was used to design an electronic book for the sample education program.

Almost all students are familiar with Internet technology and they generally use Internet facilities to meet the needs of their schools. Virtual teaching is taking place with respect to Internet technology. For example, methods of teaching are changing 'dramatically from plain lectures to multimedia presentations' (Tham & Werner, 2005). The development of Internet technology can be assumed to be an essential platform to increase the efficiency of teaching and learning.

In addition to the contribution of traditional classrooms, with the help of Internet technology distance education has now become a popular method of learning, and nowadays distance education can be assumed to be an important bridge to provide higher education for many universities (Hassan, Hassan, Dahalan, Zakaria & Wan Mohd Noor, 2009; Meyer, 2002). Thus, distance education has the common characteristics of traditional courses and helps students to meet new variables in teaching and education (Benigno & Trentin, 2000). Distance education is well known as computer-mediated communication in the field of education. The formalization of instructional learning is achieved with the development of Internet technology. Thus, students do not have to limit themselves to particular times or geographic situations. With the help of distance education, students can attend courses online (King, Young, Drivere-Richmond & Schrader, 2004). The power of distance education in expanding students' time and geographic possibilities provides convenient ways for learners to learn (Moore & Kearsley, 1996). Distance education does not replace campus education; however, it does provide flexibility with attractive alternatives for learners who are not able, or do not want, to participate in offline, on-campus education. Large numbers of programmers have worked to create distance education platforms for students. Distance education has become one of the most appropriate elements in lifelong learning (Schneller & Holmberg, 2014).
Moodle is one of the well-known online education platforms. It was developed by Martin Dougiamas, and its name is an abbreviation of 'Modular Object-Oriented Dynamic Learning Environment'. Moodle is one of the most widely used distance education platforms; it is a course management system and a platform for delivering online education and distance education. Moodle also lets instructors plan their courses for the students (Yousif, 2012). It is a course management system accessed via the Internet and a free web application, which teachers and educators can use to create effective online learning sites. Moodle allows any user with programming knowledge to modify the environment according to the users' needs. In the higher education environment, Moodle has been used to conduct courses fully online or to support on-campus teaching and learning (Moodle, 2017).

Another possible distance education platform is YouTube, which is a website that includes thousands of videos. YouTube is an excellent resource for teachers and learners in every discipline (Robinson, 2011). "YouTube makes new demands on learning and they provide new supports to learning, even as they also dismantle some of the learning supports upon which education has depended in the past" (Duffy, 2008). This is because distance learning has moved towards online delivery or a combination of various media (Motteram & Forrester, 2005).

BigBlueButton (BBB) is another online education platform, a web-based program which enables instructors to set up an online class collaboration session. People can participate in the sessions without having to come onto campus. Thus, BBB gives instructors the ability to interact with their students live, rather than relying on cuLearn announcements, emails, or waiting until the next class to deliver pertinent information (BigBlueButton, 2017).

Electronic books (e-books) have now started to be used in education and have quickly become popular in higher education. According to Amazon, in 2011 their customers bought more e-books than printed books (Cain, Miller & Bosman, 2011). More users can access e-books whenever and wherever they wish by downloading the content to their devices from any online site which offers such files (Wu & Chen, 2011; Vasileiou & Ali, 2009; Gebregriorgis & Altmann, 2015). Daily print reading may be undergoing a dramatic transformation to reading in digital format because of the widespread use of the Internet and mobile devices. Electronic textbooks can also be a more powerful learning resource if they provide interactive learning activities, video segments, or review quizzes to reinforce important instructional concepts. These facilities help to enhance learning resources and give them the potential to accelerate student learning (Rickman et al., 2009). The InDesign program was used in this research to design the electronic book. InDesign is becoming more popular every day because of its accessibility and simplicity. The program is used to lay out pages, which provides users with various possibilities, including editing text and graphics and applying transparency effects, in order to export files into a .pdf format. With the help of the program, users have numerous possibilities for creating and editing objects, graphics and text (Brajković, 2012).
Today, Internet technology plays an important role in education platforms and the number of available facilities has increased. The views of future teachers on electronic resources in the education system are crucial, because they will encounter Internet technology facilities in the education environment. Therefore, the aim of the present study is to reveal the importance and efficiency of using electronic sources (an electronic book designed by the researcher, together with prepared electronic sources) for graduate students who study in a distance education pedagogical formation certificate program.

The objectives of the present study:
• To reveal participants' views on using electronic sources;
• To compare participants' views on using electronic sources and traditional sources;
• To show the importance of Internet technology in post-graduate education programs.

To foster learning through the use of technology, it is useful to examine the pedagogical principles behind teaching and learning with the Internet and computer technologies. The motivation of the research was to emphasize the place of novelties in life, as well as the continuing need for technology in every platform of life, not only in education. For simple information, it is possible to use social networking sites for content, and they are an easy way of learning when students need to learn fast. The students are of an age group that is used to learning on Internet-based platforms. Therefore, an Internet-technology-based source was the main requirement for this group of students, providing them with the chance to find information all in one place. The design of the electronic book involves an all-in-one piece of technology for the students, just like a smartphone or tablet.

METHODOLOGY

In this section, the methodology of the paper is given under the subtitles of research methodology, research design, population and sample, data collection, data analysis and ethics. The research methodology of the present study was a mixed methods approach. The aim of the chosen mixed research methodology was to provide comprehensive and in-depth information on the objectives of the present study.

Research Methodology

The research design was a case study used to focus on the participants and to indicate their views and attitudes on using a designed electronic book during the distance education pedagogical certificate program.

Population and Sample

The purposeful sampling technique was used to select the sample of the present study. The aim of preferring this technique was to ascertain the differences among the participants and to collect vital information. The population of the research was composed of graduate students of the Certificate Program of Pedagogy (teacher education), Faculty of Education, Near East University in Nicosia, North Cyprus. This program was approved by YÖK (the Council of Higher Education in Turkey). A total of 450 post-graduate students attended the certificate program in 2014-2015. The selection of students was based on YÖK's criteria for studying in the pedagogical formation certificate program.
The Context of the Pedagogical Formation Certificate Program

This certificate program was based on the suitability of the distance education system for students who aimed to study from abroad. During this certificate program, the post-graduate students took ten courses. The present study was based on the Instructional Technology and Material Design course, which had facilities similar to the other courses but, in addition, had an electronic course book as well as a traditional course book. The distance education program was run over the website uzem.neu.edu.tr, with the support of the Moodle system. Live courses were broadcast via BigBlueButton and recorded courses were provided via YouTube videos each week, with the live conversations of each week also being used as sources for this course. The researcher prepared an electronic course book with InDesign software; this electronic course book is compatible with smartphones and tablets. The program ran over 12 weeks. YouTube videos of the Instructional Technology and Material Design course, live classroom chats, and activities were available for each week of the program. The post-graduate students took two exams during the program for evaluation: a mid-term exam and a final exam.

Data Collection

The researcher prepared an attitude scale and a semi-structured questionnaire. The attitude scale had 30 questions and was designed with the help of a literature review. Six experts in computer technology, teaching, and measurement and evaluation evaluated the draft version of the scale. A pilot study was carried out with undergraduate students who were taking courses supported by distance education. The researcher distributed the scale and interview questions via Google Documents, collected the participants' e-mail addresses, and sent e-mails via the Google Documents facilities. All data were saved automatically and safely.

Data Analysis

The researcher used the Google Documents facilities to transfer the data from the sheet facilities to SPSS, since the SPSS program was used for the analysis of the quantitative data. Descriptive statistical analysis was carried out and general views were reported with the frequency and percentage of the participants' views. Qualitative data were transferred to the Microsoft Office Word program to systematically categorize the data for thematic analysis. Initially, the 20 participants' answers were listed, their answers were coded, and these codes were collected under themes.

... of the participants studied in the Department of Theology; 2 of the participants (10%) studied in the Department of Coaching; 1 of the participants (5%) studied in the Department of Nursing and 1 of the participants (5%) studied in the Department of Psychology. Of these, 80% of the participants had studied in an undergraduate program and 20% in a graduate program. Of the participants, 15% were aged 36+ years, 25% were 30-35 years of age, 45% were 26-30 years of age, and 10% were aged 20-25. Table 2 shows the participants' views; they agreed with almost all the items, except item 23. The participants were not sure about how much e-books increase their level of achievement. The attitude ranges for the participants' answers were determined using Balcı's (2006) ranges.
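The descriptive analysis above amounts to computing each item's mean score and mapping it onto an agreement category. The sketch below illustrates that step; the cut-off values are the commonly used equal-width intervals for a 5-point Likert scale and are an assumption here, since the exact ranges from Balcı (2006) used in the study are not reproduced, and the responses shown are hypothetical.

```python
# Minimal sketch of the descriptive attitude-scale analysis: item means mapped to
# agreement categories. RANGES below is an assumption (standard equal-width 5-point
# Likert intervals), not necessarily the exact ranges used in the study.
import numpy as np

RANGES = [(1.00, 1.80, "Strongly disagree"),
          (1.81, 2.60, "Disagree"),
          (2.61, 3.40, "Undecided"),
          (3.41, 4.20, "Agree"),
          (4.21, 5.00, "Strongly agree")]

def describe_item(scores):
    """scores: list of 1-5 responses for one item -> (mean, category label)."""
    mean = float(np.mean(scores))
    label = next(lab for lo, hi, lab in RANGES if lo <= round(mean, 2) <= hi)
    return round(mean, 2), label

# Hypothetical responses for one item from 20 participants.
item_scores = [4, 4, 5, 3, 4, 4, 5, 3, 4, 4, 3, 5, 4, 4, 3, 4, 5, 4, 3, 4]
print(describe_item(item_scores))   # e.g. (3.95, 'Agree')
```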
Motivation is the first theme raised in the qualitative data. According to the participants' answers, their motivation is affected by the teaching materials: the designed electronic book and the facilities of the programs used in the education program (Moodle, InDesign, YouTube, BigBlueButton) have an important influence on the participants' motivation level. The facilities of the distance education program, together with the links between the electronic book and multiple sources, help the students to learn on multiple platforms. These facilities increase engagement with the lesson and the level of motivation, and they prevent participants from becoming bored while studying. For instance, with the recorded videos of each chapter, the students are not alone while studying and are not restricted in the way they would be when reading the sentences of a traditional book.

Lack of restriction (unlimitedness) is another theme revealed in the qualitative part of the present study. The findings show that the facilities of the electronic book designed for material design in teacher education help the students to reach multiple platforms while studying. These facilities are linked to the development of the Internet and Internet technology; therefore, the students have numerous chances to engage with the course and to reach several materials related to it.

Table 2 items (item, mean, interpretation):
I1: I think that permanent information can be easily reached via e-book features (3.9, Agree)
I2: I think that using e-books increases the level of motivation (3.75, Agree)
I3: I think that using e-books is useful for learning (4.15, Agree)
I4: I think that using e-books is helpful to increase the level of achievement (3.6, Agree)
I5: I think that using e-books is useful, enjoyable and easy (3.9, Agree)
I6: I think that using e-books can be without time and place limitations (3.8, Agree)
I7: I think it is useful to buy and download e-books in e-learning (3.8, Agree)
I8: I think that graphics and sound applications of e-books motivate one to learn in e-learning (3.7, Agree)
I9: I think that e-books can be chosen for every level of sufficiency (3.9, Agree)
I10: I think that reading e-books, which have high-resolution graphics and sound, is more enjoyable (4, Agree)
I11: I think that listening activities in e-books are useful in e-learning (3.8, Agree)
I12: I think that reading activities of e-books are useful for e-learning (3.7, Agree)
I13: I think that video activities of e-books are useful for e-learning (4.05, Agree)
I14: I think that using e-books is useful to have permanent information in e-learning (3.5, Agree)
I15: I like using e-book opportunities to receive feedback and receive replies (3.95, Agree)
I16: I think that I can control the learning process with a single process (3.4, Agree)
I17: I think that using e-books is helpful for creative and critical thinking (3.6, Agree)
I18: I think that e-books draw the attention of targeted students in light of detected educational messages (Agree)
I19: I think that e-book learning with different options presented by selecting appropriate categories can target educational messages (3.9, Agree)
I20: I think that e-books result in access to information and if you want to learn they increase your confidence (3.82, Agree)
I21: I think that we should encourage the use of e-books for distance education (4.16, Agree)
I22: I think that reaching permanent information is easy using e-books (3.5, Agree)
I23: I think that using e-books increased my level of achievement (3.32, Undecided)
I24: I think that using e-books generally increases my level of motivation (3.6, Agree)
I25: I think that I will suggest using e-books to my friends (Agree)
Multimedia is the third theme in the qualitative part of the present study. Electronic books can be used anywhere on electronic devices (i.e. personal computers, netbooks, notebooks, tablets, smartphones, etc.). This means that there are no limitations on reaching an electronic book, and it can be read everywhere. At the same time, today's students, and people in general, are nearly all used to reading from screens. The mobility of electronic books applies to all sorts of smartphones, netbooks and laptops, as well as personal computers; saving the electronic book once to these devices is therefore enough. Having an Internet connection is not compulsory for reading the electronic book.

Attention is the fourth theme in the qualitative part of the present study. The facilities of the designed electronic book and the opportunities offered by reading it increase the students' level of success. Today's students' interests are linked to today's technology, and in terms of attention the student profile is linked to the technology used in the course, because of the opportunities it provides to catch the attention of the students while studying. For example, the electronic book helps students reach the author of each chapter, to keep in touch and ask whatever they want, and the students can reach the instructor of the course at the same time.

Interactivity is the fifth theme in the qualitative part of the present study. The distance education setting creates interactivity between the students and the instructor of the course.

Permanent information is the sixth theme in the qualitative part of the present study. Using electronic sources helps students to gain the chance to learn permanent information. The participants' claims show similar assertions about the benefits of using electronic books. These six themes were revealed from the experimental group of participants, because they used the electronic book for studying their course. Important factors were the multiplicity of the electronic book, the ease of access and the cheaper cost. These increased the level of motivation to study, and the facilities of the electronic book meant that the participants got more benefit from using the electronic book than from traditional ones. The electronic book was designed for Pedagogical Formation students, as well as Faculty of Education students. This means that the participants were future teachers. Therefore, the participants' views were taken with respect to how to reach students and teach in an effective way.

The control group of participants mentioned their views regarding using traditional books instead of electronic books in distance education. The students expected and needed an easy way of studying the course. The course was based on traditional books and the only alternatives were PowerPoint slides and recorded videos. They are available on Moodle, but they are available separately; the students could not find everything in one place. However, the electronic book provided these facilities for the other group of students.
Discussion

Currently, Internet technology covers an enormous area in the education setting, so it is inevitable that teachers and educators use Internet technology (Pal, Mukherjee, Choudhury, Nandi & Debnath, 2013). The modern learning environment revolves around the use of digital and online tools for education (Mukhlif & Amir, 2017). Besides this, large numbers of universities are based entirely on Internet technology (Sanchez & Hueros, 2010) because they provide distance education (Dahalan, Hasan, Hassan, Zakaria & Noor, 2013). People have changed the way they read books with the advancement of information and communication technology and digitalization (Lee, 2013). Specifically, various forms of data have been transferred to electronic formats that allow users to access information. In education, reading is regarded as the most essential skill for acquiring knowledge and gathering information for academic achievement and research (Alfassi, 2004; Wei, 2005). Therefore, mastering online reading skills can enhance learners' reading ability, improve their reading process and assist them in comprehending any difficult online texts (Noor et al., 2011). In fact, various studies have revealed that reading is a very complex and demanding process, which requires students to actively use metacognitive processes (Mukhlif, 2012). In today's digital age, daily reading may be becoming digital reading. Although non-textual activities, such as watching movies and television and playing games, are not regarded as digital reading, digital reading includes reading e-books, e-mail, websites, and content on social networking services (SNSs). In their article, Hsieh & Dwyer (2009) illustrated the different online reading strategies and different styles that learners use to make their reading useful. In graduate studies, distance education is popular because post-graduate students prefer suitable programs that allow them to study in easy and economical ways (Fernandez, Simo & Sallan, 2009). In this paper, the participants were post-graduate students who attended a distance-learning program to receive a pedagogical certificate to become teachers. Our findings showed that the participants looked for multiple platforms to learn and achieve their certificate. Multiple platforms provide many facilities to increase the level of teaching and learning (Stella & Gnanam, 2004).
The present study considers preference as a broad concept that includes not only simple liking, but also an awareness of the utility or usefulness of the particular media. Preferences have been considered as factors affecting the diffusion of new technology and media adoption. Reading preference changes in a wide variety of circumstances. The preference for digital or print media varies according to the circumstances (purpose, situation and context). In the research, the main multi-platform facilities were provided to the participants via Moodle. Moodle is an important platform for providing multiplicity and variety in an educational setting (Costinela & Luminita, 2011). Cubukcu (2008) argued that access to learning materials via the Internet is now necessary. Designing and using electronic books has advantages and provides benefits to readers, increasing their self-learning (Bravo, Enache, Fernandez & Simo, 2010). At the same time, YouTube video recordings are an alternative way to reach large numbers of students easily (Craciunas & Elsek, 2009). Recent studies have focused on e-book use in the higher-education sector (e.g., Lam et al., 2009; Jou et al., 2016). While e-books have factors or benefits that may assist authors or publishers in developing a successful agenda, the "perceived benefit" is used to explain and predict users' intentions to adopt e-books (Chia et al., 2017). Electronic and soft course notes are also a crucial resource to motivate students to study, without the restriction of time and place (Tang & Austin, 2009). Multiple platforms also bring motivation (Copley, 2007). Motivation is increased via technological variety (Otta & Tavella, 2010). Furthermore, according to previous studies on e-books, the easier it is for an individual to use an e-book, the greater the perceived ease of use, and the greater the usage intention (Chung and Tan, 2004; Wu and Kuo, 2008). Firstly, we found that performance expectancy had a significant impact on usage intention. This is consistent with arguments made in the previous literature (Zhou et al., 2010; Im et al., 2011). Students who are engaged in reading are more likely to spend time reading, and research shows that increased motivation for reading can improve reading competency (Ciampa 2012c; Guthrie & Wigfield 2000; Morgan & Sideridis 2006; Sideridis & Scanlon 2006). Interactive learning has appeared with the help of Internet technology (Vrasidas, Zembylas & Chamberlain, 2003). E-books may include interactive features that support students' reading development, such as highlighted text that is read aloud, opportunities for independent reading, graphics and animations that closely match the text, an individualized reading pace, and dictionary features (Karemaker et al., 2008). Likewise, e-books can support students' comprehension development by providing multimedia and dictionary features that assist them in defining words when they encounter unfamiliar vocabulary (Verhallen et al., 2006). The students can comment under YouTube videos and share their views about the course. Also, the electronic book design can be used in an easy way to increase the level of interaction, because it includes everything in one place. For instance, one can use a Facebook connection to like a page from the electronic book design, YouTube, Moodle and the PowerPoint slides of the course. The electronic book helps students reach the authors of each chapter. Accessibility is an important benefit of the electronic book. Also, the cheaper cost of an e-book is the main variable for most of the students
and most of the students look for the cheapest ways to study; at the same time, saving time is also an important benefit for them (Zoran & Rozman, 2010). Reading is used throughout our everyday lives and includes a variety of texts, both traditional print media and electronic texts. The electronic book provides all these facilities for the students. Boundaries have thus been changed with the help of Internet technology and electronic book designs (Miller & King, 2003).

CONCLUSIONS AND SUGGESTIONS

Conclusions

To conclude, the importance of the newly improved communication and information technologies is highlighted in the present study. In this context, the present study was conducted to reveal participants' views on using electronic sources in a distance education pedagogical formation certificate program, and it noted that the new communication and information technologies have opened new dimensions for education platforms in the 21st century. Therefore, the present study was conducted to reveal new-generation students' views on using new information technologies and developments in higher education (i.e. designing an electronic book, and using electronic books and electronic sources in higher education). As a result of conducting the present study, the well-known forms of new information technologies in the field of education are online education, distance education and digital teaching materials for students, and today's student profile requires these novelties. In this sense, the Internet has become the latest vehicle through which institutions offer credit and non-credit distance learning courses to students all around the world, as mentioned in the literature. This is because the Internet has allowed for a variety of asynchronous (two-way communication involving a time delay between transmission and receipt) as well as synchronous (communication without extended time delay) activities, such as chat sessions and online discussions, which can be used to engage learners in student-to-student, as well as student-to-instructor, interactions. Thus, multiplicity and alternative ways of teaching in the education process have become the main requirements for meeting the individual needs of students, and the students' profiles are such that they need multiplicity in their learning instead of learning in one way. Thus, many students ask for alternative ways to attend and engage in education sufficiently, as discussed in the findings. As a result of the present study, the main source of alternatives and multiplicity in education is the teachers; therefore, teachers and teacher candidates should have a relationship with technology and should use it at a sufficient level in their current and future classes. Also, each teacher should follow technology and apply it in their course materials. Therefore, the digital electronic book was designed to be adaptive for this generation of students, because they are generally digital readers. The Internet and communication technologies also allow the conversion of any class material into digital form. Like previous forms of distance learning, online education allows students to do coursework at times that fit in with their lives and schedules, rather than conforming to a specified class time and location. Today's Internet and communication technologies provide opportunities and alternatives that fit in with the students' reading and engagement profiles for any teaching material. The main point is to supply the needs of the students. Even
simple books are useful for learning something. The needs of students for learning can be met with the help of creative and alternative ways. In this respect, individuality and learning are highlighted in education platforms. Future teachers are made aware of the need for alternatives to address the uniqueness of, and individual differences in, the learning process. At the same time, there is a focus on the need to adapt to the times. In the 21st century, researchers should design electronic books from elementary to higher education to follow this transition from traditional ways of teaching to mobile learning.

Suggestions

Using the Internet and communication technologies is significant all around the world because it provides global platforms for all students. Therefore, using the Internet and communication technologies is a necessity, especially to serve today's student profile and increase students' opportunities for success. Revealing students' views and attitudes on using improved Internet and communication technologies (the designed e-book) shows that teaching materials should be prepared in the light of the new millennium's student profile, who are digital natives, and that multiple platforms and electronic sources should be used in higher education. Indeed, it would be fruitful to pursue further research on using new-millennium technologies to provide sufficient education platforms for today's students, especially in developing countries.

Figure 1. Themes of the participants' views on the benefits of the designed electronic book for the course. Source: Authors' compilation.
Table 1. Demographic information about the participants of the research.
Table 2. Attitudes of the participants towards using electronic books in online learning.
7,558.6
2017-06-15T00:00:00.000
[ "Education", "Computer Science" ]
The Electronic Impact of Light-Induced Degradation in CsPbBr3 Perovskite Nanocrystals at Gold Interfaces
The understanding of the interfacial properties in perovskite devices under irradiation is crucial for their engineering. In this study we show how the electronic structure of the interface between CsPbBr3 perovskite nanocrystals (PNCs) and Au is affected by irradiation with X-rays, near-infrared (NIR), and ultraviolet (UV) light. The effects of X-ray and light exposure could be differentiated by employing low-dose X-ray photoelectron spectroscopy (XPS). Apart from the common degradation product of metallic lead (Pb0), a new intermediate component (Pbint) was identified in the Pb 4f XPS spectra after exposure to high-intensity X-rays or UV light. The Pbint component is determined to be a monolayer of metallic Pb on top of the Au substrate, formed by underpotential deposition (UPD) of Pb after the breaking of the perovskite structure allows migration of Pb2+.
Cesium oleate
In a three-neck 100 mL flask, 0.8 g of Cs2CO3 is mixed with 2.5 mL of OA and 30 mL of ODE. The flask is degassed under vacuum for 30 minutes at 110 °C. The atmosphere is then switched to Ar and the temperature is raised to 200 °C for 10 min, at which point the cesium salt is fully dissolved. The temperature is lowered below 110 °C and the flask is further degassed for 10 min. The obtained gel is used as stock solution.
CsPbBr3 nanocrystals
In a 100 mL three-neck flask, 320 mg of PbBr2 is introduced with 20 mL of ODE. The flask is degassed under vacuum for 60 min at 110 °C. Then, 1 mL of OA is introduced. Once the vacuum level has recovered, 1 mL of OLA is introduced, and quickly after that the lead salt is fully dissolved. The flask is further degassed for another 30 min at 110 °C. The atmosphere is then switched to nitrogen and the temperature raised to 180 °C, and 1.6 mL of the cesium oleate solution is quickly added. The reaction is conducted for 30 s before the heating mantle is removed and the flask is cooled in an ice bath. The content of the flask is centrifuged without addition of a non-solvent. The pellet formed is dried and finally re-dispersed in fresh hexane.
Sample preparation
CsPbBr3 perovskite nanocrystals (PNCs) were spin-coated on an Au substrate (Au evaporated onto a Si wafer) at 2000 rpm for 20 s with an acceleration of 100 rpm/s. The long oleylamine (OLA) and oleic acid (OA) ligands used during synthesis of the NCs were exchanged with short, conductive acetate ligands by dipping the NC thin films in a saturated solution of lead acetate (Pb(OAc)2) in ethyl acetate (EtOAc) for 30 s and then rinsing in pure ethyl acetate to remove unreacted precursor, as reported in ref. 1.
Sample characterization
UV-visible absorption was performed using a JASCO V-730 spectrometer. In-plane X-ray diffraction of spin-coated PNCs of CsPbBr3 on Au was performed using a Smartlab diffractometer with a Cu K-α source. All XPS measurements except those at high X-ray flux were performed at the LowDosePES station 2 of the PM4 beamline at the synchrotron BESSY II operated by the Helmholtz-Zentrum Berlin, with an X-ray flux of 1 × 10⁷ − 1 × 10⁸ photons/s. The high X-ray flux (4 × 10¹³ photons/s) XPS measurements were performed at the TEMPO beamline at the synchrotron SOLEIL, with an MBS A-1 hemispherical analyzer.
For the light exposure experiments, the femtosecond laser system coupled to the LowDosePES station was used. In particular, the fundamental wavelength of the laser was used for NIR exposure, and its third harmonic (344 nm) was used for UV exposure. All core levels were measured before, during and after laser exposure, with a photon energy of 360 eV, using the 360 l/mm grating and a Cff value of 1.2. For the NIR and UV exposure, the measurements consist of a time series including two exposure periods, 30 and 60 minutes, with dark periods before and after. The Pb 4f core level was measured continuously during the light-on periods and the first 15 min of the subsequent dark period, followed by measurements of the other core-level spectra.
Data treatment
The binding energy scale for all measured core levels was calibrated to the Au 4f7/2 peak set to 84.00 eV. The spectra were fitted using Voigt functions with a background deemed appropriate, either polynomial or Shirley-type, using the CasaXPS software (for Figure S6 the fitting was performed using the SPANCF package in IGOR PRO). For the Pb 4f spectra the binding energy positions of Pb0 and Pbint were kept fixed. The position and width of Pb0 were found by fitting the high-fluence UV-exposure spectra, where the Pb2+ and Pb0 components are well separated. With these components fixed, the Pbint component was subsequently added and fitted. Binding energies are within an error limit of 0.05 eV unless otherwise specified. To demonstrate the existence of the intermediate component, a comparison of the Pb 4f spectra of the CsPbBr3 nanocrystals on Au and on ITO (prepared using the same procedure as for Au) after exposure to long, high-intensity UV light is presented in Figure S6. The ITO spectrum was fitted using two components, Pb2+ and Pb0, which are well separated. The purple component is Sn 4s from the substrate, which shows that the nanocrystal film is not fully covering and that we also probe the substrate surface, where the Pbint should appear. The Au spectrum was then fitted using the same parameters as a starting point, keeping the width of Pb0 the same while allowing the Pb2+ component to be slightly narrower (1.2 vs. 1.4 eV FWHM) to match the peak shape; the energy position was also left as a free parameter because the two substrates have different surface interactions. As can be seen in the figure, the fit for ITO is in good agreement with the data, whereas for Au it does not reproduce the region in between Pb2+ and Pb0. Looking at the residuals, it is clear that an extra contribution must be added in between the Pb2+ and the Pb0 components.
Figure S1. XRD pattern of CsPbBr3 NCs. The nanocrystal size (d) was estimated using the Scherrer equation d = Kλ/(β cos θ), with a Scherrer constant (K) of 0.84, λ of 0.154 nm (Cu Kα radiation), β of 0.60° (fitted from the (200) plane) and θ of 30.53° from the (200) plane position. The average size was estimated by taking all Bragg peaks into consideration.
Figure S2. UV-vis spectra of a typical batch of CsPbBr3 NCs from which the band gap calculation was performed, here with the slope at 518 nm. Several batches of nanocrystals were used throughout the experiments, and the error estimate comes from the slightly varying band gap of the different batches.
Figure S5. C 1s spectra before and after UV light exposure. Recorded with a photon energy of 360 eV.
Figure S6. Comparison of Pb 4f spectra of CsPbBr3 nanocrystals on Au and ITO. Recorded using 360 eV photon energy.
Table S1.
Intensity and intensity ratios from the fit in Figure S4, with intensity taken as the area under each peak; columns: PbInt, Pb0, Pb2+, PbInt/Pb0, PbInt/Pb2+.
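For reference, the crystallite-size estimate described in the Figure S1 caption can be reproduced with a few lines of code. The snippet below is an illustrative sketch using the values quoted above (K = 0.84, λ = 0.154 nm, β = 0.60°, θ = 30.53°); it is not part of the original analysis, and it assumes that β is the peak FWHM in degrees, which must be converted to radians.

import math

def scherrer_size(K: float, wavelength_nm: float, fwhm_deg: float, theta_deg: float) -> float:
    """Crystallite size d = K * lambda / (beta * cos(theta)), with beta in radians."""
    beta = math.radians(fwhm_deg)    # peak FWHM converted to radians
    theta = math.radians(theta_deg)  # Bragg angle as quoted in the caption
    return K * wavelength_nm / (beta * math.cos(theta))

# Values quoted in the Figure S1 caption (Cu K-alpha, (200) reflection)
d = scherrer_size(K=0.84, wavelength_nm=0.154, fwhm_deg=0.60, theta_deg=30.53)
print(f"Estimated crystallite size: {d:.1f} nm")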
1,653.4
2024-02-06T00:00:00.000
[ "Materials Science", "Physics" ]
Single-nucleotide polymorphisms of the IL-12 gene lead to a higher cancer risk: a meta-analysis based on 22,670 subjects The associations between interleukin-12 ( IL-12 ) gene polymorphisms and cancer risk have been discussed extensively, with controversial results. Therefore, we conducted the present meta-analysis to better assess the potential roles of IL-12 gene variation in cancer occurrence. Eligible articles were found via PubMed, Medline, EMBASE, Google Scholar and CNKI. Odds ratios and 95% confidence intervals were used to evaluate the associations between IL-12 gene polymorphisms and cancer risk. Thirty-one studies with 10,749 cancer patients and 11,921 healthy subjects were included in the analyses. The overall results showed that cancer risk was increased by IL-12A rs568408 (GG versus GA + AA: P = 0.004; G versus A: P = 0.005) and IL-12B rs3212227 (AA versus AC + CC: P = 0.004; CC versus AA + AC: P = 0.03; A versus C: P = 0.007) polymorphisms. Further subgroup analyses for IL-12A rs568408 and IL-12B rs3212227 revealed that the positive results could be impacted by the ethnicity of the population, cancer type and/or genotyping methods. However, we failed to detect any significant associations between the IL-12A rs2243115 polymorphism and cancer risk in either the overall or the subgroup analyses. The current study suggests that certain IL-12 gene polymorphisms serve as biological markers of cancer susceptibility. INTRODUCTION Cancer is a major threat to public health. According to a recent investigation, over 14.1 million new cases and 8.2 million deaths are caused annually by cancer (Siegel et al., 2016). Furthermore, by 2020, the burden of cancer is expected to rise by 50%, due to the rapidly aging population (Popat et al., 2013). To date, the exact pathogenic mechanisms of cancer remain ambiguous. Although tobacco use, heavy alcohol intake, high caloric diet and chemical dyes have been identified as risk factors for cancer (Jemal et al., 2010), the fact that only a small proportion of individuals exposed to these carcinogenic agents ultimately develops cancer suggests that genetic factors play a crucial part in cancer pathogenesis. In addition, the familial aggregation tendency of cancer has long been acknowledged (Risch, 2001), and a number of genetic variants have been found to be associated with cancer susceptibility in different populations (Mtoert et al., 2014;Jamieson et al., 2015). Nevertheless, the mechanisms of cancer pathogenesis are highly complex, and the genetic determinants involved in cancer development are not fully clarified. Interleukin-12 (IL-12) is a pro-inflammatory cytokine that is mainly secreted by antigen-presenting cells. IL-12 targets T-helper (Th) cells and natural killer cells, and stimulates the synthesis and secretion of interferon gamma (IFN-γ), which is a well established antitumor factor (Croxford et al., 2014). Additionally, lower serum IL-12 levels have been observed in patients with various types of cancer (Green et al., 2012;Stanilov et al., 2012;Tao et al., 2012;Wang et al., 2013;Fang et al., 2015), suggesting that IL-12 functions as a potent tumorsuppressive factor. Biologically active IL-12 consists of two functional subunits, p35 and p40, which are encoded by the IL-12A and IL-12B genes, respectively (Croxford et al., 2014). Since the tumor-suppressive effect of IL-12 is well documented, functional polymorphisms of the IL-12A and IL-12B genes are thought to be good genetic candidates for cancer susceptibility. 
Recently, extensive studies have explored the potential associations of IL-12A and IL-12B genetic variants with cancer risk. Among these, 3'UTR A > C (1188A > C; rs3212227) in the IL-12B gene and 3'UTR G > A (277 G > A; rs568408) and 5'UTR T > G (564 T > G; rs2243115) in the IL-12A gene are the most intensively studied sites. It is well established that the 3'UTR and 5'UTR are important to mRNA stability, which may contribute to gene expression (Wu and Brewer, 2012). Thus, although these variants are located in untranslated regions, some studies note that they are able to impact IL-12 levels. Morahan et al. (2001) reported that IL-12B rs3212227 was responsible for altered levels of IL-12B mRNA expression in cell lines, and, compared to the AA genotype cell line, decreased IL-12B levels were observed in the CC line. Other scientists have subsequently found similar phenomena in different diseases, including cancer, suggesting that serum IL-12 levels could be modulated by this SNP (Seegers et al., 2002;Windsor et al., 2004;Yilmaz et al., 2005;Stanilova et al., 2007;Wang et al., 2013). For IL12A rs568408 and rs2243115, studies on their potential association with IL-12 expression are still at early stage, and no evidence was detected that they affect IL-12 production (Tao et al., 2012;Wang et al., 2013). It is notable that most studies have simply discussed whether the distributions of IL-12 SNPs in cancer patients are statistically different from that in healthy individuals, and the exact underlying mechanisms between these three SNPs and cancer remain unclear. However, the results of relevant studies were inconsistent and inconclusive. Thus, we conducted the present meta-analysis to better assess the relationship between the IL-12 gene polymorphisms and the risk of cancer. MATERIALS AND METHODS Literature search To retrieve all relevant articles, we searched the electronic databases PubMed, Medline, EMBASE, Google Scholar and China National Knowledge Infrastructure (CNKI) without a time limit using the following terms: Interleukin-12, IL-12, Interleukin 12, IL 12, polymorphism, variant, genotype, allele, cancer, tumor, carcinoma, neoplasm and malignancy. Moreover, the reference lists of all retrieved articles were reviewed manually for other potentially eligible articles. Inclusion criteria and exclusion criteria Litera-ture matching all the following criteria was included in the present meta-analysis: (1) published articles; (2) a case-control study of associations between IL-12 gene variants and cancer susceptibility in unrelated cancer patients and control subjects; (3) identification of both genotypic and allelic distributions of the IL-12 gene polymorphisms; and (4) availability of full text in English or Chinese. Abstracts, case reports, case series, pedigree studies, reviews and experts' opinions were intentionally excluded. Additionally, if the same patients were enrolled in multiple studies, only the most recent and complete study was included. It should be particularly noted that, since there is no consensus on how to handle studies with a control group that is not in Hardy-Weinberg equilibrium (HWE), in this meta-analysis we did not exclude studies deviating from HWE as long as they were eligible according to the inclusion criteria and were not of poor quality (Zintzaras and Lau, 2008). 
Data extraction Two of the authors (Shi and Jia) extracted the following information from every included study: name of first author, year of publication, country of origin, ethnicity of the study population, type of cancer, number of cases and controls, genotypic and allelic frequencies in the cases and controls, and P values of HWE in the control group. Quality assessment of included studies The Newcastle-Ottawa quality assessment scale (NOS) was used to evaluate the quality of each included study (Stang, 2010). As a classical rating tool of observational studies, NOS assesses studies from the following three aspects: selection, comparability and exposure. The score range of NOS is from 0 to 9, and studies with a score greater than 7 are assumed to be high-quality. The quality assessment was conducted independently by two of the authors (Shi and Jia), and any discordance between the investigators was resolved by discussion with a third author (Xie) until an agreement was achieved. Statistical analysis Odds ratios (ORs) and 95% confidence intervals (CIs) were used to evaluate the associations between the IL-12 gene variants and cancer susceptibility in dominant, recessive and allelic models. In addition, a raw probability value (P value) of 0.05 or less was considered statistically significant, and was further subjected to Bonferroni correction to account for multiple statistical tests. The significance threshold was set at 0.017 (0.05/3) for a single SNP because three statistical models were performed for each SNP. The chi-square test was used to explore HWE. Q test and I 2 statistic were applied to assess the heterogeneity between studies. If the P value of the Q test was less than 0.1 or I 2 was greater than 50%, the between-study heterogeneity was considered statistically significant and the random-effect model (REM) was employed for the analyses. Otherwise, the analyses were carried out with the fixed-effect model (FEM). Furthermore, subgroup analyses by cancer type, genotype method and ethnicity were conducted to trace the source of the heterogeneity. Sensitivity analyses were performed to examine the stability of the results. Publication bias was tested by funnel plots. All data were analyzed with ReviewManager Version 5.3.3 (The Cochrane Collaboration, Software Update, Oxford, U.K.). Quality assessment of included studies The average NOS score of the included studies was 7.53 (range, 7 to 8), indicating that all enrolled articles were of high quality. Potential biases in the eligible studies mainly originated from unmatched baseline characteristics of the cases and controls, such as age and gender. IL-12A rs568408 polymorphism and cancer susceptibilit Eight studies containing 2,820 cancer patients and 3,134 control subjects were included to assess the association between the IL-12A rs568408 variant and cancer risk. For GG versus GA + AA (dominant model) and G versus A (allelic model), analyses were conducted with REMs because of obvious between-study heterogeneity. For AA versus GG + GA (recessive model), the heterogeneity between studies was mild, and the FEM was therefore used for the analysis. A significant association between the IL-12A rs568408 polymorphism and By subdividing the studies according to genotype method, we observed that polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) was used in six studies, and TaqMan was used in two. 
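To make the pooling procedure described in the statistical analysis section concrete, the sketch below computes a per-study odds ratio with its 95% confidence interval and then pools several studies with an inverse-variance fixed-effect model, reporting Cochran's Q and I². It is a minimal illustration with hypothetical genotype counts, not the RevMan 5.3.3 workflow actually used in this meta-analysis.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI for a 2x2 table: a/b = exposed cases/controls, c/d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

def fixed_effect_pool(tables):
    """Inverse-variance fixed-effect pooling of log odds ratios, with Q and I^2."""
    logs, weights = [], []
    for a, b, c, d in tables:
        or_, _, _ = odds_ratio_ci(a, b, c, d)
        logs.append(math.log(or_))
        weights.append(1 / (1/a + 1/b + 1/c + 1/d))  # weight = 1 / Var(log OR)
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))
    i2 = max(0.0, (q - (len(tables) - 1)) / q) * 100 if q > 0 else 0.0
    se = math.sqrt(1 / sum(weights))
    return (math.exp(pooled), math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se), q, i2)

# Hypothetical counts per study: (GG cases, GG controls, GA+AA cases, GA+AA controls)
studies = [(120, 150, 380, 350), (90, 110, 310, 290), (60, 85, 240, 215)]
print(fixed_effect_pool(studies))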
The heterogeneity between the studies was trivial in the PCR-RFLP group, and FEM-effects metaanalysis pooled significant results in both GG versus GA + AA (P < 0.00001, OR = 0.67, 95%CI 0.59-0.76) and G versus A (P < 0.00001, OR = 0.72, 95%CI 0.65-0.80), which were consistent with the overall results. In the TaqMan group, REMs were carried out in the analyses because the between-study heterogeneity remained obvious. However, no significant association was detected in this subgroup (Table 4). IL-12A rs2243115 polymorphism and cancer susceptibility Estimation of the association between the IL-12A rs2243115 polymorphism and cancer risk was based on seven studies, including 2,401 cases and 2,782 controls. FEMs were applied for all analyses, because no obvious between-study heterogeneity was detected. No significant associations with cancer risk were observed for the IL-12A rs2243115 polymorphism (Fig. 3). Sub- group analyses were performed by the genotype method and were mostly under FEMs, because there was negligible heterogeneity between the studies. Nevertheless, neither the PCR-RFLP group nor the TaqMan group showed positive results in any of the comparison models (Table 4). Sensitivity analyses and publication bias Sensitivity analyses were performed by removing one individual study at a time. The overall results were not impacted by any individual study, including those that were not in HWE for all of the three analyzed SNPs. No other changes in the results were found in the subgroup analyses. Funnel plots were used to evaluate the potential publication bias. A visual inspection of the funnel plots revealed no apparent asymmetry for any of the IL-12 polymorphisms, and these results suggest that there was no significant publication bias in the present metaanalysis ( Supplementary Fig. S1). DISCUSSION IL-12 is an anti-tumor cytokine with multiple functions. First, IL-12 can stimulate the synthesis and secretion of IFN-γ, which serves as a pro-inflammatory and tumor-suppressive factor (Croxford et al., 2014). In addition, it was found that p53, which triggers apoptosis in cancer cells, was up-regulated by the secretion of IFNγ, thus, subsequently, inhibiting tumor growth (Takaoka et al., 2003). Second, IL-12 can promote Th1 cell differentiation by stimulating the synthesis of interferon regulatory factors 1 (IRF1) and 4 (IRF4) (Lehtonen et al., 2003). Additionally, when IL-12 binds to its receptor (IL-12R), binding sites form for Tyk2 and Jak2 kinases. These kinases are able to recruit transcription factor STAT4, which accumulates at the promoter region of IL-2R. Consequently, the expression levels of IL-2R increase, leading to enhanced Th1 cell proliferation and Th1-cell related immune responses, especially cytotoxic reactions (Lee et al., 2012;Stark et al., 2013). Third, IL-12 can inhibit the formation of tumor microvessels by reducing the production of vascular endothelial growth factor (Cavallo et al., 2001). As mentioned above, the IL-12 p35 and IL-12 p40 subunits are encoded by the IL-12A and IL-12B genes, respectively (Croxford et al., 2014). Consequently, IL-12A and IL-12B genetic polymorphisms, which may affect the expression levels and normal functioning of IL-12, are thought to be implicated in the occurrence and development of various types of cancer (Yuzhalin and Kutikhin, 2012). Among these, IL-12A rs568408, IL-12B rs2243115 and IL-12B rs3212227 polymorphisms were the three most investigated sites. 
IL-12A rs568408 and IL-12B rs3212227 are located in the 3'-untranslated region (3'UTR). These two polymorphisms may disrupt exonic splicing and influence the production level of IL-12 (Chen et al., 2009; Liu et al., 2011). IL-12A rs2243115, however, is situated in the 5'UTR, and its functional significance has not been reported. Despite these potential molecular mechanisms, the results of relevant studies remain conflicting. Therefore, we conducted the present study in an attempt to obtain a more conclusive result.
Fig. 4. Forest plots of the association between the IL-12B rs3212227 polymorphism and cancer risk. (A) Forest plot of AA versus AC + CC. (B) Forest plot of CC versus AA + AC. (C) Forest plot of A versus C.
The results of the current meta-analysis support the idea that the IL-12A rs568408 and IL-12B rs3212227 polymorphisms significantly correlate with cancer risk in Asian populations, especially when the PCR-RFLP genotyping method is used. In addition, the A allele of the IL-12A rs568408 variant and the C allele of the IL-12B rs3212227 variant contribute to an increased risk of cancer. However, no associations were detected between the IL-12A rs2243115 variant and cancer risk. It is worth noting that Zhou et al. (2012) performed a meta-analysis of these three IL-12 polymorphisms in 2012. Compared with that study, similar results were found for the IL-12A rs2243115 and IL-12B rs3212227 polymorphisms in the present meta-analysis, whereas the results for IL-12A rs568408 were opposite. Since the sample size of our analysis was much larger than that of the previous one (22,670 subjects versus 13,875 subjects), our findings should be more reliable. Further subgroup analyses yielded similar positive results for the IL-12A rs568408 and IL-12B rs3212227 polymorphisms in Asians; however, no significant associations were found in Caucasians. In addition, the IL-12B rs3212227 polymorphism was found to be implicated in nasopharyngeal carcinoma and cervical cancer, but not in hepatocellular carcinoma, colorectal cancer, brain tumor, skin cancer, breast cancer, esophageal cancer or gastric cancer. The between-study heterogeneity remained significant for the IL-12A rs568408 and IL-12B rs3212227 polymorphisms in the different ethnicity, TaqMan and cancer-type subgroups, suggesting that the correlations between the IL-12 genetic polymorphisms and cancer risk may also be modulated by other, unevaluated factors. Several limitations of our study should be noted. First, the number of studies investigating the associations between the IL-12A rs568408 and rs2243115 polymorphisms and cancer risk is still limited, and the sample sizes of several of the included studies were not sufficient, which precluded us from drawing definite conclusions. Second, our results were based on unadjusted estimates, because the majority of the included studies failed to report baseline characteristics of the individuals, such as age, sex, smoking status and eating habits, and the lack of analyses adjusted for these potential confounding factors might affect the reliability of our results. Third, although funnel plots revealed no apparent publication bias, we still could not eliminate the possibility of publication bias, because only published studies were included.
Fourth, all included studies were published in English or Chinese; therefore, some qualified articles in other languages may have been missed. Fifth, the relationship between a certain gene polymorphism and cancer risk could be affected by gene-gene or geneenvironment interactions. It is possible that a particular polymorphism may be associated with cancer susceptibility, but, due to interactions with multiple genetic and environmental factors, the association would no longer be observed. Given these limitations, we should interpret the current results with caution. In conclusion, the present meta-analysis of 31 studies demonstrated that the IL-12A rs568408 and IL-12B rs3212227 polymorphisms serve as promising candidate biomarkers of cancer susceptibility in the Asian population. Further studies with larger sample sizes from different populations are warranted to confirm our results. Moreover, since interleukins play a crucial role in regulating anti-tumor immune responses, future investigations are needed to explore the potential roles of other interleukin gene polymorphisms in cancer pathogenesis.
3,848.6
2017-04-12T00:00:00.000
[ "Biology" ]
Liquid metals as soft electromechanical actuators
Leveraging the unique properties of liquids, such as surface tension, capillary action, reconfigurability, nearly unlimited stretchability, and viscosity, has enabled the development of a wide range of soft actuators, presenting vast potential to revolutionise wearable healthcare devices, manufacturing, reconfigurable electronics, and robotics. Gallium (Ga) based liquid metals (GaLMs) are a remarkable family of functional fluidic materials that can be actuated electrically to realise electromechanical functions. Such actuators are simple, highly responsive, highly controllable, and reversible, which has led to the creation of useful devices such as reconfigurable antennas, artificial muscles, electrical switches, and soft robots, just to name a few. Herein, this review succinctly and critically summarises recent advances in research on using GaLMs as electromechanical actuators. First, the properties of GaLMs are introduced, and then the methods for their electrical actuation and the applications thereof are discussed. Finally, an outlook is offered, highlighting the research challenges that liquid metal electromechanical actuators face on the way to becoming commercial devices.
Electromechanical soft actuators transduce electrical energy into mechanical motion or deformation. Among all stimuli, electrical signals have numerous advantages; for example, they allow for easy control of magnitude, frequency, and phase. Additionally, components for generating electrical stimuli are compatible with conventional electronics and could therefore be readily integrated with, and powered by, batteries. The design, synthesis, and integration of non-ionic (e.g., dielectric elastomers, electrostrictive polymers, liquid crystal elastomers, etc.)4,13,18-20 and ionic (e.g., ionic conducting polymers and their composites, ionic gels, etc.)21-23 electro-responsive polymers, as well as fluids that can change viscosity, surface tension, and pressure upon electrical stimulation,5,24-27 have been extensively explored for making soft actuators over the past decade. Liquid-enabled soft actuators are able to make use of the inherent advantages of fluidic systems. For example, fluids can deform freely without mechanical constraints on movement and are naturally self-healing. Thus, incorporating electro-responsive fluids in solid materials, or using fluids themselves as actuators, minimises mechanical fatigue and heals electrical breakdown.3 Moreover, the interfacial tension of fluids can be readily tuned electrically using the electrowetting effect, making fluid actuators an attractive alternative in small-scale robotic systems.2,25 GaLMs provide many unique properties that make them particularly suited for use in soft actuators. For instance, GaLMs have the highest surface tension (>600 mN m⁻¹) among all liquids and a negligible vapour pressure even at high temperature (>500 °C).28 They are also immiscible with aqueous and organic fluids. More importantly, their metallic properties enable many extraordinary effects that cannot be reproduced using conventional fluids, such as electrochemical oxidation/reduction, continuous electrowetting, and the induction of the Lorentz force. These effects have enabled unparalleled actuation methods that have been harnessed for making innovative electromechanical soft actuators, and eventually lead to the construction of more complex systems for various applications (Fig. 1).
This review seeks to summarise and highlight the fundamental principles and applications of GaLM enabled electromechanical actuators. We will first briefly discuss the unique properties offered by GaLMs that can be harnessed for realising electromechanical effects. After this, the different electronically controlled actuation methods are described, alongside examples of its various uses in robotic, electronic, and microfluidic systems. Finally, we offer a perspective on the opportunities and challenges for the future development of such GaLM-based electromechanical actuators. Brief summary of the properties of gallium-based liquid metals GaLMs includes alloys such as eutectic gallium indium (EGaIn, 75 wt% Ga, 25 wt% In) 29 and Galinstan (68% wt% Ga, 22 wt% In, 10 wt% Sn). 28 Pure Ga has a melting point of 29.8 1C, 29 slightly above room temperature, whereas EGaIn and Galinstan have melting points of 15.7 1C and B11 1C respectively. 29,30 In contrast to mercury, GaLMs have a very low toxicity. 31 They also have a negligible vapor pressure, 28,29 meaning there is no danger of accidental inhalation. Another feature of GaLMs is its oxide layer 'skin' which quickly forms in air with a thickness of between 0.7-3 nm. 32 Oxide formation occurs even at very low concentrations of oxygen (Bfew ppm). 32 The oxide layer reduces the surface tension from 4600 mN m À1 for bare GaLMs 33,34 to B350 mN m À1 . 35 Interfacial tension is also reduced when GaLM is immersed in electrolyte (down to 415 mN m À1 in 1 M sodium hydroxide, NaOH) 36 due to electrical double layer formation. The oxide layer can be removed chemically by using acid or base, or electrochemically. 37,38 With the oxide layer present, the LM has increased adhesion, which can be used for patterning the LM. 39 However, the stabilizing effect and adhesion of the oxide also stops the bulk metal from flowing, so in some circumstances the oxide layer has to be continuously removed, or a surface coating applied to prevent the oxide sticking. 40 GaLMs also readily alloy with some metals, such as copper, 41 and aggressively corrodes some aluminium alloys. 42 Other metals it has been shown to be non-corrosive towards however. 42 A comprehensive review on the properties of GaLMs can be found elsewhere. 32,[43][44][45] The high conductivity and fluidity of GaLM, and its oxide layer mean that it can be actuated using various electrically based means. As the focus of this review is on electrically based actuation, other ways to actuate GaLM (such as by utilizing the redox reaction between GaLM and aluminium in NaOH Fig. 1 The unique properties of GaLMs can be harnessed for realising various effects, enabling the formation of numerous soft electromechanical actuators and eventually lead to applications in more complex systems. solution) 46 will not be discussed. See two recent review papers for more information on these other actuation regimes. 47,48 In this review, the focus is on GaLMs. Both of the terms 'LM' and 'GaLM' are used in the review but should be understood to refer to GaLMs only. See elsewhere 49 for a review that discusses actuation by interfacial tension modulation of both mercury and GaLMs. Interfacial tension modulation by electrochemical oxidation and reduction GaLMs exhibit a huge change in interfacial tension from 4400 mN m À1 to B0 mN m À1 under certain conditions. 38 This change is fast ({1 s), reversible, and only requires a small voltage (B1 V). 
The interfacial tension reduction is achieved by electrochemically oxidising the LM droplet while in electrolyte, making an oxide layer grow. If the electrolyte used is an acid or base that removes the oxide layer, then the oxide is continually removed as it is formed. This means the LM has no impedance to its spreading, and with such a low interfacial tension, it flattens and forms random shapes ( Fig. 2A). 50 The interfacial tension can be increased back to its maximum value of 4400 mN m À1 by removing the oxidative potential if in acid or base, or by applying a reductive potential. This makes the droplet reform its spherical shape. The interfacial tension change arises from the oxide layer acting as a surfactant and compressive stresses resulting from oxidation. 50,51 The speed and repeatability of this effect have been showcased by using it to make droplets that jump over 5 mm in electrolyte (Fig. 2B), 52 and a beating heart gallium droplet that is capable of 610 beats per minute. 53 The shape of LM in 2D can be controlled electrochemically using oxidation and the Marangoni effect. 54 The Marangoni effect is when liquid flows from regions of low interfacial tension to regions of high interfacial tension. To achieve the shape control, a LM droplet was placed in a round dish with an anode touching it, immersed in NaOH electrolyte, and with cathodes at the edge of the dish. By applying a potential, oxide forms preferentially where the LM side is facing a cathode. This reduces its interfacial tension locally, and results in Marangoni flow towards where the LM has a higher interfacial tension at the centre of the dish. This makes the LM spread towards the cathodes (Fig. 2C). The LM could be spread to up to three different cathodes. The minimum angle between two protrusions was 301. Using a feedback control system, the LM stayed in position for up to 12 s. A switch that makes use of oxidation and reduction to coalesce and split LM droplets has also been made. 55 The switch consists of electrolyte encapsulating two LM droplets wetted on two copper electrodes, with two additional outer electrodes on either side. The LM preferentially wets the copper by alloying, 41 preventing the LM moving off it. When the switch is off, the droplets are separate. To turn the switch on, an oxidative potential is applied to one droplet and a reductive potential to the outer electrode on the far side. This causes one droplet to oxidise and spread out, merging with the second droplet (Fig. 2D). To split the droplets, a positive potential is applied to one outer electrode and a negative potential is applied to the other outer electrode. This causes one side of the merged droplet to oxidise, and one side to be reduced, which makes the droplet unstable and splits. Soft artificial muscles have been made that make use of the change in surface tension of LM. [56][57][58] As LM preferentially wets copper by alloying, two copper pads can be used to hold the droplet in place for actuation (Fig. 2E). The droplet is then immersed in an electrolyte, with a periodically oscillating potential applied to it. The electrolyte is usually a caustic acid or base to increase the speed at which the oxide layer is removed, although an alternative electrolyte (such as sodium chloride solution) has been shown to work if a reductive potential removes the oxide layer. 56 With the LM droplet pinned from above and below, the force generated can result in either pushing or pulling of the pads, depending on the configuration. 
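For a rough sense of scale, the force available from a single pinned droplet can be estimated as the interfacial-tension change acting around the perimeter of the wetted copper pad. The sketch below uses assumed values (a 1 mm pad contact radius and a tension swing of about 0.4 N m⁻¹) and a deliberately simplified perimeter model; it is an order-of-magnitude illustration, not a result from the cited studies.

import math

# Assumed values for a single electrochemically actuated droplet "muscle"
contact_radius_m = 1e-3       # radius of the wetted copper pad (assumed)
delta_gamma_n_per_m = 0.4     # interfacial-tension swing on oxidation/reduction (assumed)

# Force ~ tension change acting along the contact perimeter of the pad
force_n = delta_gamma_n_per_m * 2 * math.pi * contact_radius_m
print(f"Estimated force: {force_n*1e3:.1f} mN (~{force_n/9.81*1e3:.2f} g-equivalent load)")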
56 Artificial muscle droplets can be used in parallel to increase the force or in series to increase the stroke (Fig. 2F). The artificial muscle droplets are unstable if the distance between the pads becomes too great, as this causes the LM droplet to be pulled apart during actuation. The maximum distance between the pads is ~2-4 mm for a 1 mm radius LM droplet muscle, depending on its oxidation/reduction state and the load force.57 Another issue is that electrolysis of the electrolyte during the electrochemical reactions produces hydrogen as an unwanted by-product. The gas generation can degrade the performance of the actuator, and if the voltage used is too high (~8 V) the amount of bubbles generated can block the actuation.56 Gas build-up would be a major issue in sealed contractile units, and requires release valves to be used, or an alternative electrolyte that does not create a gaseous by-product. The artificial muscles are able to actuate with strains up to 87%, with negligible response time.56 Voltages typically used for actuation were from 4 V to −0.5 V. The artificial muscle was able to actuate repeatedly at 0.5 Hz for 2 hours with no loss of performance. LM artificial muscles were used to make an untethered swimming fish (Fig. 2F), a bimodal display, a cargo carrier and a reconfigurable optical reflector.56 The work density of such LM muscles created with 1 mm diameter droplets is ~100 kJ m⁻³.57 Reducing the size of the droplets will result in a greater energy density due to the increase in the surface area to volume ratio. Note that interfacial tension scales linearly with length (L), whereas forces such as electrostatic and magnetic forces scale as L² and L³ respectively.2 This means that interfacial tension forces dominate at small scales. A droplet size of 2 mm will result in a theoretical energy density of 10³ kJ m⁻³, greater than that of dielectric elastomers and shape memory alloys.57
Continuous electrowetting
When a LM droplet is immersed in an electrolyte, it gains a net surface charge. For example, when in NaOH solution, chemical reactions result in the formation of [Ga(OH)4]⁻ on the LM surface. An electrical double layer (EDL) is then formed around the droplet as oppositely charged ions in the electrolyte are attracted towards it (Fig. 3A).59 The interfacial tension varies with the potential across the EDL. The Young-Lippmann equation describes the variation of the interfacial tension γ:
γ = γ₀ − ½ C V_EDL²
where γ₀ is the maximum interfacial tension at the point of zero charge, C is the capacitance per unit area of the EDL, and V_EDL is the voltage across the EDL.60 When a potential is applied across a LM droplet in an electrolyte, the low electrical conductivity of the electrolyte means that there is a potential drop along the channel. The LM, however, has a high conductivity, so its potential can be regarded as the same at all points on the droplet. Therefore, the potential difference across the EDL is higher at one side of the droplet than at the other. This results in a non-uniform interfacial tension. With no external applied potential, the interfacial tension everywhere on the droplet is uniform. When there is an interfacial tension gradient across the LM droplet, Marangoni flows are generated along the surface that drive the droplet towards the anode (Fig. 3A and B). It also results in flow of electrolyte from the lower interfacial tension side of the droplet to the higher interfacial tension side. This type of LM actuation is known as continuous electrowetting (CEW).
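As a small numerical illustration of the Young-Lippmann relation above, the sketch below evaluates γ = γ₀ − ½CV² at two different EDL voltages, representing the two ends of a droplet sitting in a potential gradient; the resulting tension difference is what drives CEW. The capacitance and voltage values are assumed for illustration and are not taken from the cited work.

def young_lippmann(gamma0, c_edl, v_edl):
    """Interfacial tension gamma = gamma0 - 0.5 * C * V^2 (SI units)."""
    return gamma0 - 0.5 * c_edl * v_edl**2

gamma0 = 0.5               # N/m, near-maximum tension at the point of zero charge (assumed)
c_edl = 0.1                # F/m^2, EDL capacitance per unit area (assumed order of magnitude)
v_front, v_back = 0.1, 0.4  # V, EDL voltages at the two ends of the droplet (assumed)

d_gamma = young_lippmann(gamma0, c_edl, v_front) - young_lippmann(gamma0, c_edl, v_back)
print(f"Interfacial tension difference across the droplet: {d_gamma*1e3:.1f} mN/m")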
CEW offers a large degree of control over the movement of the LM droplet. LM can be controlled in two dimensions, 61 and even made to travel up a slope. 62 Accurate manipulation of multiple LM droplets in 2D has been achieved using infrared lasers to selectively trigger phototransistors. 63 The experimental set up used contains a grounded graphite electrode, which surrounds a circuit board immersed in NaOH solution, with a phototransistor and copper electrode array on it. A transparent epoxy is coated on the circuit board, with only the tips of the electrodes exposed, meaning the LM droplets are able to move freely. When a laser is shone onto a phototransistor, the appropriate copper electrode is activated, and a LM droplet is actuated towards it by CEW. This can be used to control the position of multiple LM droplets concurrently. It can also be used to merge droplets by moving them at speed towards one another. If the laser is left on a phototransistor while the LM is situated on its electrode, the LM is oxidised, flattens and spreads. Removing the laser stops the oxidation, and the NaOH solution causes the oxide to be removed, beading the LM back up to a spherical shape. The LM can be split into two droplets during this process if it spread enough during its oxidation. The electrolyte used affects the actuation of the LM. For example, using a hydrochloric acid (HCl) solution rather than NaOH results in the LM surface becoming positively charged 64 (it is negatively charged in NaOH solution). An EDL then forms which has a reversed polarity compared with that made in NaOH solution. This causes the LM to travel to the cathode rather than the anode. The LM also moves slower and needs a greater potential however (25 V minimum for actuation compared with 2 V). 65 The worse performance is due to the low surface activity of chloride ions. 36 Using an acidified (0.1 M HCl) potassium iodide (KI) solution generates a higher surface charge density on the EDL due to iodide ion adsorption. 36 This gives improved electrical actuation in an acidic electrolyte, and it even exhibits better performance than when using NaOH electrolyte. The actuation required a lower voltage (4 V rather than 9 V) and droplets moved faster. 36 When immersed in NaOH solution, LM droplet speed initially increases with greater potential applied. 66 Once a maximum droplet speed is reached however, an increased potential does not result in greater interfacial tension difference on the LM droplet, and hence its speed is not increased. If the potential applied is large enough, it can cause oxidation of LM, making its interfacial tension gradient in the opposite direction, causing the droplet to travel towards the cathode rather than the anode. 66 A surface coating applied to the LM can also affect its actuation behaviours. Micro-or nanoparticles can coat the surface of GaLM droplets to form LM marbles. 67 For LM marbles with semiconductive nanoparticle coatings, a n-type nanomaterial coating (e.g., tungsten trioxide, WO 3 ) induces actuation in a way similar to that of bare LM droplets in NaOH solution -the marble moves towards the anode upon the application of an external potential gradient. 65 However, a p-type nanomaterial coating (e.g., cupric oxide, CuO) induces more complex actuation behaviours -the marble elongates and actuates towards the cathode while the nanoparticles migrate to form a dense cluster tail. 
68 The tail eventually falls off and then the LM droplet abruptly changes the direction of actuation towards the anode. Although the voltages used for LM CEW actuation are relatively low (Bfew volts usually), the potential difference is still large enough to for electrolysis to occur (41.2 V required for H 2 generation). However, bubble generation has been shown to have only a very slight impact on the actuation of the LM. 65 Materials Advances Review Open This was achieved by comparing the actuation of stainless steel beads and LM in electrolyte solution with a potential difference across them. The bead actuation was caused only by bubble generation, whereas LM actuation was caused by both bubble generation and the surface tension gradient. The beads moved very slowly in HCl solution (o0.2 mm s À1 ), and didn't move at all in NaOH solution. In contrast, LM actuation was much faster -up to 67 mm s À1 in NaOH solution and 33 mm s À1 in HCl solution. This shows that the surface tension gradient has a much greater effect on LM actuation than bubble generation. The high conductivity and fluidity of LM means it is particularly useful in making reconfigurable antennas. 76 The LM can be moved to different positions to change the properties of the antenna. Accurate positioning of the LM is important for radio frequency applications; however, this is difficult to achieve when actuating LM using CEW due to the inertia of the LM and low friction between LM and the channel. 69 Also, the motion of the LM can create a pressure differential between the electrolyte at either side of it. After actuation, the pressure differential could cause the LM to move from its desired position. 69 One method to improve accuracy of positioning of the LM is to shape the channel to create minimum energy states for the LM droplet. 69 An example of this is a series of rounded indentations (Fig. 3C). The high surface tension of LM means that it will try to minimise its surface area. Therefore, after actuation, it will stay within a set position. This results in a discrete number of positions of the droplet and corresponding radio frequency properties. A radio frequency shunt switch has also been made which makes use of CEW of LM. 77 For the switch, a signal line is placed across a channel, and LM is actuated into a position over the signal line to complete the connection to ground. If no LM is above the signal line, then the connection to ground is removed. The use of electrolyte in radio frequency devices can introduce unwanted losses, however. In this case, when electrolyte is above the signal line, it absorbs the radio frequency signal. In order to use CEW while reducing the losses due to electrolyte, the switch design used a bubble trapped in the channel, and capillary troughs to carry the electrolyte to either side of the main channel. The capillary troughs gave a continuous path in the electrolyte for CEW to function. The troughs were smaller than the channel containing the LM, preventing the LM from moving into the troughs as this would increase its surface area. A bubble was positioned between two LM droplets, it stayed in position due to capillary action of the electrolyte preferentially bringing the electrolyte into the trough rather than the bubble. This design reduced the losses caused by the electrolyte. CEW of LM can also be used for tuneable capacitors. 
70 The tuneable parallel-plate capacitor comprises a copper bottom layer with dielectric above it, and a top conductor made of two stainless steel plates, with a channel placed between the plates (Fig. 3D). The channel is filled with electrolyte and a LM slug. Actuating the LM slug by CEW to bridge the gap between the two stainless steel plates causes the effective area of the plates to be increased, resulting in a higher capacitance. The capacitance varied from 9.76 pF with the LM slug not contacting the plates to 10.34 pF with the slug fully between the plates. LM droplets can be used to propel vehicles by using the droplets as soft 'wheels' in electrolyte (Fig. 3E). 71,78 By building a platform around the droplets, cargo can be carried, and the droplets are steered by CEW to a specified location. Alternatively, the LM droplet itself can be used as the vehicle. 79 For example, as LM preferentially wets copper, LM was used to encapsulate a hollow copper-coated sphere. The hollow sphere contained three cabins -the driving, counterweight, and cargo cabins. The driving cabin contained magnetic particles to aid in moving the droplet in addition to using CEW. The counterweight was used to keep the correct position of the sphere within the droplet. The cargo cabin was filled with the payload, and was sealed with wax, which could be melted with a laser. The droplet was successfully actuated using CEW and magnets before unloading its payload. LM soft robots are interesting but have a major drawbackthey are only able to move in an electrolyte, which limits their real-world applications. It is possible to use LM to actuate robots outside of an electrolyte, however. This has been achieved by encasing a channel with a LM droplet, electrolyte, electrodes and a battery within an untethered wheeled robot. 72 The LM position changes depending on the potential applied, which in turn causes the centre of gravity of the wheel to alter, making the wheeled robot move (Fig. 3F). A similar, alternative approach is to create a LM universal mechanical module, which is inspired by a water wheel (Fig. 3G). 73 CEW of LM drives the spinning of the wheel, which is then used to generate rotational motion outside of the solution. Aluminium flakes were also added to the LM, which created a greater negative charge on the LM due to the galvanic reaction between aluminium, EGaIn and NaOH. The greater charge is then utilised to generate a stronger Marangoni flow with the applied potential. This in turn creates a larger force from the module. Another device that uses LM to do mechanical work is a LM motor. 80 The motor contains a rotor with multiple LM actuating units. 80 Each of the actuating units contained a LM droplet immersed in NaOH electrolyte, controlled with a pair of electrodes. The actuating units work together to generate a continuous output torque that is greater than that made by a single LM droplet. LM was also used as the electric brush between the rotor and stator, making use of its high conductivity and low viscosity. This reduced friction and prevents wear and sparks. The motor was used to successfully drive an untethered vehicle and boat. When a LM droplet is trapped within an electrolyte filled chamber with an applied electric potential across it, it functions as a pump. 74 The surface tension gradient causes Marangoni flow around the droplet, but as the droplet it trapped, it cannot move out of the chamber. Therefore, the electrolyte is pumped by the droplet (Fig. 3H). 
The applied potential may cause an oxide layer to form on the more anodic pole of the droplet after a few seconds. This would then stop the pumping due to the reduction in the surface tension on that side (as that was the side with the highest surface tension previously). Oxidation can be prevented by using an AC potential difference with a DC offset. 74 The optimum frequency (B200 Hz) ensures surface charges can be accumulated and released, preventing oxidation and allowing pumping to continue. Too low of a frequency results in gradual oxidation. Too high of a frequency means that the ions in the EDL do not have time to redistribute, reducing the pressure difference across the droplet and lowering the pumping rate. Under optimum conditions, the LM pump exhibited a high flow rate with low power consumption. It was shown to work with a range of electrolytes including solutions of NaOH, NaCl, and phosphate buffered saline. It could also pump a liquid made by mixing glycerol and deionised water, which had a viscosity of 0.209 N s m À2 (B230 times water viscosity). LM pumps have been used as cooling systems, 81 and have also been shown to be able to pump an ionic liquid. 82 Using ionic liquid offers an advantage over using NaOH as electrolyte due to an increased interfacial ion adsorption, which results in a higher pumping rate. In addition to pumping of liquid, mixing is also important for flows at low Reynolds numbers. Mixing can be achieved by applying an AC electric potential in electrolyte to a LM droplet wetting on a copper pad, causing the Marangoni flow generated at the surface of the LM to oscillate back and forth (Fig. 3I). 75 The oscillating potential causes the interfacial tension on different parts of the surface to vary rapidly with time, and LM wetted onto the copper pad prevents the bottom layer of the droplet moving. Mixing can also be achieved without the use of a copper pad. 83 By using a DC biased AC potential with a greater potential variance, this induces chaotic advection between the neighbouring laminar flow. Electrocapillarity The flow of LM in capillaries can be achieved by using electrochemical and electrowetting techniques-both of which will be elaborated on here. Electrochemically controlled capillarity (or 'recapillarity') involves removing the oxide layer of GaLM, causing LM to retreat from a channel (Fig. 4A). 84 This prevents the oxide layer from sticking to the walls of the channel (which can occur if the LM is pumped out). A non-caustic electrolyte such as sodium fluoride (NaF) is used for electrochemically controlled capillarity, as it does not remove the oxide without a reducing potential. Applying a reducing potential to the LM causes a current to flow and the LM to move out of the channel due to removal of the oxide and its increase in interfacial tension. Removing the potential stops the current and the LM flow, because oxide that formed on the sidewalls of the channel needs to be removed for the LM to flow out. Speeds of up to 0.3 ms -1 have been achieved using a high concentration of electrolyte (1 M NaF). Localised reduction of planar GaLM is also able to 'write' by removing LM near an electrode ( Fig. 4B and C). 85 Electrowetting based control of LM in capillaries is similar in principle to CEW. By applying a positive potential to the electrolyte relative to the LM, a surface tension gradient is generated on the LM. 
59 For the case of LM in a reservoir connected to a channel, the point on the LM with the lowest interfacial tension is at the entrance to the channel. The interfacial tension gradient results in Marangoni flow along the channel, which acts as a conveyor belt to pull the LM out of the reservoir. If this force is greater than the capillary pressure, the LM is pulled into the channel.59 Small capillary side channels can be used in addition to the main channel to ensure the LM does not block the flow of electrolyte. Alternatively, applying a negative potential to the electrolyte relative to the LM causes oxidation of the LM.59 This also causes the LM to move along the channel from the reservoir, due to interfacial tension reduction and Marangoni flow. The potential used dictates how far the LM will travel along the channel: as the LM gets closer to the negative electrode and the resistance decreases, the current increases, which leads to more rapid oxide growth, and this can eventually stop the LM actuation. The oxide growth has been used to make LM retain its shape in a channel after actuating it to a desired position (Fig. 4D).59 Reducing the oxide will then quickly bring the LM back to the reservoir. Electrocapillarity is also able to steer LM as it is pumped along branching microchannels.86 Applying a positive potential to the electrolyte, relative to the LM, in one of two branching channels causes the LM to flow preferentially into the channel with the applied potential, as that requires less energy (Fig. 4E). Alternatively, applying a high (~5 V) oxidative potential to the LM in electrolyte causes a thick oxide layer to form as it nears the negative electrode. This blocks the channel, making the LM travel along the other channel.86 LM has been used in reconfigurable antennas that use electrocapillary actuation.87,88 One antenna design varied its polarization by moving LM into 5 discrete states by filling different channels connected to a central reservoir. There were also notches at the end of the channels to create a minimum-energy state for the LM, keeping it in position without having to use any power.87 Another design combined electrocapillarity and CEW of LM in a reconfigurable LM pixel array.88
Electrowetting on dielectric (EWOD)
Electrowetting on dielectric is a widely used technique to modify the surface wetting properties of conductive liquid droplets.89 For a droplet placed on a dielectric surface with an electrode beneath it, applying an electric potential causes a redistribution of charges towards the droplet-surface interface. The charge accumulation on the surface of the droplet reduces its surface tension and hence its wetting angle. The surface tension varies according to the Young-Lippmann equation, which here is written to show how the contact angle changes with the applied voltage V:
cos θ = cos θ_Y + ε ε₀ V² / (2 d γ_LV)
where θ is the contact angle, θ_Y is the Young's contact angle under zero applied potential, ε is the relative permittivity of the dielectric used, ε₀ is the permittivity of free space, d is the dielectric layer thickness, and γ_LV is the interfacial tension between the liquid and vapor phases.89 Using LM for EWOD has some challenges associated with it. For example, the oxide layer adheres well to most surfaces and can impede the actuation of the LM. One solution is to prevent the oxide layer forming by reducing the oxygen concentration below 1 ppm.28 However, this is not feasible for most applications.
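The EWOD relation above can be evaluated numerically to see why, for a liquid with an interfacial tension of several hundred mN m⁻¹ on a typical dielectric, voltages approaching a kilovolt are needed before the contact angle changes appreciably. The parameter values in the sketch below (initial angle, permittivity, thickness, tension) are assumptions chosen only for illustration.

import math

EPS0 = 8.854e-12  # F/m, vacuum permittivity

def ewod_angle(theta_young_deg, eps_r, thickness_m, gamma_lv, voltage):
    """Contact angle under EWOD: cos(theta) = cos(theta_Y) + eps_r*eps0*V^2 / (2*d*gamma_LV)."""
    cos_t = math.cos(math.radians(theta_young_deg)) + eps_r * EPS0 * voltage**2 / (2 * thickness_m * gamma_lv)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))  # clamp to a physical range

# Assumed values: 150 deg initial angle, eps_r = 3 dielectric, 100 um thick, gamma_LV = 0.5 N/m
for v in (500, 1000, 2000):
    print(v, "V ->", round(ewod_angle(150, 3.0, 100e-6, 0.5, v), 1), "deg")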
Alternatively, infiltrating a silicone oil with HCl results in an acidic oil that is electrically insulating and removes the oxide layer. 90 Also, the very high surface tension of LM means that a high voltage (41 kV) is required to be able to change the surface tension appreciably. This high voltage, coupled with the thin dielectric layer means it nears or is greater than the dielectric breakdown limit for some materials. To reduce the voltage required, the shape of the channel can be changed so that the LM droplet is actuated with a lower electrostatic pressure. This technique was applied to actuate LM in acidic silicone oil using EWOD, with grooved channels in order to reduce the difference in electrostatic pressure required to merge the droplets (Fig. 5A). 90 The shape of the grooves also meant the droplets separated when the voltage was removed (Fig. 5B). The LM EWOD device was used as an electromagnetic polarizer with a low switching time of B12 ms. Electrostatic Electrostatic forces have also been used to successfully actuate non-spherical LM particles electrophoretically. 91 The particles were created using shear forces to pinch off LM flow out of a capillary tube. The silicone oil used as the fluid was saturated with oxygen to ensure rapid oxidation of the LM. The oxide layer stabilised the non-spherical shape formed by the shearing forces. Various morphologies were created, such as an ellipsoid, single tail, double tail and rod, and had a size of B250 mm. The particles were actuated using a high potential difference (Bfew kV) with needle electrodes. Bringing a charged electrode towards a particle and contacting it caused it to be charged. It then was repelled due to electrostatic forces (Fig. 6A). Placing the positive and the negative electrode opposite each other made the particle move between the electrodes. At first the LM particle is positively charged and moves towards the negative electrode, then it becomes negatively charged and moves the other way (Fig. 6B). The period of the oscillation reduces with increasing potential difference between the electrodes, from a 148 ms period at 2.4 kV potential difference, to 55 ms at 3.6 kV. The thinner tails of the particles line up with the applied electric field due to uneven charge distribution. Multiple particles line up in series between the two electrodes (Fig. 6C), and can form a short-lived electrical connection, which is destroyed as a result of the high voltage causing rapid electrolysis. Magnetic The high conductivity of LMs means that a changing magnetic field is able to induce a large enough eddy current for Lorentz forces to move a LM droplet in NaOH solution. 92 The alkaline solution is required to remove the oxide layer to prevent the LM droplet sticking to the container. Interestingly, the LM droplet moves in the opposite direction to solid gallium and copper spheres under the same conditions (Fig. 7A). The Lorentz forces experienced by solid metal spheres result in a horizontal force (which is in the same direction as the moving magnet), and a torque that makes it rotate towards the opposite direction. The slip layer on LM acts as a lubricant, preventing it rolling. Therefore, LM moves in the opposite direction to solid spheres, which roll due to the applied torque. Rotating magnetic fields have been shown to induce surface patterns in large LM droplets (Fig. 7B). 93 The setup used comprises a LM droplet in NaOH electrolyte, with graphite electrodes in the centre and around the outside. 
With a rotating magnetic field and a low potential applied (2.5 V), the LM rotates around the inner electrode due to Lorentz forces. With an increased potential (4 V), the LM stops rotating and displays the folding and rippling patterns. The combination of the applied potential and the rotating magnetic field causes variations in interfacial tension that generate the patterns. A ferromagnetic LM alloy has been made that can be actuated electrically by CEW and using magnetic fields. 94 LM was mixed with copper-iron nanoparticles in HCl solution to create the ferromagnetic LM. The HCl solution removed the oxide on the LM and on the nanoparticles. A galvanic cell is also formed between the Ga and the copper-iron nanoparticles in HCl solution, oxidising the Ga and preventing the nanoparticles from dissolving in the HCl solution. Mixing for 30 minutes resulted in the LM gaining a thick solid shell composed of various oxides and alloys. CEW could be used to separate the more fluidic LM core from the solid shell. The resultant fluidic, ferromagnetic LM had a greater weight content of copper-iron when a larger amount of nanoparticles was added initially. LM alloys with a higher weight content of copper-iron had increased viscosity, which resulted in a slower actuation speed during CEW and a reduced pumping flow rate, but had greater magnetic sensitivity.

Conclusion and outlook

In this review, the electrical methods to actuate LMs have been discussed, along with the many different applications for this, such as reconfigurable antennas, pumps, switches, motors, and artificial muscles (Table 1). It is certain that the unique and beneficial properties of LM mean that it offers great advantages to a range of technologies. However, there are still some challenges that must be overcome in order for LM-enabled electromechanical actuator-based devices to become widely accepted and even commercially viable. For example, electrolyte is needed for most of the LM actuation types. This liquid-liquid system has to be carefully sealed, and it is difficult to get useful work out of. Electrolyte is especially unwanted in radio frequency devices, where it introduces additional losses. Moreover, electrochemical oxidation/reduction and CEW effects for GaLMs work best in a strongly basic solution or acidified KI solution; this compromises the performance and limits the applications of LM actuators in neutral, other acidic, and non-ionic liquids. Instead of using electrolyte and electrically based actuation, pressure can be used in conjunction with a non-wetting coating. However, this introduces additional components and does not exploit one of the benefits of LM (low-voltage electrical actuation). An acidic silicone-based oil has been shown to remove the oxide layer for EWOD; however, this is a corrosive liquid and is electrically insulating, so it cannot be used for CEW or oxidation/reduction. Electrolysis is another issue. Bubbles of hydrogen are generated at the cathode during electrochemical reactions in aqueous solutions when the potential difference used is >1.2 V. The bubble generation can impact the performance of the LM actuator itself (for example by blocking the actuation of an LM artificial muscle). Electrolysis also reduces the efficiency of the actuation. Using an ionic liquid rather than a NaOH solution is one potential remedy, as LM has been shown to be able to pump an ionic liquid.
Finding an ionic liquid that is safe to use, does not generate gas bubbles at the potentials needed, and can be used for oxidation/reduction and CEW would be of great benefit. Another challenge is how best to induce larger forces with LMs. The actuation methods mentioned in this review mainly use interfacial tension modulation to induce motion. The artificial muscles created were only able to lift ~1 g, whereas for most real-world applications the forces required will be >100 times greater. Decreasing the size of LM droplets would increase the surface-area-to-volume ratio, which scales with the inverse of the droplet radius. This would result in actuators with a greater power density. However, this could also be more complex to make and to get work out of the system.

Conflicts of interest

There are no conflicts to declare.
Gold- and Silver Nanoparticles Affect the Growth Characteristics of Human Embryonic Neural Precursor Cells Rapid development of nanotechnologies and their applications in clinical research have raised concerns about the adverse effects of nanoparticles (NPs) on human health and environment. NPs can be directly taken up by organs exposed, but also translocated to secondary organs, such as the central nervous system (CNS) after systemic- or subcutaneous administration, or via the olfactory system. The CNS is particularly vulnerable during development and recent reports describe transport of NPs across the placenta and even into brain tissue using in vitro and in vivo experimental systems. Here, we investigated whether well-characterized commercial 20 and 80 nm Au- and AgNPs have an effect on human embryonic neural precursor cell (HNPC) growth. After two weeks of NP exposure, uptake of NPs, morphological features and the amount of viable and dead cells, proliferative cells (Ki67 immunostaining) and apoptotic cells (TUNEL assay), respectively, were studied. We demonstrate uptake of both 20 and 80 nm Au- and AgNPs respectively, by HNPCs during proliferation. A significant effect on the sphere size- and morphology was found for all cultures exposed to Au- and AgNPs. AgNPs of both sizes caused a significant increase in numbers of proliferating and apoptotic HNPCs. In contrast, only the highest dose of 20 nm AuNPs significantly affected proliferation, whereas no effect was seen on apoptotic cell death. Our data demonstrates that both Au- and AgNPs interfere with the growth profile of HNPCs, indicating the need of further detailed studies on the adverse effects of NPs on the developing CNS. Introduction Nanotechnology and nanobioscience are major research areas that are rapidly expanding. Recent advances in these areas have stimulated new applications within biomedicine where nanomaterial can be used to achieve a more effective and safe drug delivery approach. A nanomaterial is by definition an object with at least one dimension in the range of 1-100 nm, which includes nanogels, nanofibers, nanotubes and nanoparticles (NPs, e.g. rods, cubes, and spheres) [1,2]. NPs can be manufactured from a wide range of materials e.g. polymers, metals, carbon, silica, and materials of biological origin such as lipids or lactic acid. Engineered NPs are attractive for medical purpose due to their translocational properties in tissue and the fact that their surface to volume ratio is larger than for microsized particles and hence the ability to adsorb and carry other compounds [3]. NPs can also serve as probes appropriate for different medical purposes, e.g. imaging thermotherapy. Furthermore, nanomaterial is increasingly used in a variety of commercial products including clothing, cosmetics, electronics, and food [1,4], and therefore the risk of unintentional exposure becomes obvious. The small size of NPs make them more reactive due to the larger surface area per volume and therefore such particles may enhance the desired effects, as mentioned above, but also new unwanted toxic effects may be introduced [5]. Especially, two metal NPs, Au-and AgNPs, have been intensively studied, AuNPs due to their good intrinsic properties such as high chemical stability, well-controlled size and surface functionalization, and AgNPs, due to their antibacterial effect, often applied in wound disinfection, in coatings of medical devices and prosthesis, and commercially in textiles, cosmetics and household goods [6]. 
Au has been widely described as highly biocompatible and Au-based NPs have been extensively investigated and also clinically used in drug and gene delivery applications [7]. However, it has become evident that the field of nanotoxicology is lagging the study of biomedical applications of different NPs. To avoid repeating historical mistakes such as asbestosis or silicosis, it is crucial to unravel the possible toxic effects of NPs before they are spread into the ecosystems and become a common health issue in our society [8]. A very recent report shows that commercially manufactured polystyrene NPs can be transported through an aquatic food chain from algae, through zooplankton to fish, and affect lipid metabolism and behavior of the top consumer [9]. One should bear in mind that the accumulation of other heavy metals, such as Hg in ecosystems, has lead to a common acceptance to decrease intake of top consumer fish types [10]. This is especially true for pregnant women where excessive intake of heavy metal-containing fish can cause detrimental damage to the developing fetus. Today, the knowledge on human exposure and possible toxicity of engineered NP-based products are very limited. Therefore, investigations on the biokinetics of different NPs in an organism is currently given much attention since there is an urgent need for information on the absorption, distribution, metabolism and excretion (ADME) of NPs and validated detection methods of engineered nanoparticles. The number of reports describing unwanted non-target effects of various NPs is increasing. In experimental studies it has been described that NPs can cause adverse effects not only to primary organs directly exposed, but also to secondary organs, such as the cardiovascular system and the central nervous system (CNS) [11]. Neuronal systems are especially vulnerable, both during development but also in adulthood, to metal intoxication that have been associated with the development of the major neurodegenerative diseases, such as Alzheimer's-and Parkinson's diseases [4,12]. The adult CNS can be reached both by AuNPs and AgNPs through different routes in the body and reports on their cytotoxic effects are increasing [13]. Nanosized Au has recently been found to have damaging effect on epithelial cells and a human carcinoma cell line [14,15,16,17]. AuNPs have also been reported to pass the blood-retinal-barrier (BRB) after intravenous injection in mice [7,18]. Moreover, AgNPs have been reported to pass the bloodbrain-barrier (BBB), i.e. manganese oxide-and AgNPs, respectively, reach the CNS after inhalation and subsequently uptake in the respiratory tract [19,20]. Moreover, after both oral intake and intravenous administration, Au-and AgNPs have been reported to reach several secondary organs, including the CNS [21,22]. Observed pathological effect in brain tissue to NP exposure has mainly been oxidative stress and inflammation [19,23,24,25,26]. If there are concerns about the toxic effect of NPs to the adult CNS, the concerns should be even greater with regard to possible risk of translocation of NPs across the placenta and effects on embryogenesis. Little is currently known about whether NPs can cross the human placental barrier or interfere with placental function, but suitable transport models have been developed which can be used to clarify the mechanisms of cellular interaction and transport across the placenta. 
A few investigations provide evidence for transport of commonly used NPs, such as silica, TiO 2 -and cadmium-NPs, which hence warrants further investigation into the potential of various NPs to possibly affect embryogenesis [27,28,29,30,31,32,33]. Even at low concentrations toxic NPs accumulation inside the cells during development can have a detrimental impact on the fetus development and thereby the viability of the post-natal fetus. For the developing CNS, cell division, cell differentiation and cell migration follow a highly coordinated scheme where DNA needs to be replicated, proteins distributed to cells in a controlled fashion, cells need to divide at the right point in time, and so forth. All these processes are extremely prone to the slightest changes occurring in the intra-or extracellular environment. Up-to-date, a sparse number of in vitro studies have been published using neural-like cell lines for testing the effect of NPs. Studies including PC-12 cells (a rat cell line with a neuronal-like phenotype) have shown changes in gene expression and interference with signal transduction pathways after Cu-and AgNPs exposure [34,35]. Two different reports using primary neural cultures from mice demonstrated interference with electrical activities after treatment with Ag-, TiO 2 -, carbon black-, or hematite-derived NPs [36,37,38]. Lately, oxidative stress responses including interference with calcium-based signaling processes were further reported after AgNP exposure, using mixed primary neural cultures [39]. Furthermore, in a multiparametric study AuNP exposure affected cellular proliferation and differentiation in an immortalized neural cell line (C17.2), primary HUVECs and PC12 cells [17]. Here, for the first time, we analyze the effect of low concentration Au-and AgNPs on the growth characteristics of a human CNS-derived neural precursor cell line. The human neural cells are cultured as neurospheres (aggregates of freefloating cells), thus serving as a good model of a three-dimensional neural system under development. The cell line is established from cells isolated from seven week old fore-brain and can be expanded into high numbers in the presence of epidermal growth factor (EGF), human basic fibroblast growth factor (hbFGF) and human leukemia inhibitor growth factor [40,41]. We studied the eventual cytotoxic effects on these cells after two weeks exposure of 20 and 80 nm Au-and AgNPs at low concentrations (ranging from 0.00022-0.22 mg/ml). Low concentrations of the NPs were used to mimic the low fraction of NPs reported to reach secondary organs (0.1-3%) [42,43]. The following end-points were used: 1) uptake of NPs was studied using transmission electron microscopy (TEM), 2) gross-and detailed morphological studies were performed using routine Htx-Eosin staining as well as scanning electron microscopy (SEM), 3) ratio of viable vs. dead cells was calculated by counting cytochemical stained cells 4) numbers of proliferative cells (Ki67 immunostaining), and dying apoptotic cells (TUNEL assay) were quantified. For the first time, we describe uptake of both 20 and 80 nm Auand AgNPs, by HNPCs under proliferation conditions. Our data demonstrate that both Au-and AgNPs interfere with the growth profile of human neural precursor cells, indicating the importance of further detailed studies on the adverse effects of NPs on the developing CNS. 
Translocation of NPs Over the Placenta and to Secondary Organs There are several primary routes for NPs to enter the body; inhalation, via the skin or orally, as well via the eye. From these primary routes many types of NPs has been described to reach also secondary organs (passing through important barriers e.g. BBB), such as the cardiovascular system and the CNS [13,19,20,21,22]. In addition, over the last five years reports have emerged describing a detrimental effect on embryogenesis and fetus vitality after NP exposure of pregnant rodents, indication NP translocation over the placenta [31,32]. NPs were found localized in the developing CNS, suggesting a risk of interference with the complex events responsible for normal CNS development. In agreement, several studies demonstrate that NPs exposure cause oxidative stress and inflammatory response using in vitro models of neural systems [23,24,25]. Such pathological conditions are well known to be associated with the progression of the most common neurodegenerative diseases, including Alzheimer's-and Parkinson's diseases [4,12]. Investigations using models of neural system-resembling cell assays begin to gather evidence on the great impact even very low concentrations of NPs [43]. The limited numbers of reports elucidating the key mechanisms for NP-related CNS cytotoxicity often rely on the neuronal-like rat PC-12 cell line or other immortalized neural cell lines, such as C17.2 [17]. Since immortalized cell lines have mutated cellular functions, such as disturbed cell proliferation and cell death mechanisms, the use of these types of cells in toxicological studies are questionable. Several others have shown that NPs may or may not affect the cell proliferation and cell death characteristics of tumour cells, however, these reports cannot totally be related to cell growth characteristics of normal cells. Today, there are NP-based chemotherapeutic agents on the market even though the impact NPs have on normal cells is not fully understood [44]. Here, for the first time, we use a human neural precursor cell-based assay to gain knowledge on how normal cells are affected by Au-or AgNPs. The cell line can be regarded as model of a developing brain, since the cells originate from forebrain tissue, obtained from one 7-week (post-conception) human embryo and are grown as a so-called neurosphere culture. The cell line can be cultured and expanded for at least one year in vitro with maintained multipotentcy [40,45]. HNPCs grown as neurospheres have been used in developmental cytotoxicity (DCT) studies for environmental chemicals. The use of HNPC lines as a model for mimicking basic processes of brain development; such as cell proliferation, differentiation, migration and apoptosis, has proven to be a useful tool [46]. We here use two different types of NPs, Au since it is believed to be inert and suggested for different therapeutical applications, and Ag since it is known to have toxic effects and is widely used in consumer products. The diameter of the NPs as measured in transmission electron microscope (TEM) was 20 and 80 nm. The NPs were used in low concentrations since it has been reported that low levels of NPs were found in secondary organs after systemic administration using different study designs [42,43]. We also included silver nitrate (AgNO 3 ), as a positive control for cytotoxicity, since the release of Ag + ions is known to be very toxic. 
An overall goal with the present and on-going studies using the current HNPC-based assay is to establish a battery of appropriate end-points to be utilized in further investigations on cellular effects of other NPs. This cell line can also be used for testing cell effects both during proliferation and differentiation conditions. Light microscopy was used to study both gross and detailed morphology after Htx-eosin staining, and scanning electron microscopy (SEM) was used to study detailed surface morphology of the neurospheres. Cellular Uptake of Au-and AgNPs by HNPCs First we investigated, by using TEM, whether the current used HNPCs had the ability to internalize Au-and AgNPs. Since AuNPs and AgNPs have high atomic numbers it is possible to distinguish them from cellular structures using TEM [47,48]. We demonstrate that 20 and 80 nm Au-and AgNPs can be taken up by human neural cells under expansion conditions, a crucial finding for all our further investigations on NP exposure effect on HNPCs (Fig. 1). In agreement, recent studies showed that mesenchymal stem cells (MSC), neural progenitor cells, and HeLa cells can take up 20 nm coated-and uncoated AuNPs in the same size range as used here [17,49,50]. Au-and AgNPs of both sizes were only found in various cellular regions, including the cytosol, except the nucleus, judged from scanning through the entire spheres including detailed high magnification analysis of at least 50 cells/experimental group. Here, the NPs were found as compact aggregates in the cytoplasm of the cells, as has been reported by others [17,50,51,52]. This finding is in line with many reports on cellular localisation of ingested NPs in the cytoplasm [17,50]. In contrast, Hackenberg et al showed uptake of AgNPs (,50 nm) also in the nucleus of MSCs [51]. Dziendtzowska et al [21] demonstrated that AgNPs are able translocate from the blood to the main organs and the concentration of silver in tissues was significantly higher in rats intravenously injected with 20 nm AgNPs compared with 200 nm AgNPs. The silver concentration in the kidneys and brain increased during their experiment and reached the highest concentration after 28 days [21]. Heavy metals are known to be hazardous and studies both in vitro and in vivo implicate that Auand AgNPs are able to pass both the intact BBB [53,54] and the BRB [18]. Typical pathological signs such as inflammation and oxidative stress have been described in the rodent brain after manganese oxide and silver exposure [19,26]. NP Exposure Affects Sphere Size and Sphere Aggregation Properties After two weeks of exposure to NPs or AgNO 3 , images of the HNPC neurospheres were acquired for size measurements. The spheres were thereafter dissociated and total cell numbers were determined. AuNPs or AgNPs did not significantly affect the total number of living and dead cells, respectively, compared to control ( Fig. 2A). The fraction of dead NP-exposed cells, identified by Trypan Blue (TB) staining, ranged from 46-56% of the total cell number as compared to control where this fraction was 46%. An increase of up to 22% of dead cells as compared to control was shown after NP-exposure (n = 223). However, the rather high number of dead cells also found in the control may result from the mechanical dissociation used to dissociate the cells before counting. TB staining is a commonly used method for excluding viable cell from dead cells, since TB is unable to pass the intact cell membrane of viable cells. 
In the current study, only low concentrations of the NPs were used and our results show no effect on cell viability judged from the cell counting, while others have used higher NP concentrations and have reported that cell viability decrease with increased concentration or incubation time of AgNPs [55,56]. For example, Soenen et al show that cell viability decrease after exposure to large AuNPs, i.e. 200 and 500 nm, but not with smaller AuNPs [17]. In parallel, AuNPs have not been reported to alter cellular functions after two weeks of exposure [51], while AgNPs on the other hand have been shown to decrease the cell viability of MSC [49]. Observations from three independent experiments indicated that exposure to all NPs and all concentrations used resulted in an increased sphere aggregation and darker spheres cores, by using phase contrast microscopic analysis, compared to control. After NP-exposure, the majority of the spheres were larger; approximately 450 mm in diameter although with a larger variation as compared to 335 mm of the control spheres (Fig. 2B). Some of the spheres exposed to NPs were up to twice the size, 700 mm, of control spheres. The smaller spheres found after NP-exposure were in the size-range of 200-270 mm. Although, the increase in sphere aggregation and overall larger spheres were more profound after AgNP-exposure, while AuNP-exposure resulted in darker sphere cores, and some of the spheres were twice as large compared to control spheres. For all four NPs, the concentration seems to affect the outcome in a dose-response fashion, 800 particles/cell (p/c) had more effect than 50 p/c on sphere aggregation and sphere size variation. AgNO 3 was used as a positive control since it is known to have a severe cytotoxic effect (although the responsible mechanism for this effect is not fully understood), which was here confirmed (Fig 2A). The NPs used in this study are dissolved in water. In our recent studies the NPs were centrifuged and discarded, with only the supernatant remaining (unpublished results). The supernatant was added to microglial cells and a cytotoxic test (MTT) was performed. The supernatant alone did not induce toxicity. Therefore, we expect no toxicity associated with the vehicle. As mentioned above, NPs are known to often form aggregates and they do so when they come in contact with biological fluid. The material, size and surface may alter the biological nature of the proteins in the corona, and subsequently the biological impact [57,58]. This change of protein action in the corona could be one reason to the differences in size and the different ability of the NP-exposed neurospheres to form aggregates consisting of several spheres closely attached to each other. Together, our findings that Au-and AgNP affect HNPC Effect of Nanoparticles on Human Neural Cells PLOS ONE | www.plosone.org normal growth are very important with regard to the studies showing that NPs can indeed cross the placenta and inhalation of cadmium oxide NP during pregnancy affects the reproductive fecundity, hence alters fetal and postnatal growth of the developing offspring [33,59]. Thus, our results on changes in human neurosphere size and capacity of neurosphere-neurosphere aggregations suggest that Au-and AgNP exposure could affect the basic growth characteristics of a three-dimensional developing human CNS. 
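As a rough cross-check of what these particles-per-cell doses mean in mass terms, the sketch below converts them into approximate mass concentrations, assuming spherical particles, bulk metal densities, and the seeding density given in the Methods (1 × 10^5 cells/ml); all of these are simplifying assumptions made only for illustration. The resulting values fall in the sub-µg/ml range, in line with the very low doses emphasised throughout.

import math

DENSITY_G_PER_CM3 = {"Au": 19.3, "Ag": 10.5}  # bulk metal densities
CELLS_PER_ML = 1e5                            # seeding density given in the Methods

def particle_mass_g(metal, diameter_nm):
    # Mass of a single spherical nanoparticle, in grams.
    radius_cm = diameter_nm * 1e-7 / 2.0      # 1 nm = 1e-7 cm
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return DENSITY_G_PER_CM3[metal] * volume_cm3

def dose_ug_per_ml(metal, diameter_nm, particles_per_cell):
    # Mass concentration (ug/ml) corresponding to a particles-per-cell dose.
    particles_per_ml = particles_per_cell * CELLS_PER_ML
    return particles_per_ml * particle_mass_g(metal, diameter_nm) * 1e6  # g -> ug

for metal in ("Au", "Ag"):
    for diameter_nm in (20, 80):
        for dose in (50, 800):
            print(f"{metal} {diameter_nm} nm at {dose} p/c: "
                  f"{dose_ug_per_ml(metal, diameter_nm, dose):.5f} ug/ml")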
Au-and AgNPs Affect both Cell Proliferation and Apoptosis of HNPCs Since no significant difference were found when counting the total numbers of HNPCs after two weeks of NP-exposure, despite a clear overall morphological difference compared to control cultures was observed when studying the neurosphere sizes and composition of small and large spheres, we decided to study expression pattern of markers for cell proliferation and apoptosis, respectively. In order to investigate if the cells had a disturbed cell proliferation, we estimated the relative numbers of Ki67-expressing cells in NP-exposed and control cultures, respectively (Figs. 3A, B). Changes in cell proliferation were examined by using immuno-labelling with a Ki67 antibody, a protein strictly associated with cell proliferation that can be exclusively detected in the nucleus during interphase [60]. For AuNP-exposed HNPC a significant increase in cell proliferation was only found after exposure to 20 nm AuNP using the highest concentration (Fig. 3B). However, a significant increase in cell proliferation was found after AgNP-exposure using 20 and 80 nm (both concentrations) as compared to control (Fig. 3B). As expected, the cell proliferation markedly decreased after AgNO 3 exposure (Figs. 3A, B). Only few cells and no spheres were remaining after exposure to AgNO 3 5.0 mg/ml, hence no data or image are presented for this experimental group in any of the analysis presented hereafter. These findings can be directly correlated to the darker sphere cores and larger spheres present after NP exposure when imaging with light microscopy (Fig. 3B). The altered cell proliferation shown after NP exposure could be due to various stress factors. It has been shown that AuNPs can induce oxidative stress in neural progenitor cells [17] and AgNPs induce oxidative stress in skin fibroblasts [61]. Direct cytotoxic effects of Au-or AgNPs on HNPCs, was investigated using a fluorescein-conjugated dUTP TUNEL-staining to detect cells undergoing apoptosis. TUNEL-staining is commonly used to detect DNA fragmentation, a hallmark for cells undergoing apoptotic cell death [62]. The number of TUNEL+ cells increased after exposure to either Au-or AgNPs, compared to control (Fig. 4A). However, the increase of TUNEL+ cells was only significant after exposure to AgNP (Fig. 4B). The increase of TUNEL+ cells seems to be correlated to increased concentration of the NPs. In approximately 30-50% of the large spheres dense aggregates of TUNEL+ cells were observed. The aggregates varied in size, from small aggregates containing only few cells to large aggregates containing up to hundreds of cells (Fig. 4B). These aggregates were found in both Au-or AgNP-exposed neurospheres. A DNA leakage from the nucleus into the cytoplasm during apoptosis, and further into the extracellular space within the spheres could explain the aggregate-like TUNEL-staining pattern [63,64]. AgNO 3 -toxicity was confirmed by the significant and very large increase of TUNEL+ cells (Figs. 4A, B). Small spheres exposed to NPs resembled control spheres. This may suggest that smaller aggregates of HNPCs are less vulnerable for Au-and AgNP exposure, but this needs to be further explored. Our results are in line with several recent reports using different assays describing interference with DNA regulation after NP exposure. In mouse embryonic stem cells, AgNPs induce apoptosis shown by increased Annexin V expression, up-regulation of p53, and DNA damage [56]. 
By using a lactate dehydrogenase (LDH) assay AgNPs was shown to induce cell death in a size and dose dependent manner in a primary mixed neural cell culture, while AuNPs had no affect [39]. The activation of the transcription factor NFkB, a critical immediate early response gene to injury, can another way to study apoptosis, and large AuNPs have been found to induce activation of the NFkB pathway due to the oxidative stress, while small AuNPs have no effect [17]. In summary, it is clear that AgNPs have the ability to increase apoptosis in different stem cell-based assays, again indicating the risk for a developing organism to be negatively affected by NP exposure. Gross-and Detail Morphological Studies of Human Spheres The biology of NSCs, cultured as neurospheres in vitro, is still not fully understood. The NSCs and neurospheres are both functional and morphological heterogeneous, they differ in cell size, viability, and cytoplasmic content. Viable cells are found mostly in the periphery of the sphere, while apoptosis mainly occurs in the inner part of the sphere, indicating hypoxia. When studying the neurospheres in high magnification, cilia-like cytoplasmic processes extended from the sphere can be visualized, as found in vivo in ependymal stem cells lining the ventricular system [65]. In the present study, cryosectioned Htx-eosin-stained HNPC spheres were carefully analysed regarding detailed morphology of the neurospheres. Actin filaments were stained using an actin specific Phalloidin probe in order to investigate alterations in the cytoskeleton network. SEM analysis was used to study the morphology of the surface of the spheres. Overall, after two weeks exposure, the Au-or AgNPs affected the single cell morphology seen as changed orientation of the nuclei and cytoplasm, with the nuclei being positioned more central while the majority of the cytoplasmatic volume was turned towards the sphere edge (Fig. 5A). This phenomenon was more profound with an increased concentration of all four NPs used. AgNPs seemed to affect the sphere morphology more than AuNPs (Fig. 5A). In general, NP exposure increased the number of pyknotic cells in the spheres compared to control (Fig. 5A). However, no attempt was made to quantify numbers of nuclei or pyknotic cells. An overall decrease in density and stability of the spheres was detected after two weeks of NP exposure. Both Auand AgNPs induced channel formation in the spheres, although most prominent after AgNP-exposure (Fig. 5A). AgNO 3 was confirmed to be toxic as shown by the high number of pyknotic cells found in the spheres, but also as more loosely arranged spheres with fuzzy edges and extended channel formation (Fig. 5A). Exposure to Au-or AgNPs for two weeks altered the cytoskeleton architecture and resulted in overall up-regulation of actin, shown by a more intense staining, especially at the sphere edges, when compared to control (Fig. 5B). The increase in actinexpression was more profound for spheres exposed to Ag 80:800. Only few cells and no spheres were remaining after exposure to AgNO 3 5.0 mg/ml, hence no image is presented. SEM analysis of HNPC spheres revealed a distinct change in their morphological feature after two weeks of Au-or AgNP exposure compared to control. At two weeks of NP exposure, the human spheres displayed loose and fuzzy spheres, while control spheres are smooth and compact (Fig. 5C). As mentioned above, NP exposure increased sphere aggregate formation. 
Only few cells and spheres were remaining after exposure to AgNO 3 1.0 or 5.0 mg/ml, hence no image is presented. The morphological changes were more profound with increased concentration of the NPs. However, a material effect was also evident since AgNPs affected the cellular morphology more than AuNPs. Taken together, the detailed morphological analysis of both the gross sphere profile as well as on a cellular level, clearly demonstrate that both Au-and AgNP trigger a change in the cytoarchitecture, i.e. cell-to-cell contact, of the whole sphere as well as on the single cell level judged by the changed nucleus-and cytoplasm orientation. The morphological findings from the Htx-eosin staining, actin staining, and SEM-analysis, in addition to the increased aggregation of the spheres after NP exposure, indicate both the intra cellular and surface properties may have been altered in the HNPCs. The actin fibrils are less stretched and has a more ''dot''like feature after two weeks of NP exposure, compared to control. On the contrary to our findings, others have demonstrated a decrease in actin-expression after 6 days of incubation with Au/ citrate-NPs shown by immuno-labelling [66]. Soenen et al also showed a clear loss of actin network after AuNP exposure, but in C17.2 neural progenitor cell line the actin-loss was less prominent compared to epithelial cells [17]. Further studies may elucidate the changed expression of cell surface markers, which is highly achievable with regard to the access to well-described surface molecules. For example, integrins are important factors for maintaining functional such as cell proliferation, migration, differentiation and survival [67]. Integrins are heterodimers that consist of two transmembrane chains, one a and one b, where the b1-subunit is e.g. involved in the regulation in epidermal stem cell maintenance [68,69,70], but also neural precursor cell populations contain cells that express b1 integrins [71]. Neurons that express high levels of b1-integrin are more likely to form neurospheres than those expressing intermediate levels, although no difference in cell viability was found. Cells that express high levels of b1integrin are found at the neurospheres edges [72]. If the neurospheres express high levels of b1-integrin, single neurospheres might stick to each other and induce aggregate formation. In further studies we will explore whether the integrin expression is changed after NP exposure of the HNPCs. Markers for Cellular Differentiation Since we observed a clear change in the cellular morphologies in the experimental groups, we were interested in investigating a possible accompanied alteration in the expression pattern of typically expressed early neural markers in the HNPC cultures. Immuno-labelling using GFAP-and DCX-antibodies, respectively, revealed the composition of neural cells in the neurospheres. Glial/immature neural cells express GFAP [73,74] and neuronal progenitor cells express DCX [41,75,76,77]. Exposure to Au-or AgNPs during two weeks resulted in overall up-regulation of GFAP, especially at the sphere edges, and larger cell bodies were observed compared to control (Fig. 6). A clear difference between size and concentration of the NPs was not revealed. The NP exposure also resulted in longer GFAP-positive processes. Some of the processes were thicker than the processes found in control spheres. Small spheres exposed to NPs resembled control spheres. 
When spheres are in a proliferative state, GFAP is found in the sphere core resembling the superficial regions of the cortex [46,72], and upon differentiation the GFAP is homologous distributed throughout the sphere indicating a maturing cell culture. Mercury exposure to neural progenitor cells decreased the migration and increased the glial/neuron ratio [46]. In agreement, our results on increased GFAP+ staining and changed morphologies of the GFAP+ cell profiles, together with the reports by others may indicate that the NP-exposed cultures have been stimulated/stressed to begin differentiation into a more mature neural progenitor type, and most probably of glial fate. In contrast, staining with DCX showed low levels of DCXexpression and did not reveal any obvious difference between control and NP-exposed spheres after two weeks (data not shown). Thus, the increased overall GFAP-expression, changed GFAP+ processes and no difference in DCX expression after NP exposure may indicate that Au-and AgNP induce early glial differentiation. In agreement, stressful conditions such as hypoxia induce immature neural cells to differentiate and up-regulation of GFAP expression [78], and oxidative stress caused by ozone exposure resulted in increased GFAP expression and decreased levels of DCX [79]. Minor changes, that may be detrimental in the end, in the spatial-temporal development of a neural system, can be investigated more in detail using other more refined methods that can present subtle alterations in markers for phenotypic development, such as western blotting, genomic-and proteomic techniques. Encouragingly, the present used HNPC line has been extensively characterized with regard to its in vitro differentiation properties and is known to consistently generate a multipotential progeny. Studies are on-going using same parameters of NP exposure also on these HNPCs in differentiation conditions. Concluding Remarks With the fast progress in nanotechnology, nanomedicine is also developing rapidly, but the required investigations of an eventual toxicological side-effect are yet a few steps behind. Here, we for the first time efficiently used a HNPC-based assay for investigating whether Au-or AgNPs can be taken up into the intracellular space, and whether they have a cytotoxic effect on the growth characteristics. Human neural cells can, indeed, take up both 20 and 80 nm Au-or AgNPs, respectively, exposed to the cells in very low concentrations. We demonstrate that especially AgNPs exposure cause a significant stress response in the growing HNPCs, by simultaneously affecting cell proliferation and apoptotic cell death. The composition of neural precursors in the cell line was also affected, judged by the increased expression of GFAP and the low levels of DCX expression. AuNPs, on the other hand, did not significantly affect cell proliferation or cell death, but influenced sphere morphology. Together, our results suggest that exposure to even very low concentrations of NPs may have a damaging effect on the developing human neural system and support further in-depth studies on the effect of NPs both on the developing and differentiating CNS. and 800 particles/cell; Ag 80:800, silver 80 nm and 800 particles/cell; AgNO 3 1.0, silver nitrate 1.0 mg/ml. Scale bars equal 214 mm. B. Only AgNPs significantly induced cell proliferation in HNPCs. Evaluation of the data presented under A, showing the number of Ki67-+ cells/mm 2 of the HNPC neurospheres. 
Au 20:50, gold 20 nm and 50 particles/cell; Au 20:800, gold 20 nm and 800 particles/cell; Au 80:50, gold 80 nm and 50 particles/cell; Au 80:800, gold 80 nm and 800 particles/cell; Ag 20:50, silver 20 nm and 50 particles/cell; Ag 20:800, silver 20 nm and 800 particles/cell; Ag 80:50, silver 80 nm and 50 particles/cell; Ag 80:800, silver 80 nm and 800 particles/cell; AgNO 3 0.5, silver nitrate 0.5 mg/ml; AgNO 3 1.0, silver nitrate 1.0 mg/ml. *p < 0.05 compared to control, **p < 0.01 compared to control. doi:10.1371/journal.pone.0058211.g003

Generation and Expansion of the Human Neural Precursor Cell Line

The human neural precursor cell line (HNPC) used for this study was originally established by L. Wahlberg, Å. Seiger, and colleagues at the Karolinska University Hospital, Stockholm, Sweden (original work with the cell line is described in [40]) and the cell line was kindly provided to us via Prof. A. Björklund (Dept. Exp. Med. Sci., Lund University, Sweden). In brief, the cell line was established from forebrain tissue, isolated and obtained from one 7-week (post-conception) human embryo. Cells were cultured as free-floating neurospheres in defined DMEM-F12 medium (Invitrogen, Paisley, UK) supplemented with 2.0 mM L-glutamine (Sigma, St. Louis, MO, USA), 0.6% glucose (Sigma), N2-supplement (Invitrogen), and 2.0 mg/ml heparin (Sigma) at 37 °C in a humidified atmosphere of 5% CO2. Every third day, human basic fibroblast growth factor (hbFGF, 20 ng/ml; Invitrogen), human epidermal growth factor (hEGF, 20 ng/ml; PROSPEC, Rehovot, Israel), and human leukemia inhibitory factor (hLIF, 10 ng/ml; PROSPEC) were added to the culture. Using mechanical dissociation, the neurospheres were sub-cultured every 10-14 days and reseeded as single cells at a density of 1 × 10^5 cells/ml. Viable cells (opalescent cells excluding Trypan Blue; Sigma) were counted in a haemocytometer. To prevent cell attachment, the flasks were gently knocked every other day. Cells passaged 10-15 times were used.

Nanoparticles and Silver Nitrate

Prefabricated colloidal gold and silver nanoparticles (AuNPs and AgNPs) of 20 and 80 nm in diameter, respectively, dissolved in water were purchased from BBInternational (Cardiff, UK). Further dilutions were made in complete growth medium. Silver nitrate (AgNO 3 ) (VWR International, Radnor, PA, USA) was dissolved in deionized water to give a stock solution of 1 mg/ml. The stock solution was sterile-filtered and stored at 4 °C. Further dilutions were made in complete growth medium.

Experimental Design

Human cells were seeded as described above, and after 48 hours AuNPs, AgNPs, or AgNO 3 were added to the medium. After two weeks of incubation, neurospheres were either fixed or directly frozen at -20 °C for morphological and histological evaluations. Prior to freezing, neurospheres were embedded for cryo-sectioning and sections of 16 μm were cut. After serial sectioning on a cryostat, selected sections were stained with Hematoxylin-eosin (Htx-eosin). All other sections were fixed in 4% paraformaldehyde for 10 minutes at room temperature before immunocytochemistry and TUNEL staining.

TUNEL Assay

Cryo-sections were stained with a fluorescein-conjugated terminal deoxynucleotidyl transferase-mediated dUTP nick end-labelling (TUNEL) assay according to the manufacturer's instructions (Roche, Mannheim, Germany). For counterstaining of nuclei, the sections were cover-slipped using DAPI-containing mounting medium.
For TEM, the neurospheres were fixed in 2.5% glutaraldehyde in 0.15 M Na-cacodylate buffer (pH 7.2) for 4 hours at 4 °C. After rinsing in 0.1 M Na-cacodylate buffer and dehydration, the neurospheres were post-fixed in 1% osmium tetroxide in 0.1 M Na-cacodylate buffer at 4 °C for 1 hour. After dehydration, the samples were embedded in Epon. Ultrathin sections were cut using an ultramicrotome (Leica Ultracut, Leica Microsystems GmbH, Germany). The sections were stained with 2% uranyl acetate and Pb-citrate [80]. Sections were imaged in a JEOL JEM 1230 microscope (JEOL, Japan).

particles/cell; Ag 80:800, silver 80 nm and 800 particles/cell; AgNO 3 0.5, silver nitrate 0.5 mg/ml; AgNO 3 1.0, silver nitrate 1.0 mg/ml. *p < 0.05 compared to control, **p < 0.01 compared to control, ***p < 0.001 compared to control. doi:10.1371/journal.pone.0058211.g004

Figure 5. Gross morphology of HNPCs after Au- or AgNP exposure. A. The neurospheres were stained with Htx-eosin after NP-exposure. The NPs affected the single-cell morphology, seen as a changed orientation of the nuclei and cytoplasm. Spheres that were exposed to NPs had an increased number of nuclei, which were more centred, while the cytoplasm was turned towards the sphere edge. The NP-exposure made the spheres looser, increased the number of pyknotic cells, and induced channel formation. Au 20:800, gold 20 nm and 800 particles/cell; Au 80:800, gold 80 nm and 800 particles/cell; Ag 20:800, silver 20 nm and 800 particles/cell; Ag 80:800, silver 80 nm and 800 particles/cell; AgNO 3 1.0, silver nitrate 1.0 mg/ml. Scale bar equals 214 μm. B. The neurospheres were stained with Phalloidin after NP-exposure. NP exposure resulted in an overall altered cytoskeleton architecture, indicated by the increase in the actin network. Au 20:800, gold 20 nm and 800 particles/cell; Au 80:800, gold 80 nm and 800 particles/cell; Ag 20:800, silver 20 nm and 800 particles/cell; Ag 80:800, silver 80 nm and 800 particles/cell; AgNO 3 1.0, silver nitrate 1.0 mg/ml. Scale bar equals 50 μm. C. Scanning electron microscopic evaluation of the effects of AuNPs and AgNPs on HNPCs. NPs were added to the medium 48 h after seeding and spheres were prepared for scanning electron microscopy two weeks later. NP-exposure resulted in loose aggregates and fuzzy spheres compared to control spheres, which were smooth and compact. The morphological changes were more profound with increased concentration of the NPs. However, AgNPs affected the morphology more than AuNPs. Au 20:800, gold 20 nm and 800 particles/cell; Au 80:800, gold 80 nm and 800 particles/cell; Ag 20:800, silver 20 nm and 800 particles/cell; Ag 80:800, silver 80 nm and 800 particles/cell. Scale bars equal 50 μm. doi:10.1371/journal.pone.0058211.g005

Figure 6. Both Au- and AgNP exposure increased the GFAP-expression in HNPCs. Exposure to the NPs resulted in longer, more perforated, and partly thicker GFAP+ processes, while nanoparticle size or particles/cell did not have any effect on the GFAP-expression. Au 20:800, gold 20 nm and 800 particles/cell; Au 80:800, gold 80 nm and 800 particles/cell; Ag 20:800, silver 20 nm and 800 particles/cell; Ag 80:800, silver 80 nm and 800 particles/cell; AgNO 3 1.0, silver nitrate 1.0 mg/ml. Scale bars equal 107 μm. doi:10.1371/journal.pone.0058211.g006

Data Analysis

The growth rate after 14 days of NP exposure was estimated by counting the total numbers of viable and dead cells, respectively, after mechanical dissociation of the neurospheres.
Viable cells (excluding Trypan blue) and dead cells from three independent experiments, with n = 223 per experimental group, were counted using a haemocytometer. Gross as well as detailed morphological analysis of the neurospheres was performed using light microscopy (Nikon, Tokyo, Japan). Neurospheres from three independent experiments of each treatment were measured and compared to control to detect differences in size, with a minimum of 8 spheres counted per experimental session. Prior to both fixation and freezing, images of the neurospheres were captured for evaluation of the morphology. Counter-stained and immunostained sections of the neurospheres were examined using a light and epifluorescence microscope (Nikon Eclipse E800) equipped with appropriate filters. Images were captured with a digital acquisition system (DCP Controller). Quantifications of the numbers of cells expressing the proliferation marker Ki67 and the apoptosis marker TUNEL, respectively, were made on 6 spheres/section and experimental group from two independent experiments. The sphere area was measured and results are given as cells/mm² ± SD. Analysis of changes in GFAP and DCX expression, respectively, was made on 6 spheres/section and experimental group from two independent experiments. Quantifications were performed using ImageJ64. For TEM analysis, samples from two independent experiments were evaluated, and detailed analysis of at least 50 cells was performed. SEM analysis was made on one sample/experimental group from one culture session, and approximately 20-30 spheres/sample were evaluated in the respective treatment groups. For statistical evaluation, a two-tailed t-test was used. All data, when n ≥ 3, are presented as mean ± SD, and p values less than 0.05 were considered statistically significant.
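For illustration only, the sketch below mirrors the per-area quantification and two-tailed t-test described above; the cell counts, sphere areas, and group labels are invented placeholders, not data from this study.

import statistics
from scipy import stats

# Hypothetical per-sphere counts (cells per sphere section) and sphere areas (mm^2);
# real values would come from the Ki67 or TUNEL counts and area measurements above.
control_counts, control_areas = [112, 98, 105, 121, 93, 110], [0.045, 0.040, 0.043, 0.050, 0.039, 0.046]
treated_counts, treated_areas = [168, 175, 154, 181, 160, 172], [0.044, 0.047, 0.041, 0.049, 0.042, 0.046]

def density_per_mm2(counts, areas):
    # Convert raw counts to cells/mm^2 for each sphere section.
    return [c / a for c, a in zip(counts, areas)]

control = density_per_mm2(control_counts, control_areas)
treated = density_per_mm2(treated_counts, treated_areas)

# Report mean +/- SD, as in the paper.
print(f"control: {statistics.mean(control):.0f} +/- {statistics.stdev(control):.0f} cells/mm^2")
print(f"treated: {statistics.mean(treated):.0f} +/- {statistics.stdev(treated):.0f} cells/mm^2")

# Two-tailed unpaired t-test; p < 0.05 taken as significant.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")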
The Genus Solanum: An Ethnopharmacological, Phytochemical and Biological Properties Review Over the past 30 years, the genus Solanum has received considerable attention in chemical and biological studies. Solanum is the largest genus in the family Solanaceae, comprising of about 2000 species distributed in the subtropical and tropical regions of Africa, Australia, and parts of Asia, e.g., China, India and Japan. Many of them are economically significant species. Previous phytochemical investigations on Solanum species led to the identification of steroidal saponins, steroidal alkaloids, terpenes, flavonoids, lignans, sterols, phenolic comopunds, coumarins, amongst other compounds. Many species belonging to this genus present huge range of pharmacological activities such as cytotoxicity to different tumors as breast cancer (4T1 and EMT), colorectal cancer (HCT116, HT29, and SW480), and prostate cancer (DU145) cell lines. The biological activities have been attributed to a number of steroidal saponins, steroidal alkaloids and phenols. This review features 65 phytochemically studied species of Solanum between 1990 and 2018, fetched from SciFinder, Pubmed, ScienceDirect, Wikipedia and Baidu, using “Solanum” and the species’ names as search terms (“all fields”). Distribution and Ethnopharmacological Uses Sixty-six species commonly used as important folk medicine, ornamental plants, or wild food sources were selected in this review, and their local names, distribution and ethnopharmacologi-cal uses were summarized in Table 1. Local names are given in different languages with which the inhabitants of a particular region use to identify a specific species. Each species' natural habitat and/or places of cultivation are mentioned. Traditional as well as modern day applications are presented. Steroidal Alkaloids Sixty-three steroidal alkaloids , as other principal components in Solanum were reported from this genus (Fig. 2). Compounds 139-156 are derivatives of solasodine (145), one of the main glycoalkaloid constituents in Solanum spp., even as indicated by several numbers of species from which it has been isolated. Solamargine (139) is the major steroidal alkaloid constituent of Solanum plants and literature data showed that it has been revealed in 18 species. Compounds such as 139, solasonine (142), β1-solasonine (143) and solanigroside P (156) with three sugar units and α-l-rhamnose at C-2 or a hydroxyl group on the steroidal backbone may be potential candidates for the treatment of gastric cancer [228]. Antioxidant activity of 145 and tomatidine (167) from the berries of S. aculeastrum was investigated using DPPH, ABTS and reducing power assays, and the highest inhibition was observed when the two compounds were combined, followed by 145 and 167 [13]. Furthermore, 145 exhibited significant anti-inflammatory activity at doses of 30 mg/kg, with a maximum inhibition of 77.75% in carrageenan-induced rat paw edema, comparing to indomethacin (81.69%). It also showed stronger (46.79effect in xylene induced ear edema in mice [303]. Intraperitoneal injection of 145 (25 mgkg) significantly delayed latency of hind limb tonic extensor phase in the picrotoxin-induced convulsions, and it also potentiated thiopental-provoked sleep in a dose-dependent manner [294]. Moreover, 145 exhibited not only the antibacterial activity against Klebsiella and Staphylococcus spp. 
at concentration of 1 mg, together with 139 and 141 [403], but also a potent stemness and invasion inhibitory effect on human colorectal cancer HCT116 cells [155]. Colony Spheroid formation assay showed that solasodine dosedependently prohibited HCT116 cell stemness. CD133, CD44, Nanog, Oct-4 and Sox-2 were inhibited by 145 to reverse stemness and similar mechanism was stimulated in vivo. Transwell and scratch wound assays revealed that 145 impeded HCT116 cell invasion and migration potential strengthened by TGF-β1. Moreover, solasodine attenuated TGF-β1-induced EMT and decreased MMPs while in vivo study showed the same trend. The results of this study implied that 145 may be a novel therapeutic drug for CRC treatment [155]. Burger et al. documented that the crude extract and aqueous fraction containing 139 displayed potent nonselective cytotoxicity (IC 50 15.62 μgmL) and noteworthy 9.1-fold P-glycoprotein inhibition at 100 μgmL [15]. Zhang et al. assessed the molecular mechanism underlying the anti-cancer effect of 139 in human cholangiocarcinoma QBC939 cells. The results revealed that 139 inhibited the viability of QBC939 cells in a dose-dependent manner. Furthermore, it significantly induced the apoptosis of QBC939 cells and altered the mitochondrial membrane potential of cells. Quantitative polymerase chain reaction analysis revealed that 139 decreased the mRNA level of B cell lymphoma-2 (Bcl-2) Bcl-extra-large and X-linked inhibitor of apoptosis protein but increased the mRNA level of Bcl-2-associated X protein (Bax) In addition, western blot analysis demonstrated that 139 inhibited the protein expression of Bcl-2 and poly ADP ribose polymerase (PARP) and promoted the protein expression of Bax, cleaved PARP, caspase 3, cleaved caspase 3 and caspase [97]. [407]. Moreover, 139 and solasonine (142) displayed not only leishmanicidal activity against promastigote forms of Leishmania amazonensis [185], but also antidiabetic activity by inhibiting the serum glucose increase in oral sucrose-loaded rats and suppressing gastric emptying in mice [182]. A synergistic effect was observed for a mixture of the compounds [183]. Compound 139 also expressed stronger trypanocidal activity (IC 50 = 15.3 μg/mL), when compared to benznidazol (IC 50 = 9.0 μg/mL), the only drug used to treat Chagas' disease [186]. Solanum triterpenes have indicated to possess anticancer properties. For instance, 213 presented significant activity against KB-Oral cavity cancer (IC 50 = 26.73 μgmL) [297], while 213 exhibited selective activity against lung tumor cell line (NCIH460). The anti-nociceptive activity observed for 213 and 214 was found to be related to the inhibition of different mediators involved in inflammation and nociceptive process. Both compounds decreased cyclooxygenase 2 (COX-2) protein expression, although only 214 reached a significant response (P < 0.05 vs control) [107]. Lignans Lignans, widely distributed in the plant kingdom, are a family of secondary metabolites produced by oxidative dimerization of two phenylpropanoid units. Although their molecular scaffold consists only of two phenylpropane (C6-C3) units, lignans exhibit an enormous structural diversity originating from various linkage patterns of these phenylpropane units. As the C-8-C-3′/C-7-O-C-4′ linked lignans containing two chiral centers (C-7 and C-8) comprise the core of 2, 3-dihydrobenzo[b]furan [480]. Coumarinolignoids Four coumarinolignoids known as indicumines A-D (614-617) were obtained from the seeds of S. indicum [535] (Fig. 14). 
Coumarinolignoids, including cleomiscosins, aquillochins and malloapelins, are unique and rare in nature. Coumarinolignoids of the cleomiscosins type bearing cleomiscosins A-D, 8-epi-cleomiscosin A, and malloapeli A functionalities have been identified in a few genera, including Cleome viscosa, Mallotus apelta, and Rhododendron collettianum. The compounds with such functionalities, especially cleomiscosins A-C and 8-epicleomiscosin A, which contributed to biological activities, have been reported with hepatoprotective and tyrosinase inhibition activities [535]. The genus Solanum seems to possess great potential, yet majority of the species remain unknown or scantily studied for the chemical constituents. It would be very necessary for the phytochemistry researchers to explore and investigate more of its species. The vast pharmacological activities envinced by many compounds from Solanum genus should attract the attention of the pharmacological community to determine their exact target sites, structure-activity relationships and other medicinal applications. Compliance with Ethical Standards Conflict of interest The authors declare no conflict of interest. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creat iveco mmons .org/licen ses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Mechanistic Insight into the Early Stages of Toroidal Pore Formation by the Antimicrobial Peptide Smp24 The antimicrobial peptide Smp24, originally derived from the venom of Scorpio maurus palmatus, is a promising candidate for further drug development. However, before doing so, greater insight into the mechanism of action is needed to construct a reliable structure–activity relationship. The aim of this study was to specifically investigate the critical early stages of peptide-induced membrane disruption. Single-channel current traces were obtained via planar patch-clamp electrophysiology, with multiple types of pore-forming events observed, unlike those expected from the traditional, more rigid mechanistic models. To better understand the molecular-level structures of the peptide-pore assemblies underlying these observed conductance events, molecular dynamics simulations were used to investigate the peptide structure and orientation both before and during pore formation. The transition of the peptides to transmembrane-like states within disordered toroidal pores occurred due to a peptide-induced bilayer-leaflet asymmetry, explaining why pore stabilization does not always follow pore nucleation in the experimental observations. To fully grasp the structure–activity relationship of antimicrobial peptides, a more nuanced view of the complex and dynamic mechanistic behaviour must be adopted. Introduction The increasing levels of antimicrobial resistance of many pathogenic microorganisms have led to a crucial need for the development of new alternative antimicrobial drugs with novel mechanisms of action [1].Antimicrobial peptides (AMPs) have been highlighted multiple times as a class of drug that could play a key role in tackling the increasing difficulties encountered in treating microbial infections [1,2].With a unique mechanism of action targeting the cell membrane as the main point of attack, in addition to a variety of other potential targets, resistance development towards AMPs is less prevalent compared to traditional antibiotics [3].While AMPs are a rich and diverse group of compounds, they are most commonly amphiphilic and alpha-helical in structure with an overall positive charge.Their main mechanism of action involves the disruption of the bacterial cell membrane(s), causing leakage of the internal cellular components and eventual cell death.However, the specifics of the mechanism behind the disruption are still highly debated.While the three traditional mechanisms of actions (barrel-stave, toroidal pore and carpet mechanism) are often still described [4], new alternative mechanisms are being proposed focusing less on the AMPs acting like ion channels by forming distinct pores and more on them acting as bio-detergents in a dynamic relationship with the bacterial membrane(s) [5,6]. 
Venoms have been found to be a rich reservoir of naturally occurring AMPs, with one example being the venom of the Egyptian scorpion Scorpio maurus palmatus. Multiple AMPs have been found in this venom, including the 24-residue peptide Smp24 (IWSFLIKAATKLLPSLFGGGKKDS). This peptide has previously been shown to have broad antimicrobial activity against a range of clinically relevant bacterial species [7,8]. While unstructured in water, the peptide adopts a majority helical structure in a 60% trifluoroethanol solution, which is typical of amphiphilic AMPs [7]. Interestingly, when comparing the primary sequence of Smp24 with similar AMPs, another possible structural motif can be identified. Like Smp24, melittin [9], magainin 2 [10], brevinin-1EMa [11], piscidin 1+3 [12] and gaegurin P14 [13] all have a proline or glycine residue positioned around the middle of the sequence, which causes a kink in the helical region; a similar kink is likely present in Smp24 due to the proline residue at position 14 [7]. The mechanism of action of Smp24 has previously been investigated using several different biophysical techniques, such as atomic force microscopy (AFM), quartz crystal microbalance with dissipation monitoring (QCM-D) and liposomal leakage assays, showing a clear dependency on the specific lipid composition [14]. Further study of how the peptide interacts with the membranes on a molecular level could help in establishing an improved structure-activity relationship, allowing the effective employment of a rational structure-based drug design strategy. Advances in planar patch-clamp equipment, such as the Port-a-Patch by Nanion (Munich, Germany), allow for improved in vitro investigation of peptide-induced pore formation at the level of a single pore [15,16]. By measuring the peptide-induced change in conductance across a bilayer, information can be obtained related to the molecular-level structure and behaviour of the pores. However, to fully understand the structure-mechanism relationship of the peptides, knowledge about the pore structure by itself is not enough. The structure of the peptides themselves, and how this structure allows them to take part in and stabilise the overall structure of the pores, must also be considered to allow for better design of new peptides in the future. Molecular dynamics (MD) simulations have been shown to be an extremely useful tool for in silico investigations of the biophysical properties of antimicrobial peptides at a molecular level [17][18][19]. MD simulations have previously been used to explore several different individual stages of the mechanism of action of AMPs, such as their mechanism of insertion into lipid bilayers [20,21], their position and interactions once they are fully inserted into the bilayer [12,19], and their configuration when associated with a membrane pore structure [22][23][24]. In this study, we have utilised planar patch-clamp electrophysiology to investigate the early stages of peptide-induced membrane disruption and pore formation caused by Smp24 and contextualised these observations using MD simulations, to better explain the underlying biophysical phenomena leading to the membrane disruption and how the specific structure of the peptides relates to the stabilization of the peptide-pore assemblies. Materials and Methods All reagents and materials were purchased from Sigma-Aldrich (Gillingham, UK) unless otherwise stated.
Patch Clamp Giant unilamellar vesicles (GUVs) for the patch-clamp experiments were produced using the Vesicle Prep Pro from Nanion (Munich, Germany). A phospholipid mixture of 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) and 1,2-dioleoyl-sn-glycero-3-phospho-(1′-rac-glycerol) (DOPG) (3 mg/mL, 1:1 ratio) in chloroform was placed on an ITO slide and allowed to fully dry. Electroformation was performed in the presence of a 1 M sorbitol solution using a stepwise protocol. At 37 °C and an amplitude of 5 Hz, the voltage was raised to 3 V over 5 min and then held there for 200 min. The voltage was then lowered to 0 over 3 min and the GUVs were harvested, stored at 4 °C and used within 4 days. Electrophysiology experiments were performed on planar lipid bilayers using Nanion's (Munich, Germany) Port-a-Patch planar patch-clamp setup. The bilayers were formed at pH 4 (200 mM KCl, 10 mM HEPES) on a 3-5 MΩ borosilicate chip (Nanion, Munich, Germany) by applying 10-30 mbar negative pressure until a GΩ seal was achieved. The buffer was exchanged to pH 7 (200 mM KCl, 10 mM HEPES) and the current was measured at a holding potential of +60 mV for five minutes to ensure the stability of the bilayer. Finally, different concentrations of Smp24 diluted in the pH 7 buffer were added (active concentrations of 2.29-7.76 µM) and the current trace was recorded using a HEKA EPC 10 amplifier (Heka Elektronik, Lambrecht, Germany), with 5 repeats for each peptide concentration. Molecular Dynamics Simulations To investigate the structure of Smp24 and its interplay with a lipid bilayer through different stages of the mechanism of action, three stages of MD simulations were performed. All simulations were performed using the GROMACS 2020.2-4 packages, using the leap-frog integrator with a 2 fs timestep; hydrogen-bond constraints were applied using the LINCS algorithm; van der Waals and short-range electrostatic cut-offs were 1.2 nm; a Nosé-Hoover thermostat was used for temperature control and the Parrinello-Rahman barostat for pressure control with semi-isotropic conditions. Unless otherwise specified, all production runs were performed in the NPT ensemble at 293 K and 1 atm. The starting 3D structure of Smp24 used in all models was generated using the Pepfold3 server [25]. The simulations were analysed using VMD [26] and GROMACS [27] with the FATSLiM addon [28]. Single Peptide Bilayer Models To investigate the basic structure of the bilayer-associated peptide, a simple single-peptide bilayer model was designed. The base bilayer model was built using CHARMM-GUI [29], with both the lipid and peptide topology described using the CHARMM36m force field. The bilayer was made using 72 DOPG and 72 DOPC lipids (approximately 7 nm² in size). A 3 nm TIP3P water layer was added to each side of the bilayer, with a single peptide placed in the centre of the xy plane, approximately 1.5 nm above the bilayer, with the helix parallel to the bilayer. Potassium or chloride ions were added to neutralise the system. Following an energy minimisation, the models were equilibrated in 3 steps. Steps 1 and 2 were 100 ps in the NVT and NPT ensembles, respectively, followed by a longer 900 ps NPT simulation. Positional restraints were applied to the peptide in all steps and to the lipids in steps 1 and 2. Following the equilibration, 3 replica production simulations were performed, ranging from 500 to 1000 ns in length, to account for the variation in the time taken for the peptide to fully insert within the bilayer.
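The force-field and run settings quoted above correspond to a handful of standard GROMACS .mdp options. Purely as an illustration (these are not the authors' actual input files, and values such as the coupling time constants, compressibility and run length are assumptions), a production parameter file consistent with the stated conditions could be generated as follows:

# Minimal sketch (not the study's actual input): write a GROMACS production
# .mdp consistent with the settings described above. tau-t, tau-p, the
# compressibility and the run length are assumed, typical values.
mdp_text = """
integrator            = md                ; leap-frog integrator
dt                    = 0.002             ; 2 fs timestep
nsteps                = 125000000         ; 250 ns (assumed run length)
constraints           = h-bonds           ; hydrogen-bond constraints
constraint-algorithm  = lincs
cutoff-scheme         = Verlet
rvdw                  = 1.2               ; van der Waals cut-off (nm)
rcoulomb              = 1.2               ; short-range electrostatics cut-off (nm)
coulombtype           = PME
tcoupl                = nose-hoover
tc-grps               = System
tau-t                 = 1.0               ; assumed coupling constant (ps)
ref-t                 = 293
pcoupl                = Parrinello-Rahman
pcoupltype            = semiisotropic
tau-p                 = 5.0               ; assumed coupling constant (ps)
ref-p                 = 1.0 1.0           ; ~1 atm laterally and normally
compressibility       = 4.5e-5 4.5e-5
"""

with open("production.mdp", "w") as handle:
    handle.write(mdp_text)

The file written by this sketch would then be passed to gmx grompp/mdrun in the usual way; the equilibration stages described above would use analogous files with position restraints enabled.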
Multi-Peptide Bilayer Models To investigate the effects of peptide insertion on the properties of the bilayer, several multi-peptide bilayer models were made. The starting points for these models were created by performing single-peptide bilayer simulations as described above, except that smaller 3.3 nm² or 4 nm² bilayers were used. Once full peptide insertion was achieved, the models were multiplied in a 4 × 4 or 3 × 3 matrix in the x and y directions using the gmx genconf function, following a similar approach to Chen et al. [30]. This approach allows for the rapid production of bilayer models with several inserted peptides and ensures that all of them are inserted in the same bilayer leaflet, but leaves the peptides at an artificial, symmetric distance from each other. Therefore, prior to analysis, the larger models were first equilibrated by simulating at an elevated temperature (323 K) for 500 ns to facilitate rapid diffusion and mixing of the peptides in the bilayer plane. Lastly, models were simulated for 250 ns at the normal temperature (293 K), with the trajectories from this simulation used for analysis. Peptide-Pore Models To investigate the structure, position and orientation of Smp24 while associated with a pore, toroidal pores were induced into the endpoint of the previously described peptide-membrane models via electroporation. This was performed by applying a 0.3 V/nm electric field tangential to the bilayer. After a consistent pore had formed, the strength of the electric field was lowered to 0.065 V/nm and the x-y compression was set to 0. This allowed for the modelling of a stable pore using both a 7 nm² bilayer with a single peptide inserted (3 repeats, 500 ns in length) and a 14.2 nm² bilayer with 16 peptides inserted (5 repeats, 100 ns in length). A more detailed protocol can be found in Supplementary Materials S1. Statistical Analysis The patch clamp-derived pore formation kinetics for Smp24 were analysed using unpaired two-sided t-tests with independent spread. An α value of 0.05 was used for identification of significant differences. Evaluation of Peptide-Induced Membrane Disruption Using Patch-Clamp Electrophysiology To investigate the pore formation of Smp24, planar patch clamp using 1:1 DOPC:DOPG bilayers was performed at five different peptide concentrations, with five repeats of each concentration. The bilayer composition was chosen to broadly mimic the negative charge of the bacterial (inner) membrane [31], while the unsaturated lipid chains allowed the bilayer to remain in the liquid-phase state at room temperature. Due to the broad spectrum of activity of Smp24 [7], further customisation of the lipid composition to mimic a specific bacterial species was not deemed necessary to achieve a reasonable degree of representativity, at least as to what could be expected from a pure synthetic membrane model. The kinetic aspects of the pore formation were analysed based on the time it took for the first conductance event to occur and the time it took between the peptide addition and the complete destruction of the bilayer. In 64% of the experimental runs, the lag time between the peptide addition and the occurrence of the first conductance event was 20 s or less. However, at peptide concentrations below 4.85 µM, longer lag periods (1-26 min) started to occur, although not consistently for all repeats (Figure 1A).
Similarly, highly variable lag times between peptide addition and the complete destruction of the bilayers were also seen. At high peptide concentrations, the bilayers were often destroyed within the first 5 min of peptide addition, whereas at concentrations below 3.9 µM, a large increase in the variation of destruction time was seen, and the average kinetics were slowed (Figure 1B). Furthermore, for 40% of the experimental runs at the two lowest peptide concentrations (2.91-3.88 µM), the bilayer was still intact after 30 min, with half of those experiments resulting in no membrane disruption at all. To further evaluate the individual membrane disruptive events induced by the Smp24 peptide, the current traces from the individual experiments were qualitatively analysed. Multiple different types of event signatures were found throughout most runs, which could broadly be categorized into three distinct event types, multilevel, spike and erratic, based on Chui et al. [32].
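Before the three event types are described individually, note that the segmentation of a raw current trace into discrete conductance events, and the comparison of lag times between concentrations, can be reduced to simple thresholding and an unpaired Welch-style t-test. The sketch below is illustrative only, since the authors do not spell out their exact analysis pipeline; the threshold, the sampling rate and the example lag-time values are all assumptions:

import numpy as np
from scipy import stats

def detect_events(current_pa, fs_hz, baseline_pa=0.0, threshold_pa=2.0):
    """Segment a current trace (pA) into events where the current deviates
    from baseline by more than threshold_pa. Returns (start, end) times in s."""
    above = np.abs(current_pa - baseline_pa) > threshold_pa
    edges = np.diff(above.astype(int))          # rising/falling edges of the mask
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    return [(s / fs_hz, e / fs_hz) for s, e in zip(starts, ends)]

def lag_to_first_event(events):
    """Lag time (s) from the start of the recording to the first event."""
    return events[0][0] if events else np.nan

# Example: compare lag times between two peptide concentrations (5 repeats each)
# with an unpaired two-sided Welch t-test, mirroring the alpha = 0.05 criterion.
lags_high = np.array([12.0, 8.0, 20.0, 15.0, 10.0])    # illustrative values (s)
lags_low = np.array([45.0, 300.0, 18.0, 900.0, 60.0])  # illustrative values (s)
t_stat, p_value = stats.ttest_ind(lags_high, lags_low, equal_var=False)
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.3f}, significant: {p_value < 0.05}")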
Multilevel events most closely represent what would be expected for a distinct pore. The start and end of the signal are both sharp transitions from the baseline. During the event, a sustained but highly variable increase in conductance could be observed, lasting from 100 ms to around 3 s (Figure 2A). For longer events of this type, an average conductance level could be estimated using amplitude histograms fitted with a Gaussian distribution function. However, the spread of the current distribution corresponding to the multilevel event was much greater than that of the baseline. Comparing the average current measured between different multilevel events also showed a large distribution of values, ranging from 2.8 to 18.3 pA. Like multilevel events, spike events (Figure 2B) also have a clear transition to and from the baseline, although the length of the event is much shorter (<50 ms). Again, no consistent average/maximum current level could be found when comparing different individual events, even within a single run. Spike events were observed both as lone events and as multiples with a short time gap between them. Erratic events (Figure 2C) were long (often multiple seconds) but with a relatively low and very variable conductance level. Unlike the other event types, they did not have a very clear beginning or end, but rather the conductance gradually increased or decreased. Over the lifetime of the event, the conductance level could shift multiple times and often return to a partial baseline with increased noise and conductance in between. Due to the more gradual conductance evolution, the amplitude histograms did not show an independent conductance level for the event, but rather a broadening of the baseline peak. Multilevel, spike and erratic event types were found at all peptide concentrations; however, as with the kinetic data, a large variation in the number of events was seen within the same peptide concentration. Therefore, no consistent relationship between the peptide concentration and the likelihood of a specific event type occurring could be found. However, in general, across all the peptide concentrations, spike events were the most likely to occur, followed by erratic events and lastly multilevel events. Investigation of Interactions between Smp24 and Lipid Bilayers Using MD Simulation In order to contextualize the different conductance event types on a molecular level, MD simulations of Smp24 in association with the lipid bilayer were performed. However, while the patch-clamp experiments can only provide information related to the peptide-bilayer interactions after the formation of a pore has occurred, MD simulation can also predict the behaviours leading up to these events. To establish a baseline for the behaviour and structure of the peptide, simulations of the interactions between Smp24 and phospholipid bilayers were first performed, mimicking the conditions prior to pore formation.
Insertion of Smp24 into Bilayers In all simulations, the peptide followed a consistent mechanism of insertion into the negatively charged bilayer that can be separated into multiple distinct steps. Following an initial lag period with some sporadic electrostatic interactions between the peptide and the bilayer, the first consistent step seen in all the simulations is the anchoring of the N-terminal region to the bilayer (Figure 3B). This interaction is initially driven by electrostatic interactions between the N-terminal amine and the lipid phosphate groups and then, following a short delay, supported by further electrostatic interactions between the bilayer and the two lysine residues (lys7 and lys11) positioned in the helical part of the peptide (Figure 3A). In this position/orientation, most of the hydrophobic residues were orientated facing away from the bilayer except for the sidechains of the N-terminal ile1 and phe4, so hydrophobic interactions were limited to those residues. Due to the position of the helical lysine residues, the helical region of the peptide was orientated with a tilt angle relative to the bilayer normal of 125-140 degrees, which inhibits interactions between the latter half of the peptide and the bilayer (Figure 3C). The length of this stage of the insertion varied from a few ns to hundreds of ns, likely dependent on how consistent the lysine-phosphate interactions were. The next stage of the insertion is defined by a major rotation of the helical region of the peptide, changing the orientation of the hydrophobic residues to facing down towards the core of the bilayer (Figure 3D). This rotation also drove further changes to the peptide orientation, such as a reduction in the tilt angle and a change in the overall position of the peptide's centre of mass, bringing it closer to the core of the bilayer. These processes were not instant, taking around 50 ns or more from the beginning of the rotation until the peptide reached a stable orientation and insertion depth (between 4.2 × 10⁵ ps and 5.5 × 10⁵ ps for the repeat shown in Figure 3).
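The two order parameters tracked in Figure 3B,C, the peptide position along the bilayer normal and the tilt of the helical region, reduce to simple vector arithmetic once the relevant coordinates have been extracted from the trajectory. A minimal numpy sketch, assuming the coordinate arrays are already available (for example via gmx traj or MDAnalysis), is shown below:

import numpy as np

def insertion_depth(peptide_xyz_nm, top_phosphate_xyz_nm):
    """Peptide z position relative to the mean z of the top-leaflet phosphorus
    atoms (negative values = below the phosphate plane). A plain geometric mean
    is used instead of a mass-weighted centre of mass for brevity."""
    return peptide_xyz_nm[:, 2].mean() - top_phosphate_xyz_nm[:, 2].mean()

def helix_tilt_deg(ca_xyz_nm):
    """Tilt of the helical region (e.g. C-alpha atoms of residues 1-12) relative
    to the bilayer normal, taken here as the z axis. The helix axis is crudely
    approximated by the vector from the first to the last C-alpha atom."""
    axis = ca_xyz_nm[-1] - ca_xyz_nm[0]
    axis = axis / np.linalg.norm(axis)
    cos_theta = np.clip(np.dot(axis, [0.0, 0.0, 1.0]), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Toy helix pointing partly along -z: gives 135 degrees, i.e. within the
# 125-140 degree range reported for the N-terminal anchored stage.
ca = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.5], [1.0, 0.0, 0.0]])
print(f"tilt = {helix_tilt_deg(ca):.1f} degrees")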
Structure, Orientation and Position of the Inserted Smp24 Following the full insertion of a single Smp24 peptide into a bilayer, a relatively consistent 3D structure was observed for the peptide in all repeat simulations (Figure 4A). For further analysis, this structure was separated into several structural/functional regions with distinct positional distributions relative to the bilayer leaflet (Figure 4B). The peptide adopted a helical structure between residues 1 and 17, which could be divided into two distinct regions separated by a kink induced by the proline residue at position 14. The primary helical region (r1-13) was the part of the peptide that was inserted the deepest into the bilayer, with its position overlapping with the lipid glycerol esters and the top of the acyl chains. It was orientated in parallel with the bilayer surface with an average tilt of between 92 and 102 degrees. The secondary helical region (r14-17) was positioned at around the same level, although it varied somewhat within each simulation as the direction of the bend is flexible. Towards the C-terminal end, the peptide adopted a random coil structure which again could be separated into two regions. The last four residues are all polar or charged and can be combined as a tail region. These residues were positioned almost exclusively amongst the headgroups of the lipids, although with a high degree of positional flexibility, as seen with the broad partial density peak and much higher RMSF compared to the rest of the peptide (Figure 4C). The elevated positioning of the tail region is possible due to three glycine residues (r18-20) that serve as a linker region between the secondary helix and the tail. Concentration-Dependent Effects on Bilayer Properties To evaluate if the peptide insertion affected the biophysical properties of the bilayer, simulations were performed at different peptide-to-lipid ratios.
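The bilayer descriptors compared below (area per lipid, bilayer thickness, the deuterium order parameter Scd and the lateral diffusion constant) were obtained with FATSLiM and GROMACS in this study, but their definitions are simple enough to sketch directly in numpy. The following is only an illustration of those definitions, assuming per-frame coordinates and mean-square-displacement curves have already been extracted; it is not the analysis code used here:

import numpy as np

def area_per_lipid(box_x_nm, box_y_nm, n_lipids_per_leaflet):
    """Area per lipid (nm^2) from the lateral box dimensions."""
    return (box_x_nm * box_y_nm) / n_lipids_per_leaflet

def bilayer_thickness(top_phosphate_z_nm, bottom_phosphate_z_nm):
    """Phosphate-to-phosphate thickness (nm) from per-atom z coordinates."""
    return np.mean(top_phosphate_z_nm) - np.mean(bottom_phosphate_z_nm)

def order_parameter_scd(ch_vectors):
    """Deuterium order parameter S_CD = <(3 cos^2 theta - 1)/2>, with theta the
    angle between each C-H bond vector and the bilayer normal (taken as z)."""
    cos_theta = ch_vectors[:, 2] / np.linalg.norm(ch_vectors, axis=1)
    return np.mean(1.5 * cos_theta**2 - 0.5)

def lateral_diffusion(time_ns, msd_nm2):
    """Lateral diffusion constant D (nm^2/ns) from the linear part of the mean
    square displacement, using MSD = 4*D*t in two dimensions."""
    slope, _ = np.polyfit(time_ns, msd_nm2, 1)
    return slope / 4.0

# Toy numbers roughly in the range of the 144-lipid systems described above:
print(area_per_lipid(6.9, 6.9, 72))                              # ~0.66 nm^2
print(bilayer_thickness([1.9, 2.0, 2.1], [-1.9, -2.0, -2.1]))    # ~4.0 nm
print(lateral_diffusion(np.arange(0.0, 100.0, 10.0),
                        0.02 * np.arange(0.0, 100.0, 10.0)))      # 0.005 nm^2/ns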
The area per lipid and bilayer thickness were compared between a bilayer-only simulation and three simulations with peptides inserted at different peptide-to-lipid ratios (Table 1). Both aspects seemed to be dependent on the peptide-to-lipid ratio. As the number of peptides increased, the area per lipid also increased while the bilayer thickness decreased. The deuterium order parameters of the lipid chains (Scd) were also investigated to evaluate if the inserted peptides affected the lipid order (Figure S4). A small increase in the order could be seen for the top of the lipid chain, but the main effect was a concentration-dependent reduction in the order for the lower half of the lipid chains between C13 and C18 (Table 1). No differences could be seen between the bilayer-only simulation and the lowest peptide-to-lipid ratio (less than 1% difference); however, as more peptides were inserted into the bilayer, the lipid order started to be affected (6.4 to 12.9% reduction in the Scd). Additionally, the lateral diffusion constants for the peptides were also calculated for the simulations using the linear part of their mean square displacement. Again, a concentration-dependent effect could be seen, with the ability of the peptides to freely diffuse around the bilayer plane being inhibited as the number of inserted peptides increased. Modelling of the Pore-Associated Peptide Configurations To investigate how Smp24 might associate with a membrane pore and which configurations the peptide could adopt in order to stabilise the pore structure, pores were created in bilayer models with Smp24 already inserted. This was carried out by applying a strong electric field across the bilayer, leading to pore formation via electroporation. Once a small pore had been created, the strength of the electric field was reduced which, together with locking the expansion of the bilayer in the x and y direction, resulted in a dynamic but stable pore, with a pore lumen diameter of around 3 nm. Three repeat simulations were performed, each with a different starting point for the pore position relative to the position of the single Smp24 peptide. Throughout these simulations, two different pore-associated peptide configurations were observed, where in both cases the peptide was positioned at the interface between the pore and the rest of the bilayer. However, at no point did the peptide insert fully into the pore lumen as would be expected for a true transmembrane peptide configuration. In the first configuration (Figure 5A), the peptide was orientated with the N-terminal towards the centre of the pore. The primary helical region was positioned at the pore-bilayer interface and thus, to follow the higher curvature of the bilayer in this region, it adopted a higher tilt angle (120 degrees relative to the bilayer normal) compared with the normal orientation of the helix in the non-pore bilayer. The secondary helix was positioned further away from the pore where the bilayer curvature was less extreme and thus adopted a lower tilt angle (100 degrees vs. bilayer normal), more in line with the normal orientation. The Trp2 residue was inserted the deepest within the pore, with an average position around 0.6 nm above the bilayer centre.
In the second configuration (Figure 5B), the helical regions of the peptide were positioned in reverse. As such, the secondary helical region was now orientated towards the centre of the pore. Therefore, it was the secondary helix that adopted the more tilted state (65 degrees vs. bilayer normal), while the primary helix was only tilted slightly (84 degrees vs. bilayer normal) compared with the normal orientation. In this configuration, the leu16 residue was positioned the deepest, at an average position of around 0.85 nm above the bilayer centre. In both cases, the tail region seemed to behave relatively independently of the membrane pore, while the helical regions facilitated the interactions between the peptide and the pore. Throughout the three simulations (combined 1.5 µs simulation time), the peptide spent approximately 150 ns in the A configuration, 720 ns in the B configuration and 630 ns in a configuration not associated with the pore. The A configuration only occurred in the one simulation where the pore nucleation site was close to the N-terminal of the peptide, whereas the peptide transitioned to the B configuration at some point in all three simulations.
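The occupancy figures above (roughly 150 ns in A, 720 ns in B and 630 ns unassociated out of 1.5 µs) imply a per-frame assignment of the peptide to one of three states. One possible way to automate such an assignment, given purely as an illustration rather than the criterion actually used in the study, is to compare the lateral distances of the two helical segments from the pore axis:

import numpy as np

def classify_frame(pore_xy, primary_helix_com, secondary_helix_com,
                   peptide_com, assoc_cutoff_nm=2.5):
    """Assign one frame to 'A', 'B' or 'unassociated'.
    'A': the N-terminal (primary) helix points towards the pore axis.
    'B': the C-terminal (secondary) helix points towards the pore axis.
    The 2.5 nm association cut-off is an arbitrary illustrative choice."""
    if np.linalg.norm(peptide_com[:2] - pore_xy) > assoc_cutoff_nm:
        return "unassociated"
    d_primary = np.linalg.norm(primary_helix_com[:2] - pore_xy)
    d_secondary = np.linalg.norm(secondary_helix_com[:2] - pore_xy)
    return "A" if d_primary < d_secondary else "B"

def occupancy_ns(labels, frame_spacing_ns):
    """Total time spent in each state given per-frame labels."""
    labels = np.asarray(labels)
    return {state: np.count_nonzero(labels == state) * frame_spacing_ns
            for state in ("A", "B", "unassociated")}

# Toy example: one frame classified, then occupancy over 5 frames (100 ns apart).
pore = np.array([0.0, 0.0])
label = classify_frame(pore, primary_helix_com=np.array([1.0, 0.2, 0.5]),
                       secondary_helix_com=np.array([2.2, 0.3, 0.6]),
                       peptide_com=np.array([1.6, 0.2, 0.55]))
print(label)                                             # 'A'
print(occupancy_ns(["unassociated", "B", "B", "A", "B"], 100.0))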
As no transmembrane configuration was observed throughout the simulations, a centre of mass pull function was applied to artificially position the peptide deeper within the pore, to evaluate the feasibility of a more deeply inserted configuration. Starting from either of the pore-associated configurations, the peptide was pulled towards the centre of the pore in both the x and z direction. Three different configurations were generated from each starting point with the peptide inserted into the pore to increasing degrees. However, following 50 ns of simulation, the peptide had in all cases either returned to the original interface-associated configuration or moved completely away from the pore, thus indicating that a more deeply inserted state is not favourable under the applied conditions (Figure S5). To investigate if a change in the peptide-to-lipid ratio and the presence of multiple peptides could affect the pore-associated configurations, pores were induced into the 16-peptide model described earlier. Electric field conditions similar to those in the single-peptide simulations were used. However, the larger size of the bilayer meant that the pore also grew to a larger size, with a diameter of around 5 nm. As with the single peptide simulations, most of the peptides adopted either a configuration associated with the pore interface or one completely independent of the pore. The number of peptides associated with the pore at the endpoint of each simulation ranged from four to five (Figure 6F), with 33% being in the A configuration, 58% in the B configuration and 8% adopting a more sideways-facing configuration. However, while the peptide configurations in the multi-peptide setup broadly aligned with what was observed for the single peptide, in some cases the specific positions and orientations of individual peptides were more extreme. In all the simulations, one to three of the peptides adopted a more deeply inserted position with part of the peptide reaching below the centre of the bilayer (Figure 6A-E). Both types of pore-associated configurations were observed to be able to adapt to this more deeply inserted state. However, to accommodate the new positions of the peptides relative to the curvature of the bilayer, increases in the tilt angles were observed. The A configuration (Figure 6A,B) was not dissimilar to the single peptide configuration, although the tilt angle increased by a further 20-30 degrees. However, for the B configuration (Figure 6C,D), the deeper insertion meant that the primary helix was now also placed within the pore lumen and thus had to adopt a much steeper tilt angle (40-70 degrees) in order to align with the bilayer curvature. Even though these configurations are more extreme, the peptides were still mainly associated with the top half of the pore and thus a true transmembrane configuration was not observed.
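The lipid translocation analysis reported in the next paragraph (Figure 7) essentially reduces to assigning every lipid to a leaflet in each frame and tallying the per-leaflet counts of DOPC and DOPG over time. A minimal sketch of that bookkeeping, assuming the headgroup z coordinates have already been extracted and ignoring lipids transiently located inside the pore lumen, is:

import numpy as np

def leaflet_counts(headgroup_z, lipid_names, bilayer_centre_z=0.0):
    """Count lipids of each type in the bottom leaflet for one frame.
    A lipid is assigned to the bottom leaflet if its headgroup sits below
    the bilayer centre; lipids inside the pore may be misassigned, which a
    production analysis would need to handle separately."""
    bottom = headgroup_z < bilayer_centre_z
    counts = {}
    for name in set(lipid_names):
        mask = np.array([n == name for n in lipid_names])
        counts[name] = int(np.count_nonzero(bottom & mask))
    return counts

# Toy frame: 6 lipids, three of which sit below the bilayer centre.
z = np.array([2.0, 2.1, -2.0, 1.9, -2.1, -1.9])
names = ["DOPC", "DOPC", "DOPC", "DOPG", "DOPG", "DOPG"]
print(leaflet_counts(z, names))   # {'DOPC': 1, 'DOPG': 2}

Repeating this per frame and averaging over repeats gives the kind of leaflet-density time series plotted in Figure 7.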
To investigate how the increased peptide-to-lipid ratio affected the properties of the bilayer in the membrane pore simulations, the movement of lipids between the upper and lower leaflets was quantified. Due to the presence of the membrane pore, the lipids could relatively easily translocate between the leaflets without the need for a slow lipid flip-flop process to occur. The unidirectional electric field applied in these simulations caused the negatively charged DOPG to move from the top leaflet to the bottom, which can be seen occurring over time in the simulations with a low peptide-to-lipid ratio (Figure 7A). However, in these simulations, the translocation of the DOPG lipids was counterbalanced by an equal but opposite movement of the neutral DOPC lipids, leaving the lipid density of the bottom leaflet relatively constant over time. This was not the case for the simulations with a high peptide-to-lipid ratio. The demixing of the DOPG and DOPC lipids still occurred; however, an overall net movement of lipids away from the top leaflet, which had the peptides inserted, was observed, leading to an increased combined lipid density in the bottom leaflet (Figure 7B). Experimental Investigation of the Mechanisms of Pore Formation of Smp24 The mechanism by which AMPs induce the formation and stabilisation of membrane pores is complex. Several different models have been proposed to explain the underlying molecular-level peptide-pore structures which facilitate membrane disruption, the main mechanism by which antimicrobial peptides exert their antimicrobial properties. As previously mentioned, the three traditional models (barrel stave, toroidal pore and carpet) are the models most frequently described, although many others have been proposed [4]. Previously, the pore formation induced by Smp24 was investigated using atomic force microscopy (AFM), showing the formation of stable, differently sized circular pores with an average diameter of 80 ± 40 nm after 30 min of incubation [14]. The presence of distinct pores of greatly varying sizes indicates that, at least under these conditions, the pore structure most closely aligns with the toroidal pore model [14]. However, the structure of these large mature pores does not necessarily correlate with the structure at the beginning of the pore formation process. Investigating the pore structure during these early stages of the disruption process may make it easier to correlate the structure of the individual peptide with the mechanism of pore formation and stabilisation, as fewer peptides likely take part in the peptide-pore assembly.
Single-channel-level patch-clamp investigations of peptide-induced pore formation enable the measurement of the increase in conductance across a membrane due to this membrane disruption. These current spikes are directly correlated to molecular-level changes to the structure of the bilayer and can therefore give insight into the transient structure of these pores in the early stages of the membrane disruption. The current traces from the patch-clamp experiments indicate that, as seen with the mature pores, Smp24 does not create ordered peptide-pore assemblies which would yield consistent and repeatable conductance levels. Instead, the peptide produces a range of different event types which often have a variable conductance level both within each individual event and within the same event-type category. This indicates that the structure and size of the pores corresponding to the individual events also range widely and therefore the molecular-level structure of the peptide-pore assembly is likely disordered in nature. The three different conductance-event types observed also indicate that multiple different biophysical mechanisms underlie the membrane disruption at these early stages of the activity, rather than just toroidal pores as observed for the mature pores. This observation that a variety of different event types can be seen for a single peptide is not uncommon for AMPs. While some examples of pore formation result in consistent and distinct conductance levels leading to square-top or flickering current signatures [33,34], they are not always consistent between different studies [35]. Many AMPs show either erratic behaviour or a combination of different event types like Smp24 [35][36][37][38][39]. One particularly interesting example is the cyclic beta-sheet AMP gramicidin S. While having very little in common with Smp24 both structurally and in origin, the conductance behaviour of the two peptides is remarkably similar [40]. Due to the structural dissimilarity between the two peptides, this again indicates that the disruption occurs in a more disordered manner rather than by the formation of distinct structural assemblies. The multitude of different event types does not correlate well with the three traditional models for pore formation. However, alternative models such as the SMART model put greater emphasis on the mechanism of action being dependent not only on the structure of the peptide but also on the other conditions of the specific system, such as the peptide concentration and the properties of the bilayer [5]. Such models better account for an individual peptide acting via multiple competing mechanisms of action, depending on the local conditions during a specific membrane-disruption event.
However, even if disordered, there must still be some correlation between the structure of Smp24 and the range of molecular-level behaviours leading to the occurrence of the conductance events. The distinct start and end points of the spike and multilevel events suggest that these events are pore-like in nature and must somehow be induced and stabilised by the presence of the peptide in the bilayer. In order to formulate more accurate models for these individual categories of membrane disruption, the specific structure of Smp24 and how it correlates with the orientation of the peptide must be considered, but the transient changes in the peptide orientations necessary to explain these very short-lived, early-stage pore-forming events cannot readily be investigated using traditional experimental techniques. However, with a nanosecond time scale and atomic resolution, MD simulations can at least provide a theoretically based prediction as to how the peptide might behave under conditions mimicking those observed in the patch-clamp experiments. While the simulation of spontaneous AMP-induced pore formation is currently not feasible, the behaviour of the peptides in a pore-like environment can still be investigated, such as by manually inducing the formation of a toroidal pore using electroporation. This approach is especially useful for investigating the structure, orientation and position of peptides relative to the pore lumen, but due to the artificial conditions of the pore creation and stabilisation, it does not yield strong predictions with regard to more macroscopic pore-related factors such as the overall pore diameter or the ideal number of peptide monomers in the pore assembly. Still, understanding how the behaviour of individual peptides changes before and after pore nucleation, and relative to the total concentration of peptides within the bilayer, could allow for a better contextualization of the experimental observations and the construction of more accurate models explaining the underlying structures of the different conductance events.
MD Simulation-Derived Structure of Smp24 Before the pore-associated structures and orientations of the peptide can be explored, the baseline peptide behaviour must be established.The configuration of the peptide inserted into the leaflet of a bilayer at a low peptide-to-lipid ratio was simulated, in conditions known to yield consistent structures for amphiphilic AMPs [41].The simulated structure of Smp24 shows that the peptide has a number of characteristics that are consistent with many other peptides of the same class, but some aspects are also unique.The presence of a large helical region orientated in parallel with the bilayer is a well-known and experientially consistent structural characteristic of many AMPs [12,[41][42][43].In addition, as for Smp24, the presence of a proline residue has been shown several times to introduce a kink in the helical region although in most cases the kink is positioned more centrally in the helical region of the peptide [9][10][11][12].However, the presence of the large unstructured region is a less common structural motif for AMPs of a similar size.The closely related peptide pandinin 2 also has an unstructured region near the C-terminal consisting of the same four residues as Smp24 (although ordered differently); however, unlike Smp24, this region is not separated from the rest of the peptide by the inclusion of a linker region [44].The only other peptide found with a similar potential linker motif is another scorpion venom-derived peptide Con10.This peptide also contains three glycine residues near the C-terminal followed by a region of only polar or charged residues, which could give this peptide a similar overall 3D structure to Smp24.However, no detailed structural information has been published for this peptide yet [45,46].The presence of the glycine linker region in Smp24 could affect the functionality of the highly polar tail region as the linker ensures that the last four residues are freely positioned around the lipid headgroups without affecting the orientation and positioning of the helical parts of the peptide. 
Molecular Mechanisms and Structure behind Early-Stage Smp24-Induced Pores With a baseline established for the structure of peptide and bilayer in simple conditions, further investigation can be carried out into the interplay between them and how their structures adapt as the conditions near pore formation. AMPs such as magainin 2 and melittin have previously been shown to affect bilayers in non-disruptive ways prior to pore formation, such as by inducing membrane thinning [47,48]. Smp24 itself has been shown to affect bilayer properties in several non-destructive ways, from introducing thinning defects to affecting the lipid ordering of phase-separated bilayers [14]. The simulations show that Smp24 affects the area per lipid, the membrane thickness and the lipid chain order in a concentration-dependent manner. These changes are all likely related and stem from the way the top leaflet adapts to the presence of the inserted peptides. As the peptides take up space among the lipid phosphate and glycerol groups, these parts of the lipids are forced apart, leading to the increase in the area per lipid. As the lipids are more spread out, some lipid chains need to bend below the peptide in order to retain the integrity of the hydrophobic core of the bilayer, which in turn causes a reduction in the lipid chain order of the lower half of the chains. As an increasing number of the lipid chains in the top leaflet are no longer straight, a thinning effect is seen. These subtle concentration-dependent changes to the bilayer properties could play a part in reaching conditions that allow for further and more drastic membrane disruption. Concentration-Dependent Membrane Disruption Both the patch-clamp experiments and the peptide-pore simulations indicate that membrane disruption is concentration dependent. The kinetic data from the patch-clamp experiments suggest that pore formation is a stochastic process that increases in probability and severity at concentrations above 3.8 µM. At the two lowest concentrations tested, the variations in the kinetics were very large, with some instances where the disruption kinetics were in line with what was observed at the higher concentrations and some cases where no disruption was observed at all. Under these experimental conditions, pore formation is highly dependent on the local environment of the bilayer. Slight defects in the bilayer or small variations in the other experimental conditions may be enough to make the difference between complete destruction of the bilayer and no effect at all, while the behaviour is more consistent as the peptide concentration increases. At low peptide-to-lipid ratios, the MD simulations suggest that the peptides do not spontaneously transition into a configuration deep within the pore lumen. Instead, the most favourable peptide configurations seem to be when the peptide is associating with the interface between the pore and the bilayer, as this facilitates a better alignment between the curvature of the bilayer and the curvature of the helical region of the peptide induced by the proline 14 residue. This principle is also demonstrated by the configuration with the secondary helix orientated towards the pore lumen (Figure 5B) being the most common, as this places the kink in a higher curvature region of the pore and allows the overall shape of the peptide and pore to better align compared with the reverse orientation. However, as the peptide-to-lipid ratio used in the simulation was increased, the peptides gained an ability to adopt more extreme
configurations within the pore lumen. This is not due to direct interactions between the individual peptides but because of their concentration-dependent effects on the properties of the bilayer as a whole. As more peptides are inserted into the top leaflet of the membrane, a mismatch occurs between the ideal size of the two leaflets, which facilitates the net movement of lipid towards the bottom leaflet. As the peptides are directly interacting with the lipids, they can be pulled down into the pore lumen as the lipids are translocating. Furthermore, the lower lateral-diffusion constant of the peptides at the high peptide-to-lipid ratio can make it more difficult for the peptides to move out of the pore lumen. Together, this allows the peptides to adopt these more extreme configurations, even if the alignment between the helical and bilayer curvatures is less favourable. However, deeper positioning within the pore would likely provide an increase in the stabilising effect that the peptides have on the overall structure and the lifetime of the pore compared with the earlier configurations. This molecular-level mechanism of a concentration-dependent change to the peptide-pore structure correlates well with the observations from the patch-clamp experiments. As the transition of the peptide to the most pore-stabilising configurations does not happen spontaneously but is very dependent on the specific local conditions around the area of the pore, one would expect to see a large degree of variation in the kinetics of the membrane disruption, especially at lower peptide concentrations. Furthermore, as deeply inserted peptide configurations are not intrinsically more energetically favourable, the nature of the pore structure would be expected to be disordered, leading to an inconsistent conductance level.
Molecular-Level Structures Corresponding to Disruption Events The behaviours found in the MD simulations allow us to propose different molecular-level structures corresponding to the different event types observed in the patch-clamp experiments (Figure 8). The distinct and clear beginning and end to the signals corresponding to the multilevel and spike events both suggest that these events are mechanistically related and that they correspond to a pore which has a distinct open and closed state, even if the characteristics of the open state are variable. The main difference is the lifetime of the open state, which, on a molecular level, corresponds to how well the pore is stabilised by the incorporation of the peptides. During both types of events, the local conditions of the bilayer are such that a pore-nucleation event can occur, either directly through structural disruptions caused by the presence of the peptides or indirectly via the macroscopic effects the inserted peptides have on the bilayer. In the case of the multilevel events, a number of peptides will transition from the pore interface into variable positions within the pore lumen, providing stability to the pore structure and increasing the lifetime of the event. However, for spike events, this transition does not occur, either by chance or because the local peptide concentration around the pore is too low. As such, the pore will quickly close due to the line tension of the bilayer. The structural models for these peptide-induced pores could thus be best described as supported or unsupported disordered toroidal pores (Figure 8A,B). Proposing an underlying molecular structure corresponding to the erratic event type is more challenging. The lack of a distinct start and endpoint makes it less likely that the "pore" structure is similar to a water channel in the traditional sense. Instead, the event type could represent a more general form of disruption of the membrane structure increasing the "leakiness" of the bilayer, allowing for water and ions to sporadically cross the bilayer at an increased rate. Changes to the structure of the bilayer such as membrane thinning, increased lipid chain disorder, a higher proportion of membrane defects or lipid removal via micelle formation could all be factors lowering the intrinsic resistance of the bilayer and thereby lead to periods of increased conductance (Figure 8C). Another option could be the formation of less well-defined peptide-lipid aggregates. It has previously been proposed that micellar-like peptide-lipid aggregates could form within the bilayer, creating structures that function as a pore but without the toroidal or channel-like shape [49]. This would produce a much less defined path for the water and ions to penetrate the bilayer, which could explain the more gradual shifts in the conductance seen for these events (Figure 8D).
Conclusions The mechanism of action of AMPs such as Smp24 is varied and complex, especially during the early stages of membrane disruption. For Smp24, patch-clamp experiments indicate that multiple mechanisms of pore formation can be present at the same time, something which is not well accounted for by the rigid pore structures proposed in the three traditional models for AMP-induced pore formation. Instead, a more general understanding of the ability of both the peptide and the bilayer to transiently respond and adapt their structures to the changing conditions that occur after the peptides and the bilayer meet is required to explain this phenomenon. A combination of the off-centre kinked structure of the peptide and the concentration-dependent effect of an uneven distribution of the peptide across the bilayer leaflets can explain why some short-lived disordered toroidal pores transition into more stable events while others do not. Figure 1. Smp24 pore formation kinetics via patch clamp. (A) Time between addition of peptide to the bilayer and the occurrence of an irreversible disruption of the bilayer resistance. (B) Time between the addition of peptide to the bilayer and the observation of the first conductance event. Data shown are mean and standard deviation (n = 5), * indicates significant difference based on unpaired t-test (p < 0.05). Figure 2. Representative examples of the current trace and amplitude histograms for the three different event types observed in the patch-clamp experiments. (A) Multilevel events, (B) spike events, (C) erratic events.
Figure 3. Characteristics of the insertion of Smp24 into the negative bilayer. (A) Configuration of the peptide in the N-terminal anchored stage of the insertion. Interactions occur via two of the four lysine residues (blue), the N-terminal isoleucine (green) and the position 4 phenylalanine (purple). (B) Changes over time in the Z-axis centre of mass of the peptide (yellow) and N-terminal (blue) relative to the phosphorus atoms of the top leaflet (red) and the centre of the bilayer (black). (C) Changes over time in the tilt angle relative to the bilayer normal of the helical region from residues 1-12. (D) Cumulative changes in the local helical rotation of residues 2-10 during the rotational stage of the insertion process. Figures shown are based on one simulation; corresponding figures for all repeats can be found in Supplementary Materials S2 (Figures S2.2-S2.4). Figure 5. Three-dimensional models of the two pore-associated configurations seen for Smp24 in the long_pbcg_1-3 simulations. (A) The helical regions of Smp24 are aligned with the curvature of the pore interface with the secondary helix positioned within the top of the pore. (B) The helical regions are aligned in reverse such that the primary helix is positioned within the top half of the pore. Figure 6. Examples of transmembrane peptide configurations in the multi-peptide pore simulations. (A,B) Example of a peptide with the primary helical region positioned the deepest in the pore lumen (frontal and sideways views), (C,D) example of a peptide with the secondary helix positioned the deepest in the pore lumen (frontal and sideways views), (E) two peptides in transmembrane configurations in the same pore, (F) example of a top-down view of a pore with multiple peptides associated with the pore interface. Figure 7. Translocation of lipids through the membrane pore during the simulations. Results are shown as the average ± SD lipid density in the bottom leaflet over time for the pore simulations. (A) Single peptide pore models (n = 3), (B) multi-peptide pore models (n = 5). Black = combined density for both DOPC and DOPG lipids divided by 2, red = density of DOPC lipids, green = density of DOPG lipids.
Proposed molecular-level structures corresponding to the conductance events observed in the patch-clamp experiments.(A) The short-lived spike events are caused by unsupported toroidal pores without peptides in the pore lumen.(B) The longer-lived multilevel events are caused by supported toroidal pores with peptides taking part in the pore structure.(C) General disruptions to the bilayer structure such as membrane thinning could be the reason for the erratic events.(D) Micellarlike aggregates within the bilayer could also be the reason for erratic events. Figure 8 . Figure 8. Proposed molecular-level structures corresponding to the conductance events observed in the patch-clamp experiments.(A) The short-lived spike events are caused by unsupported toroidal pores without peptides in the pore lumen.(B) The longer-lived multilevel events are caused by supported toroidal pores with peptides taking part in the pore structure.(C) General disruptions to the bilayer structure such as membrane thinning could be the reason for the erratic events.(D) Micellar-like aggregates within the bilayer could also be the reason for erratic events. Table 1 . Concentration-dependent effects of peptide insertion on the structure and order of the lipid bilayers.Values represents the average ± the standard deviation (SD) over the simulation period.
13,560.4
2023-09-28T00:00:00.000
[ "Biology", "Chemistry", "Medicine" ]
Development of ytterbium-doped oxyfluoride glasses for laser cooling applications Oxyfluoride glasses doped with 2, 5, 8, 12, 16 and 20 mol% of ytterbium (Yb3+) ions have been prepared by the conventional melt-quenching technique. Their optical, thermal and thermo-mechanical properties were characterized. Luminescence intensity at 1020 nm under laser excitation at 920 nm decreases with increasing Yb3+ concentration, suggesting a decrease in the photoluminescence quantum yield (PLQY). The PLQY of the samples was measured with an integrating sphere using an absolute method. The highest PLQY was found to be 0.99(11) for the 2 mol% Yb3+: glass and decreases with increasing Yb3+ concentration. The mean fluorescence wavelength and background absorption of the samples were also evaluated. Upconversion luminescence under 975 nm laser excitation was observed and attributed to the presence of Tm3+ and Er3+ ions which exist as impurity traces with YbF3 starting powder. Decay curves for the Yb3+: 2F5/2 → 2F7/2 transition exhibit single exponential behavior for all the samples, although lifetime decrease was observed for the excited level of Yb3+ with increasing Yb3+ concentration. Also observed are an increase in the PLQY and a slight decrease in lifetime with increasing the pump power. Finally, the potential of these oxyfluoride glasses with high PLQY and low background absorption for laser cooling applications is discussed. mechanical resistance, giving rise to unique materials with superior optical properties for a wide range of applications in photonics 9,[14][15][16][17][18] . In addition, ultra-transparent glass-ceramics containing low phonon energy fluorite nanocrystals could also be produced under appropriate heat-treatment applied to as-made oxyfluoride glasses 19 . Oxyfluoride glasses and glass-ceramics may also be suitable for non-linear optical applications. As reported in the literature 20 , nano-glass-ceramics exhibiting a non-linear refractive index (n 2 = 6.69 × 10 14 cm 2 /W) about two times larger than that of their parent glasses (n 2 = 3.23 × 10 14 cm 2 /W) were obtained. The motivation of our work is to develop low phonon energy oxyfluoride glasses for laser cooling applications. Glassy materials indeed exhibit various advantages over crystals such as ease of fabrication, capability of scaling-up and thus cost-effective production. Here, heavily Yb 3+ -doped oxyfluoride glasses belonging to the SiO 2 -Al 2 O 3 -PbF 2 -CdF 2 -YF 3 vitreous system were prepared and characterized. Glass transition and crystallization temperatures, thermal stability against crystallization and thermal expansion coefficient were determined by thermal analysis. The PLQY was then evaluated for all the samples by using a pump wavelength at 920 nm from a Ti: sapphire laser and measuring the emission spectra with an integrating sphere coupled to an optical spectrum analyzer. The present study aimed at optimizing the Yb 3+ ion concentration in order to obtain high PLQY and low background absorption. To the best of our knowledge, this is the first spectroscopic investigation report on these oxyfluoride glasses for laser cooling applications. The obtained results were compared with those reported on Yb 3+ : ZBLANP glass 6 and Yb 3+ : YAG crystal 12 . Theory Laser cooling process in RE-doped host, which is based on anti-Stokes fluorescence, is illustrated in Fig. 1. 
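To make the anti-Stokes energy balance behind Fig. 1 concrete, the following back-of-the-envelope estimate (not taken from the paper; the 1030 nm pump wavelength is only an assumed illustration, while 1003 nm is the mean fluorescence wavelength reported later for the 2 mol% sample) shows the maximum fraction of the pump photon energy that can be carried away per fluorescence cycle:

ΔE / E_p = (hc/λ_f − hc/λ_p) / (hc/λ_p) = λ_p/λ_f − 1 ≈ 1030 nm / 1003 nm − 1 ≈ 0.027, i.e. about 2.7 %.

This ideal per-cycle figure is reduced in practice by the external quantum efficiency and by parasitic background absorption, as expressed by the cooling-efficiency relation discussed next.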
The cooling efficiency of the sample can be described as

η_c = η_ext η_abs (λ_p / λ_f) − 1,  (1)

where η_abs = α_r / (α_r + α_b) is the absorption efficiency, which includes the resonant absorption, α_r, and the background absorption, α_b, and η_ext = η_e W_r / (η_e W_r + W_nr) is the external PLQY, where η_e is the fluorescence escape efficiency, which has been investigated in ref. 21. It also depends on the refractive index and shape of the sample. W_r and W_nr are the radiative and non-radiative decay rates, respectively. The fluorescence escape efficiency depends not only on the refractive index but also on the shape of the sample. As can be seen in Eq. (1), only RE-host combinations satisfying the inequality W_nr ≪ W_r are suitable for laser cooling by anti-Stokes fluorescence. The mean fluorescence wavelength, λ_f, can be calculated as the intensity-weighted average wavelength of the emission spectrum I(λ),

λ_f = ∫ λ I(λ) dλ / ∫ I(λ) dλ.  (4)

Here λ_p is the pump wavelength, λ_f is the fluorescence wavelength, E_p is the pump energy, E_f is the fluorescence energy, R is the reabsorption, and W_r and W_nr are the radiative and non-radiative decay rates. If impurities (quenchers: transition metal ions and other impurities) present in the glass matrix (host) absorb the pump laser, then luminescence quenching occurs, leading to heating of the sample. The external PLQY (η_ext) can be expressed as the ratio of the number of emitted photons to the number of absorbed photons (Eq. (5)) 22, where N_ep is the number of emitted photons from the sample when the excitation beam is directed onto the sample, A is the absorption coefficient, N_ip is the number of incident photons detected without the sample and N_epd is the number of emitted photons by the sample through the interaction with diffused light. In addition to Eq. (5) for evaluating the PLQY, we propose another relation to assess the PLQY, based on η_ext = N_ep / N_ap together with N_ap = N_ip − N_sip (Eqs. (6) and (7)). By combining the above two relationships (6) and (7), we can get

η_ext = N_ep / (N_ip − N_sip),  (8)

where N_ap is the number of absorbed photons when the sample is excited directly with the laser in the integrating sphere, N_ip is the number of incident photons collected without the sample in the integrating sphere and N_sip is the number of scattered incident photons detected when the sample is inside the integrating sphere and the beam is directed on it.
Results and Discussion
Thermal and thermo-mechanical properties. The thermal and thermo-mechanical properties of the proposed cooling material are essential for investigating the integration of numerous optical parameters of an optical cooler. The DSC traces recorded on the 30SiO2-15Al2O3-(29-x)CdF2-22PbF2-4YF3-xYbF3 (mol%) glasses for various Yb3+ ion concentrations are shown in Fig. 2(a). The glass transition temperature (Tg, ±2 °C), the onset temperature of crystallization (Tx, ±2 °C) and the peak crystallization temperature (Tp, ±1 °C) were determined from the thermograms, as well as the corresponding glass thermal stability against crystallization criterion (ΔT = Tx − Tg, ±4 °C). Among the Yb3+-doped samples, a slight increase of their glass characteristic temperatures Tg, Tx and Tp is observed in Fig. 2(a,b) with increasing Yb3+ concentration. First, the increase of the glass transition temperature (from 412 to 445 °C) with increasing Yb3+ content (from 0 to 20 mol%) shows that the Yb3+ ions are well incorporated into the glass network. Whereas the addition of RE ions like Yb3+ into a glassy material usually tends to decrease its stability by altering its network reticulation (and thus decreasing its Tg), it seems here that the addition of Yb3+ reinforces the glass network.
The heavy level of doping attained (up to 20 mol%) supports this assumption. Further investigation is required to understand the structural role played by Yb 3+ in such heavily doped glasses. A progressive shifting towards higher temperature of the crystallization peak can be observed on the DSC traces ( Fig. 2(a)) with increasing Yb 3+ concentration. This slight increase of T x and T p can be directly correlated with the strengthening of the glass network after the replacement of Cd 2+ cations by Yb 3+ ions, as above mentioned. The shape of the crystallization peak also evolves with increasing Yb 3+ concentration, as can be seen in Fig. 2(a). From the SYb05 to the SYb20 sample thermograms, the peak is broadening and clearly consists of two contributions: a sharp intense one at lower temperature, and a broad weak shoulder at higher temperature. It can be assumed that the first peak is related to the crystallization of β-PbF 2 crystals, as already reported in refs 23-25 while the second contribution can be associated with the formation of new crystalline phase or even phase transformation. Further investigation focused on the crystallization kinetics and identification of the crystalline phase structure would be required to fully describe the crystallization process in these SiO 2 -Al 2 O 3 -CdF 2 -PbF 2 -YF 3 -YbF 3 glasses. Nevertheless, it is worth mentioning that the undoped sample exhibits the highest temperatures of crystallization (both onset and peak) and the largest thermal stability against crystallization (Δ T = 69 °C, see Fig. 2(b)). In addition, one can see in Fig. 2(a) that its crystallization peak is weaker and flatter than those on the other DSC traces, indicating therefore that the undoped glass is less prone to crystallization. Such result was expected as it is well-known that addition of RE ions like Yb 3+ into a glassy material tends to decrease its stability vs crystallization, as previously reported in many works in the literature 23,24,26,27 . As above mentioned, the knowledge of the thermo-mechanical material properties such as thermal conductivity, specific heat capacity and thermal expansion coefficient (TEC) is crucial for a proper design of an optical cooler. The heat transfer rate within the cooled material is proportional to changes in temperature, thermal conductivity and heat capacity after excitation with a suitable laser 28 . Thermo-mechanical analysis (TMA) was performed on the samples SYb02 and SYb12. The TEC determined for these samples (in the temperature range of 100-350 °C) are 11.3x10 −6 /K and 13.7x10 −6 /K, respectively. The theoretical description of the TEC has been reported for Yb 3+ -doped phosphate laser glasses elsewhere 29 . The TEC values are higher than those reported for Li 2 O-Al 2 O 3 -SiO 2 glasses (4.6-7.5 × 10 −6 /K) 30 , phosphate glass (LiPO 3 -Al(PO 3 ) 3 -Ba(PO 3 ) 2 -La 2 O 3 , 9.8 × 10 −6 / K) 31 , Yb 3+ :YAG crystal (8.06 × 10 −6 /K) 32 but lower than that of ZBLAN fluorozirconate glass (16.4 × 10 −6 /K) 33 and comparable to that of silicate laser glasses (12.7-13.4 × 10 −6 /K) 34 . The TEC values for the investigated glasses are between those of laser cooled materials such as Yb 3+ :YAG crystal 32 Linear refractive index. The refractive indices of the glass samples were measured by the prism coupling technique with a resolution of ± 0.001, and plotted in Fig. 3 as a function of wavelength and Yb 3+ concentration. 
The values reported here were obtained for the transverse-electric (TE) mode of the incident laser radiation while no significant difference was observed in the transverse-magnetic (TM) mode, confirming the absence of birefringence, as expected in isotropic glass materials. First, one can observe in Fig. 3 a decrease of the refractive index with increasing the wavelength for each glass sample, showing thus their respective chromatic dispersion. The Sellmeier's dispersion relation was used to fit the experimental data and facilitate their reading. Then, if we do not consider the undoped sample, one can observe that their refractive index decreases with increasing Yb 3+ concentration. Such behavior is quite unusual. Indeed, glass doping with RE ions which are heavy elements compared to traditional components used to form glass (e.g. SiO 2 ), generally results in increasing its refractive index. Here, the ytterbium fluoride (YbF 3 , molar mass = 230.04 g/mol) is incorporated into the glass by substituting for the cadmium fluoride, which is lighter (CdF 2 , molar mass = 150.41 g/ mol), following the composition law 30SiO 2 -15Al 2 O 3 -(29-x)CdF 2 -22PbF 2 -4YF 3 -xYbF 3 (mol%). Therefore, an increase of refractive index and density could be expected with increasing the Yb 3+ concentration. However, while the density increase is observed with increasing Yb 3+ concentration (as shown in Fig. S1) as expected, an opposite trend is observed for the refractive index in our glasses, as shown in Fig. 3. Interpreting the refractive index change of glasses as a function of their chemical composition is relatively complex. Indeed, it essentially depends on two factors, i.e. the glass molar volume (related to its density and molar mass) and the polarizability of its constituents. A tentative explanation can be as follows. First the high refractive index of these glasses is mainly governed by their large concentration of heavy metals with large electronic densities. Then, it is known that F − anions possess a lower polarizability than O 2− anions 35 . The progressive replacement of CdF 2 by YbF 3 in these glasses implies an increase of its fluorine content to the detriment of its oxygen content, as presented in the Table 1. This results then in a decrease of the glass average polarizability. Therefore the observed decrease in glass refractive index can be ascribed here to the dominant role played by its decreasing polarizability whereas its density, which increases with increasing Yb 3+ concentration (see Fig. S1), has a lower impact. Last, one can also notice in Fig. 3 that refractive index was accurately measured at 972 nm for the undoped sample while no value was obtained by the prism couling method at that wavelength on the Yb 3+ -doped samples. We assume that it is related to the strong absorption of Yb 3+ ion in this spectral region. Then, the refractive index of the undoped sample (as a function of wavelength) is comprised between those of the SYb08 and SYb12 samples, illustrating once again the complexity to represent its dependence on the glass chemical composition. Following the same reasoning as above, we would have indeed expected a higher refractive index for the undoped glass than for those doped with Yb 3+ (because of a lower fluorine content). But it is clearly not the case here as one can see in Fig. 3. It can be assumed here that the density of the undoped glass, which is significantly lower than those of the Yb 3+ -doped glasses (Fig. S1), plays a more significant role. 
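The Sellmeier fit mentioned above can be sketched numerically as follows. This is only an illustrative reconstruction: the index values below are placeholders (the measured indices are not tabulated in the text), a single-term Sellmeier form is assumed rather than the form actually used by the authors, and the wavelengths are the five prism-coupling wavelengths listed in the Methods.

import numpy as np
from scipy.optimize import curve_fit

wavelengths_um = np.array([0.532, 0.633, 0.972, 1.308, 1.538])  # prism-coupling wavelengths
n_measured = np.array([1.780, 1.768, 1.755, 1.749, 1.746])      # placeholder indices, not the measured data

def sellmeier_n(lam_um, B, C):
    """One-term Sellmeier dispersion: n(lambda) = sqrt(1 + B*lambda^2/(lambda^2 - C))."""
    return np.sqrt(1.0 + B * lam_um ** 2 / (lam_um ** 2 - C))

(B, C), _ = curve_fit(sellmeier_n, wavelengths_um, n_measured, p0=[2.0, 0.02])
print(f"fitted Sellmeier coefficients: B = {B:.4f}, C = {C:.5f} um^2")
print(f"n at 532 nm from the fit: {sellmeier_n(0.532, B, C):.4f}")

Fitting each of the six compositions in this way gives a compact description of the chromatic dispersion plotted in Fig. 3.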
Further structural investigation is required to elucidate such behavior. Electron probe micro analysis. Electron probe micro analysis (EPMA) was carried out to identify and quantify the elemental composition of the prepared glasses. The experimental results along with the theoretical data are presented in Table 1. The synthesis process was performed at the same temperature (1100 °C) but with varying duration (1h30, 2h, 2h30, 3h, 3h30 and 4h) of the glass melting with increasing Yb 3+ concentration. Note that the results presented in Table 1 are the mean value of five independent measurements on the same sample at different positions. It is worth mentioning that both theoretical and experimental F contents increase with increasing Yb 3+ concentration whereas an opposite trend is observed in the case of the O content. To show the reproducibility of the synthesis process, the same glass (SYb02) was prepared three times by keeping all the conditions strictly identical (melting temperature and duration of glass melting are 1000 °C and 1 h, respectively) and the results are presented for the three samples in Table 2. The obtained maximum errors (%) in experimental results between the three samples when compared to theoretical values, indicate here an excellent repeatability of the sample preparation in the given conditions. Absorption spectra. The UV-visible-near-infrared (NIR) absorption spectra of the undoped and SYb02 samples are presented in Fig. 4, showing a broad absorption band for the SYb02 sample centered at a wavelength of 975 nm which corresponds to the Yb 3+ : 2 F 7/2 → 2 F 5/2 transition. The transmission spectra obtained for the other samples (see the supplementary information, Fig. S2) show very similar profiles with the same Yb 3+ absorption band shape, except for its intensity which depends on the Yb 3+ concentration, as plotted in the inset of Fig. 4. The inhomogeneously broadened absorption bands are due to the electronic transitions between the Stark sublevels of the ground ( 2 F 7/2 ) and the excited ( 2 F 5/2 ) levels as well as the strong electron-phonon interaction characteristic to the glassy host 36 . The quasi-linear variation of the integrated absorption band intensity observed with increasing Yb 3+ concentration (inset of Fig. 4) indicates the presence of a similar local environment around the Yb 3+ ions in all the investigated glasses. Photoluminescence quantum yield (PLQY). The PLQY measurements were performed inside an integrating sphere coupled to an optical spectrum analyzer (OSA) with a multimode optical fiber and then determined using the method reported in refs 22, 25. The absolute photoluminescence spectra of the samples obtained under a laser excitation at 920 nm (510 mW of power), are presented as a function of their Yb 3+ concentration in Fig. 5. As can be seen from Fig. 5, luminescence quenching is observed for Yb 3+ concentration higher than 2 mol%, due to either an increase in the energy transfer or reabsorption by the Yb 3+ ions. Reabsorption or radiation trapping effects are usual when dealing with Yb 3+ -doped glasses because of the overlap of their absorption and emission bands, directly related to the Yb 3+ ion concentration, the sample thickness (2.3 mm for our samples) and the optical path length of the photons in the medium 37,38 . The absorption and emission spectra of the SYb02 sample (measured inside and outside the integrating sphere) are presented in Fig. 6. As can be seen in Fig. 
6, reabsorption effect is observed even for the SYb02 sample and is more predominant for samples with higher Yb 3+ concentration (as shown in the supplementary information, Fig. S3). The emission peak position shifts towards longer wavelength and broadens due to reabsorption. The reabsorption and luminescence quenching effects observed for all the Yb 3+ concentrations are illustrated in Fig. 7(a,b). At lower concentrations, the interaction or radiation exchange between the Yb 3+ ions is significantly reduced and may become negligible. In Fig. 7(a), the absorbed radiation from the pump laser is re-emitted in the form of luminescence (photons) without heat generation, resulting in higher luminescence intensity. At higher concentrations, the interaction between the Yb 3+ ions becomes stronger and their energy exchange leads to a decrease in photoluminescence intensity due to non-radiative (phonons) emission resulting in luminescence quenching effect, as schematized in Fig. 7(b). This also induces the Yb 3+ ions luminescence reabsorption by the Yb 3+ neighboring ions, resulting in a redshift of the emission band. PLQY is evaluated by using both Eqs (2) and (5), giving similar values for each SYb sample as a function of Yb 3+ concentration, as summarized in Table 1. The standard deviation of measurements is around ± 0.11, which is typical of absolute PLQY measurements. Moreover, the acquisitions were repeated 5 times to ensure the 12 and Yb 3+ :ZBLANP glass 6 in which optical cooling has been already demonstrated. It is worth mentioning that a high PLQY close to unity is one of the most important conditions in order to achieve a better cooling efficiency by removing successfully heat from the sample in every cooling cycle 6 . The background absorption coefficient (α b ) of the samples, measured with a 1300 nm wavelength laser by the calorimetric method described in our earlier works 12,25 is reported in Table 3. One observes a background absorption increase with increasing Yb 3+ concentration. The mean fluorescence wavelength (λ f ) is calculated by using Eq. (4). The laser cooling/reduced heating can be expected when the samples are excited at or above λ f 6 . As can be seen from Table 3, the λ f value increases with increasing Yb 3+ concentration due to reabsorption. The λ f is found to be 1003(1) nm for the SYb02 sample which is larger than that reported for the Yb 3+ :ZBLANP (995 nm, 1 wt% of Yb 3+ ) 6 . Hence, as the SYb02 sample exhibits high PLQY and low background absorption when compared with the other investigated samples, it appears to be the best candidate for laser cooling application besides serving as a reference sample for PLQY measurements in the near-infrared region. Decay curves. The luminescence decay curves of the Yb 3+ : 2 F 5/2 → 2 F 7/2 transition were measured by exciting with 940 nm wavelength laser and monitoring above the 975 nm wavelength emission, as shown in Fig. 8. The luminescence lifetime (τ) of the Yb 3+ : 2 F 5/2 excited level was evaluated from a single exponential fit. It is observed that the τ of the Yb 3+ : 2 F 5/2 excited state shortens from 1.52 to 0.19 ms in the investigated glasses when Yb 3+ concentration increases from 2 mol% to 20 mol%. These results indicate that the decrease in PLQY with increasing Yb 3+ concentration is not only due to reabsorption but also to concentration quenching. 
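A single-exponential lifetime extraction of the kind described above can be sketched as follows; the synthetic trace below merely stands in for the recorded photodiode signal (10 µs sampling, with the lifetime set to the 1.52 ms reported for SYb02) and is not the measured data.

import numpy as np
from scipy.optimize import curve_fit

# Synthetic decay trace standing in for the recorded photodiode signal.
t_ms = np.arange(0.0, 8.0, 0.01)                 # time axis in ms, 10 us steps
rng = np.random.default_rng(0)
signal = np.exp(-t_ms / 1.52) + 0.02 * rng.normal(size=t_ms.size)

def single_exp(t, amplitude, tau, offset):
    """Single-exponential decay model I(t) = amplitude*exp(-t/tau) + offset."""
    return amplitude * np.exp(-t / tau) + offset

popt, _ = curve_fit(single_exp, t_ms, signal, p0=[1.0, 1.0, 0.0])
print(f"fitted 2F5/2 lifetime: tau = {popt[1]:.2f} ms")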
The quenching of lifetime may be either due to multiphonon relaxation, energy transfer among the Yb 3+ ions (diffusion limited) 39 or direct coupling with OH − groups 37 . In the present study, since the amount of OH − groups is expected to be relatively constant in all the samples, it is assumed that the most dominant mechanisms for lifetime quenching are the energy transfer among the Yb 3+ ions, as well as the multiphonon relaxation. The longest lifetime measured here is 1.52 ms for the SYb02 sample, which is longer than that reported for the Yb 3+ :YAG crystal (1.1 ms, for a concentration of 2.5 at.% Yb 3+ ) 40 but shorter than that measured for the Yb 3+ :ZBLAN glass (1.82 ms, for a concentration of 2 mol% Yb 3+ ) 41 . The high PLQY, which is a key parameter for laser cooling process, of the SYb02 sample with lower Yb 3+ concentration (2 mol%) indicates its higher potential for laser cooling. Longer lifetime is not an obstacle for cooling, but it is not desirable, since it can slow down the cooling process. Pump power dependence PLQY and lifetime studies. The pump power dependence of PLQY and lifetime measurements were performed on the Yb 3+ -doped glasses. Boconilli et. al. have reported on the pump power dependence studies of upconversion (UC, process consisting in the absorption of two or more photons of low energy followed by the emission of one photon of higher energy) PLQY in Er 3+ :β-NaYF 4 nanocrystals by considering the effect of reabsorption for solar cell applications 42 . This is the first time to the best of our knowledge that the pump power dependence PLQY of Yb 3+ -doped glasses for laser cooling prospective is reported by considering. The PLQY (and the intensity of NIR emission as well) always follows a linear dependence with the pump power in our Yb 3+ -doped oxyfluoride glasses, as presented in Fig. 9. The PLQY was found to be as high as 0.99 for 510 mW of laser power measured at the entrance of the integrating sphere. Three regions are distinguished in Fig. 9, the first region for the low powers, the second one for intermediate powers and the third one for high excitation powers. The PLQY follows a progressive increase at low excitation powers whereas it remains unchanged at the intermediate excitation powers. This can be explained by the fact that there is no influence of the absorbed power by the Yb 3+ ions which means that reabsorption may play a crucial role for this behavior Table 3. Photoluminescence quantum yield (PLQY) of the Yb 3+ -doped samples is determined by using the Eqs (2) and (5), pump wavelength (λ P ), limiting wavelength (λ cutoff ) which separates the integration of N ipj and N epj (j = a,b and c), mean fluorescence wavelength (λ f ) which takes into account reabsorption, background absorption (α b ) determined at 1300 nm laser by using the calorimetry method described in refs 12,25. providing a low fluorescence escape efficiency. At higher powers, the PLQY progressively increases due to the enhanced absorption of Yb 3+ ions within the unit area for a fixed concentration. Moreover, the bleaching occurring at high pump powers can decrease significantly reabsorption, inducing further increase in PLQY 43 . The upconversion effects from the Tm 3+ and Er 3+ ions present as impurity traces (detailed discussion in the next section) at high pump powers may also cause a slight deviation from a straight line. The power dependence of lifetime for the Yb 3+ -doped glasses is shown in Fig. 10. As can be seen in Fig. 
10, there is a small but consistent decrease of lifetime with increasing pump power, especially at higher Yb 3+ concentration (20 mol%). This may be due to either a decrease in the lifetime and PLQY because of lower reabsorption due to bleaching effects 43 . The decrease in lifetime (Fig. 10) indicates that non-radiative and also upconversion processes should play an important role which means the non-radiative rate, W nr , increases with increasing the Yb 3+ concentration. The most important channels on the non-radiative rate increase are energy migration among Yb 3+ ions, followed by transfer to impurity centers: trapping by defects such as OH − radicals, radiation trapping of energy among Yb 3+ ions, interaction between Yb 3+ ions and the glassy host defects. Upconversion luminescence. The UC emission spectra of glasses recorded under laser excitation at 975 nm (80 mW power) are shown in Fig. 11 while a photograph of the UC luminescence from the SYb05 sample is shown in inset. It is clear that a higher UC intensity at 478 nm is obtained for the SYb05 sample than for the other samples. All the samples exhibit UC emissions at 478 nm ( 1 G 4 → 3 H 6 ) and 800 nm ( 3 H 4 → 3 H 6 ) originating from Tm 3+ impurity as well as 410 nm ( 2 H 9/2 → 4 I 15/2 ), 539 nm ( 2 H 11/2 , 2 S 3/2 → 4 I 15/2 ) and 647 nm ( 4 F 9/2 → 4 I 15/2 ) originating from Er 3+ ions also present as an impurity. Those ions are excited thanks to energy transfer (Addition of Photons by Transfer of Energy: APTE effect) from the Yb 3+ ions which act as a sensitizer 44 . It was evidenced that the APTE effect is 10 5 times more efficient than the cooperative luminescence which is usual in Yb 3+ -doped samples at high concentrations for the same Yb 3+ -Yb 3+ distances 45,46 . Due to this reason no cooperative luminescence was observed in these glasses even at high Yb 3+ concentration. The pump power dependence of the UC luminescence is shown in Fig. S4. It is worth mentioning that no UC emission was observed from the Yb 3+ free sample (as shown in Fig. S5). Therefore, it is clear that these Tm 3+ and Er 3+ traces (contents of respectively less than 10 and 8 ppm according to the certificate of analysis provided by the chemical supplier) originate from the YbF 3 starting powder, in spite of its relatively high purity (99.99%). Complete separation of RE ions during their manufacturing process to obtain ultra-high purity raw materials is indeed a well-known issue in the industry. Conclusion Heavily Yb 3+ -doped 30SiO 2 -15Al 2 O 3 -(29-x)CdF 2 -22PbF 2 -4YF 3 -xYbF 3 (where x = 2, 5, 8, 12, 16 and 20 mol%) oxyfluoride glasses have been fabricated and characterized from a thermal and spectroscopic point of view. Their glass transition and crystallization temperatures as well as thermal expansion coefficient were determined by thermal analysis. The measured linear refractive index of the SYb samples varies from 1.780 to 1.730 at 532 nm and decreases with increasing Yb 3+ concentration. Luminescence intensity at 1020 nm under laser excitation at 920 nm decreases with increasing the Yb 3+ concentration. The highest photoluminescence quantum yield (0.99, near unity) was obtained for the 2 mol% Yb 3+ -doped sample and was then found to decrease when increasing the Yb 3+ concentration. The PLQY increases with increase in the pump power up to 510 mW, the limit of the laser used in this work. 
The mean fluorescence wavelength was evaluated from the emission spectrum and reported to be 1003(1) nm for the 2 mol% Yb 3+ : glass which increases with increasing Yb 3+ ion concentration. Pump power dependence studies have revealed a linear increase in the PLQY and a decrease in the lifetime with increasing the pump power. The lifetime of the 2 F 5/2 level shortens from 1.52 ms to 0.19 ms with increasing the Yb 3+ concentration. The 2 mol% Yb 3+ -doped oxyfluoride glass with its high PLQY, its low maximum phonon energy and low background absorption is the most promising candidate for laser cooling and solid-state laser applications besides serving as a reference to calibrate the instruments for PLQY measurements. Future works will focus on one hand on the fabrication of glasses of ultra-high purity, which is a required condition to achieve optical cooling and; on the other hand, on the preparation of ultra-transparent nano-glass-ceramics in a similar vitreous system with the aim to further enhance the photoluminescence quantum yield, to decrease the background absorption and in fine to demonstrate laser cooling in this material. Methods Oxyfluoride glasses with chemical composition 30SiO 2 -15Al 2 O 3 -(29-x)CdF 2 -22PbF 2 -4YF 3 -xYbF 3 (mol%), where x = 2, 5, 8, 12, 16 and 20 were prepared by the conventional melt-quenching technique. The glass samples were labeled as SYb02, SYb05, SYb08, SYb12, SYb16 and SYb20, respectively. High purity commercial starting materials (Aldrich, 99.99%) were thoroughly mixed in an agate mortar and then loaded into a platinum crucible. The powders were then melted and homogenized at 1100 °C for different durations (1h30, 2h, 2h30, 3h, 3h30 and 4h with increasing Yb 3+ concentration respectively) in a muffle furnace under ambient atmosphere in the covered crucible. The glass melt was then casted into a stainless steel mold preheated close to the glass transition temperature (T g ), subsequently annealed at the same temperature for 5 h and slowly cooled to room temperature to remove any residual internal stress. The glass samples were finally polished to optical quality for further characterization. The density of the samples was measured by using a Mettler Toledo XSE204 densimeter with a resolution of ± 0.001 g/cm 3 . The linear refractive index of the samples was measured by employing the prism coupling technique (Metricon 2010) at different wavelengths, 532, 633, 972, 1308 and 1538 nm. Differential scanning calorimetric (DSC) measurements were performed by using a Netzsch DSC Pegasus 404F3 apparatus on glass pieces in PtRh pans at a heating rate of 10 °C/min. The thermal expansion coefficient (TEC) was measured by using a Netzsch TMA 402F1 Hyperion thermo-mechanical analyzer apparatus on glass rods of 5 mm length and 10 mm diameter at a heating rate of 5 °C/min up to the glass softening point with a load of 0.02 N. The TEC of the sample was then determined in the temperature range from 100 to 300 °C. UV-visible-near infrared transmission spectra were recorded on a Cary 5000 (Varian) double-beam spectrophotometer. The photoluminescence quantum yield (PLQY) of the samples (10 mm × 2 mm × 2 mm) was measured by pumping at a wavelength of 920 nm with a Ti:sapphire laser, collecting the emitted light from an integrating sphere (2″ ) (Thorlabs IS200) and coupling it through a multimode optical fiber to an optical spectrum analyzer (OSA) operating in the wavelength range of 800-1600 nm. 
The photoluminescence spectra were also measured outside the integrating sphere under 920 nm wavelength laser excitation. The upconversion luminescence spectra were recorded using a Nanolog (Horiba Jobin Yvon) fluorimeter equipped with a double monochromator and a photomultiplier tube sensitive from 250 to 825 nm. A laser diode operating at 975 nm coupled with a standard single-mode pigtailed fiber was used to excite the samples after beam collimation and focusing on the sample surface with a lens (f = 50 mm). Decay curves were recorded with a resolution of 10 μ s by using a photodiode (Thorlabs SM05PD1B). The signal was amplified by a bench top transimpedance amplifier (Thorlabs PDA200C) and read with a digital storage oscilloscope (Tektronix TDS2012CB 100MHZ 2GS/s). The pump power dependence PLQY studies were performed by exciting the samples with a 920 nm wavelength laser from a Ti: Sapphire while the output power was maintained constant by using a Glan-Thompson polarizer. Part of the laser power was tapped by a glass slide and monitored with a Keithley 6487 Picoammeter. The transmitted beam was focused at the entrance port of the integrating sphere and directed to the center of the sphere where the sample was situated. The diffused light from the sphere walls was collected by a multimode fiber of 200 μ m diameter and detected with an Ando AQ6317B optical spectrum analyzer. The data was collected and processed in a computer which measures 50 spectra, while normalizing them to the tapped optical power. All the measurements were performed at room temperature.
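As a rough numerical illustration of the integrating-sphere PLQY evaluation and of the cooling-efficiency relation from the Theory section, the short sketch below uses η_ext = N_ep/(N_ip − N_sip); all photon counts, absorption coefficients and the 1030 nm pump wavelength are placeholders chosen for illustration (the paper reports PLQY ≈ 0.99 and λ_f = 1003 nm for SYb02 but does not tabulate raw counts or the resonant absorption).

def external_plqy(n_emitted, n_incident, n_scattered):
    """eta_ext = N_ep / (N_ip - N_sip): emitted photons per absorbed photon."""
    return n_emitted / (n_incident - n_scattered)

def cooling_efficiency(eta_ext, alpha_r, alpha_b, pump_nm, mean_fluo_nm):
    """eta_c = eta_ext*eta_abs*(lambda_p/lambda_f) - 1, with eta_abs = alpha_r/(alpha_r + alpha_b)."""
    eta_abs = alpha_r / (alpha_r + alpha_b)
    return eta_ext * eta_abs * (pump_nm / mean_fluo_nm) - 1.0

eta_ext = external_plqy(n_emitted=9.9e6, n_incident=1.2e7, n_scattered=2.0e6)   # ~0.99
eta_c = cooling_efficiency(eta_ext, alpha_r=1.0, alpha_b=1e-3,                  # assumed absorption (cm^-1)
                           pump_nm=1030.0, mean_fluo_nm=1003.0)                 # assumed pump, reported lambda_f
print(f"eta_ext = {eta_ext:.3f}  eta_c = {100 * eta_c:.2f} %  (positive means net cooling)")

With these assumed numbers the balance comes out slightly positive, which is consistent with the conclusion that the low-doped, high-PLQY glass is the most promising cooling candidate; in practice the outcome is dominated by how small the background absorption really is.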
7,716.8
2016-02-26T00:00:00.000
[ "Materials Science" ]
BUCKLING LOAD OF RC COLUMNS EXPOSED TO ISO FIRE LOAD
The influence of the cross-sectional dimensions on the buckling load capacity of a reinforced concrete column exposed to ISO fire load is presented. The fire analysis is divided into two separate phases. In the first phase, the calculation of the temperatures over the cross-section of the concrete column is performed. Here, a more advanced hygro-thermal analysis is executed to take into account the influence of moisture on the distribution of the temperatures. In the second step of the fire analysis, the mechanical analysis is performed. The mechanical and thermal properties of concrete and reinforcement at elevated temperatures are used in accordance with EN 1992-1-2 (2004). For two different cross-sections, a parametric study has been performed. The critical buckling time and the critical buckling capacity as functions of the load and the slenderness of the reinforced concrete column have been determined.
INTRODUCTION
The reinforced concrete (in the sequel RC) columns, as well as beams, are the basic structural elements of each frame. Consequently, the behaviour of the columns substantially influences the behaviour of the frames as a whole. Failure of the column may occur due to material softening (stocky columns) or due to the phenomenon of buckling (slender columns). The deformability of the structure increases during fire, and the RC frames are exposed to the phenomena of buckling and stability problems even more. The aim of this paper is to show how different cross-sectional dimensions influence the load-carrying capacity of the column exposed to temperatures prescribed by the ISO fire curve. Due to the complexity of the present work, the fire analysis is divided into two separate phases. As input data for the first phase of the fire analysis, the time distribution of the gas temperature in the fire compartment is assumed in accordance with standard EN 1991-1-2 (2004) by the ISO fire curve. Next, a coupled hygro-thermal analysis in heterogeneous concrete is performed. In the second phase of the fire analysis, the mechanical analysis is accomplished, where the critical buckling load and the load-carrying capacity are determined. The main findings of the short parametric study are gathered in the conclusions.
Hygro-thermal analysis
The transfer of temperature, free water, and the mixture of dry air and water vapour in concrete is obtained by the solution of the equations of coupled heat and moisture transfer proposed by Davie et al. (Davie et al., 2006). The set of governing equations comprises the mass conservation of the three constituents - free water, water vapour and dry air - and an additional equation requiring the energy conservation to be satisfied (Eqs. (1)-(4)). In Eqs. (1)-(4), index i denotes the phases of concrete, namely i = FW is free water, i = V is water vapour and i = A is dry air. The unknown ε_i is the volume fraction of phase i, ρ_i is the density of phase i per m3 of gaseous mixture, and J_i is the mass flux of phase i. λ_E is the specific heat of evaporation, ε_D is the volume fraction of chemically bound water in the pores of concrete, ρ_FW is the density of free water, and t means time. In Eq. (4), (ρC) denotes the heat capacity of concrete, k is the thermal conductivity, the convective term relates to the energy transferred by fluid flow, λ_E is the specific heat of evaporation, λ_D is the specific heat of dehydration and T is the absolute temperature.
Making a sum of Eqs. (1) and (2) gives a system of three partial differential equations, which is solved numerically with a Galerkin-type finite element method. The primary unknowns are the temperature, T, the pressure of the gaseous mixture of water vapour and dry air, P_G, and the water vapour content, ρ_V. For an in-depth description of the heat and moisture transfer in concrete and its numerical formulation, see Davie et al. (2006) and Kolšek et al. (2013).
Mechanical analysis
This chapter briefly describes the alternative procedure for the determination of the generalized Euler buckling loads with the help of the linearized system of equations for the material nonlinearity of the columns in fire. In the stress-strain analysis of the RC column, bending as well as membrane deformations are considered.
The procedure for the determination of the buckling load is as follows: (i) we calculate the equilibrium state of the considered column, which is modelled by the kinematically exact planar beam theory of Reissner (Reissner, 1972), consisting of kinematic, equilibrium and constitutive equations with the appropriate boundary conditions; (ii) with the help of the theory of stability (Keller, 1970), which assures that the buckling load is equal to the buckling load of the related linearized problem, and with the solution obtained in point (i), we write a linear system of differential equations and solve it; (iii) we find the critical solution of the system presented in point (ii), which already represents the buckling load of the RC column. Therefore, for the determination of the generalized Euler buckling load of the RC column we have to solve a system of two non-linear equations, Eqs. (5) and (6), for two unknowns: the critical buckling load P_cr and the critical axial deformation ε_0,cr. In Eqs. (5) and (6), N_c is the constitutive axial force of the cross-section of the RC column and D is the quantity that is closely related to the determinant of the current tangent constitutive matrix of the cross-section, Eq. (7). In Eq. (7), C_11 denotes the components of the current tangent constitutive matrix of the cross-section. Eqs. (5) and (6) cannot be solved analytically and the solution needs to be obtained numerically. As the problem is governed by fully non-linear equations, and only Eqs. (5) and (6) need to be solved numerically, we name this kind of formulation 'semi-analytical'. The semi-analytical procedure is, in principle, the same for any end conditions (Krauberger, 2007), the only changeable parameter at the constant cross-section of the RC column during fire being the buckling length L_u. Therefore, Eqs. (5) and (6) are solved in an incremental way, incrementing either the time or the mechanical load, depending on whether the critical time or the critical load is sought. The time of the fire analysis is divided into small time intervals [t_i-1, t_i] and the stress-strain state at each time t_i is then determined iteratively. For an in-depth description of the mechanical analysis, see the work of Bajc et al. (2014).
By changing the temperature of the RC column, the mechanical and thermal properties of concrete and reinforcing bars may rapidly change as well, and their dependence on temperature should not be neglected. Here it is necessary to consider the principle of additivity of strains. We assume that the total strain increment of concrete, Δε_c, is the sum of the temperature, stress, creep and transient strain increments. The detailed description of the strain increments is presented in the work of Bratina et al. (2007).
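To give a feel for the incremental critical-load search described above, the following sketch replaces the full tangent constitutive matrix of the RC cross-section with a deliberately crude surrogate: a single temperature-dependent stiffness reduction applied to an elastic Euler column heated uniformly to the ISO 834 gas temperature. Only the ISO 834 expression is standard; the reduction curve, the uniform-temperature assumption and all numerical values are illustrative and do not reproduce the paper's hygro-thermo-mechanical model.

import math

def iso834_gas_temperature(t_min, T0=20.0):
    """ISO 834 standard fire curve: gas temperature in degC after t_min minutes."""
    return T0 + 345.0 * math.log10(8.0 * t_min + 1.0)

def stiffness_reduction(T):
    """Illustrative linear stiffness decay with temperature (placeholder, not the EN 1992-1-2 curves)."""
    return max(0.05, 1.0 - 0.0015 * (T - 20.0))

def critical_load(E20, I, L_u, T):
    """Simplified Euler-type critical load P_cr = pi^2 * k(T) * E20 * I / L_u^2 (kN)."""
    return math.pi ** 2 * stiffness_reduction(T) * E20 * I / L_u ** 2

E20, L_u = 3000.0, 400.0          # illustrative modulus (kN/cm^2) and buckling length (cm)
I = 30.0 * 30.0 ** 3 / 12.0       # second moment of area of the 30/30 cm section (cm^4)
for t in (0, 15, 30, 60, 120):    # minutes of ISO fire exposure
    T = iso834_gas_temperature(t)
    print(f"t = {t:3d} min  T_gas = {T:6.1f} degC  P_cr = {critical_load(E20, I, L_u, T):8.0f} kN")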
NUMERICAL EXAMPLE In the numerical example to follow, we analyse the influence of the cross-sectional dimensions on buckling of a RC column.In present short parametric study the critical buckling time and the critical buckling load at a chosen time of the fire spread as functions of the load and the slenderness of the RC column have been determined.The geometric, material and loading data of simply supported RC column are shown in Fig. 1.The RC column with length L, which is subjected to initial compressive load of different magnitudes, P = µ P cr,20°C (µ = 0.3, 0.5, 0.7), and to ISO fire load, is considered.Here, P cr,20°C denotes the critical buckling load of the identical RC column at room temperature T = 20 °C.We consider the concrete column with a quadratic cross-sectonal dimensions of b/h = 30/30 cm (Case C1-a) and b/h = 20/20 cm (Case C1-a*).The column is reinforced with 12 reinforcing bars in both cases, and the reinforcement ratio is taken to be ρ L = 1.5 %.The distance between the reinforcing bar and the nearest concrete surface is a, which is taken to be 5 cm.The material parameters of concrete and reinforcing steel at the room temperature remain unchanged in the parametric study and are selected as: compressive strength of concrete f c,20°C = 3.8 kN/cm 2 , tensile strength of steel f sy,20°C = 50 kN/cm 2 , and elastic modulus of steel E c = 20000 kN/cm 2 .The remaining material parameters needed for the description of constitutive laws are taken in accordance with EN 1992-1-1 ( 2004) and EN 1992-1-2 (2004). 3.1 The determination of the temperature field The RC column is on all four sides exposed to fire.The temperature of the gases in the fire compartment varies with time according to the ISO fire curve (see Fig. 1).We assume that the gas temperature around the column is constant.Consequently, the variations of temperature over the typical cross-section of the RC column during fire with time remain unchanged in the x-direction. The solution of the system of partial differential equations is obtained with the finite element method as described in Sec.2.1.Due to symmetry of the cross-section, only one quarter is considered and a regular mesh of 900 (C1-a) and 400 (C1-a*) four-node finite element, respectively, are used.The time interval Δ t = 0.5 s is chosen in the time integration (Hozjan, 2011). The remaining material parameters used in hygro-thermal analysis are taken in accordance with the paper of Davie et al. (2006). Mechanical analysis The analysis is carried out for three different initial mechanical load magnitudes µ=0.3, 0.5, 0.7 and the ISO fire load.The results are presented in Fig. 3(a).The columns with the smaller cross-section, b/h = 20/20 cm (C1-a*), have approximately 60 % smaller load carrying-capacity at room temperature, P cr,20°C , when compared to the same slender columns with the cross-section b/h = 30/30 cm (C1-a).Therefore, the mechanical load P is reduced in the same way.On the other hand, the critical time of the columns C1-a* in fire is reduced only for about 20 % to 40 % when compared to columns with the same slenderness and the cross-section C1-a (Fig. 3(a)).We have also analysed the influence of the cross-section dimensions on the remaining load carrying capacity of the column, P, previously exposed to 15 min fire, for three different initial load levels. 
We compare the behaviour of columns C1-a and C1-a*. The results are presented in Figs. 3(b), (c) and (d). Here all graphs have been normalized by the load-carrying capacities of the individual cross-sections C1-a and C1-a* at room temperature, P cr,20°C = 4042.8 kN and P cr,20°C = 1806.3 kN, respectively. The critical loads are presented only for the columns whose critical time is higher than the selected time t sel = 15 min. The results demonstrate that the initial magnitude of the mechanical load, P, has a small effect on the load-carrying capacity of the column in a short fire. After 15 min of fire duration, short columns (λ < 20) with cross-sections C1-a and C1-a* still keep about 90 % of the load-carrying capacity of the cross-section at room temperature at the same slendernesses. The effect of fire on the load-carrying capacity of slender columns (λ ~ 76) is, however, more pronounced, and amounts to 52 % and 43 %, respectively. Fig. 2(a) and (b) show the distributions of the temperature over the cross-sections C1-a (b/h = 30/30 cm) and C1-a* (b/h = 20/20 cm), respectively, for different times of fire exposure. As expected, the temperatures in the centre of the cross-section are higher for the reduced concrete section.
Fig. 1 A simply supported RC column. The geometrical, material and loading data.
Fig. 2 Temperature field for different cross-sectional dimensions at 15, 30, 60 and 120 min of fire exposure for (a) C1-a and (b) C1-a*.
Fig. 3 The influence of the cross-sectional dimensions and different initial magnitudes of the load on critical buckling time and critical buckling load: (a) critical time t cr; (b) critical buckling load for µ = 0.3; (c) µ = 0.5 and (d) µ = 0.7.
2,780.4
2016-01-18T00:00:00.000
[ "Engineering", "Materials Science" ]
RESIDUAL ACTION OF INSECTICIDES TO LARVAE OF Chrysoperla externa ( Hagen , 1861 ) ( Neuroptera : Chrysopidae ) UNDER GREENHOUSE CONDITIONS This work was designed to evaluate the residual action of the insecticides trichlorfon, triflumuron, endosulfan, fenpropathrin, chlorpirifos, tebufenozide and esfenvalerate, sprayed on cotton plants, to second-instar larvae of Chrysoperla externa (Hagen, 1861), under greenhouse conditions. The experimental design was completely randomized with ten replicates. Three larvae were released on each plant, in the 1, 12 and 23 day after pesticides spray. Tebufenozide and esfenvalerate were little persistent (class one), while trichlorfon, triflumuron and endosulfan were slightly persistent, decreasing the survival of C. externa larvae over 30%, up to 14 days after spray. Fenpropathrin and chlorpirifos caused mortality over 30%, up to 25 days after spray, being classified as fairly persistent. INTRODUCTION The use of insecticide to control of cotton pests, in general, does not take into account the effects of these chemicals on beneficial arthropods present on the crop.The preservation and maintenance of the natural enemies in the agroecosystem are essential to the establishment of the biological equilibrium and reduction of the production costs as well as to avoid side effects the chemicals to environment (Gravena & Cunha, 1991). The use of highly toxic and broad-spectrum action pesticides is the main cause of biological disturbance in a number of crops, giving rise to phenomena such as pest resurgence, occurrence of secondary pests and selection of populations of resistant insects.The use of selective chemicals is an important strategy within pest management programs, since it reduces the population of the phytophagous insects without affecting significantly the natural enemies.To maximize the compatibleness between the chemical and biological controls it is needed to know the selectivity and the conditions of use of an insecticide, in order to reduce its impact on the natural enemies. The insects of the family Chrysopidae are predators of many species of arthropods, and play an important role in the natural biological control of several crop pests.Among the chrysopids, Chrysoperla externa (Hagen, 1861), a common species in the Neotropical region, is reported to be abundant in cotton agroecosystens (Gravena et al., 1992). In view of the importance of C. externa in the biological control of cotton pests and scarcity of information regarding the impact of insecticides on that species of natural enemy, this work aimed to evaluate the residual action of insecticides used on cotton crops, to second-instar larvae of that predator. MATERIAL AND METHODS The work was conducted in a greenhouse at the Entomology Department of the Universidade Federal de Lavras -UFLA.Newly-hatched larvae of C. externa and from laboratory rearing were individualized in glass tubes 2.5 cm in diameter by 8.5 cm in height and fed on eggs of Anagasta kuehniella (Zeller, 1879) (Lepidoptera: Pyralidae) up to the second-instar (about 48 hours after ecdysis), when they were utilized in the trials.The insects were maintained in climatic chambers at 25±2 o C, RH 70±10% and 12h photophase. 
Seven insecticides utilized for control of cotton leafworm Alabama argillacea (Hübner, 1818) (Lepi-doptera: Noctuidae) were evaluated, by employing the highest dosages recommended by the manufacturers (Table 1).The chemicals were diluted in distilled water, by using a magnetic shaker, to allow complete homogenization.The control treatment was made up of distilled water. Thirty day-old cotton plants, cultivar IAC 22, planted in polyvinilchloride pots with a capacity of three liters, containing earth (60%) and cow manure (40%), were utilized.The plants were sprayed up to the point of dripping, utilizing a manual sprayer with capacity of 500 ml and, after one, 12 and 23 days, three second-instar larvae of C. externa were released on the upper third of each of them.The days for proceeding the releases were chosen randomly, within the interval proposed by the "International Organization for Biological and Integrated Control of Noxious Animals and Plants" -IOBC (Hassan & Degrande, 1996;Hassan, 1997) (Table 2). To prevent the escape of larvae, cages made of plastic bottles were use to cover the cotton plants.The upper end of each container was removed, fitting at the opening, a foam disk with a central orifice of diameter enough to allow the perfect fit to the plant's stem.Aiming to warrant the aeration in the inside of the bottles, a side opening 20 cm long x 10 cm wide was done, which was sealed with a organza type white fabric, affixed with adhesive tape.Each plant was covered with one of those containers, arranging the foam disk close to the soil surface.For fixation of those frames, in the substrate of each plot, two bamboo stakes 40 cm long were inserted, these being tied by a elastic strap on its upper end.The number of dead larvae in each treatment was examined after 48 hours' exposition to the chemicals, at 3, 14 and 25 days after spraying.The experimental design was that completely randomized, by utilizing ten plants per treatment, the plot being made up of one plant with three larvae of the predator. Data analysis: The reduction of the number of larvae caused by the action of the insecticides was compared with the control treatment, used as a parameter for classification of the chemicals' toxicity.This classification was determined according to the duration of the toxic activity of the compounds, that is, the interval of time in which its residues caused less than 30% of mortality, fitting into categories according to the methodology advised by the IOBC (Hassan & Degrande, 1996;Hassan, 1997) (Table 2). RESULTS AND DISCUSSION Tebufenozide and esfenvalerate presented a rapid loss of residual action after being sprayed on the cotton plants (Table 3), being fit in class 1 (little persistent).The low residual action of esfenvalerate confirms the results found by Carvalho et al. (2002) for that same specie, though another application methodology was utilized.The low mortality (<30%) provided by those chemicals on the third day after spraying, is probably due to a greater degradation of the insecticide molecules either on the cotton leaves or in the body of the larvae. According to Yu (1988), other mechanisms may be involved in the selectivity of tebufenozide and esfenvalerate to the larvae of C. externa, such as: less penetration of these chemicals through the cuticle or alterations in the target of actions of active ingredients.Guedes et al. 
(1992) reported that the lipophilic character of some insecticides, associated with the thickness and lipidic composition of the insects' cuticle and its trans-location as far as the target of action.Lipophylicity is inversely proportional to the solubility of the insecticide in water; the most lipophylic compounds by their chemical similarity with the cuticle, in general, present greater penetration in the insect's body. The insecticides trichlorfon, triflumuron and endolsulfan were in the class 2, being regarded as slightly persistent (Table 3).At 14 days from its application, those insecticides caused less than 30% of mortality for larvae which had contact with sprayed cotton leaves.As regards to trichlorfon, the results differ from those verified by Hassan et al. (1994) for Chrysoperla sp., who found that insecticide to be in class 1.That difference may be as related to physiological responses specific to the insects tested, inasmuch as it was not mentioned the species concerned.Lingren & Ridgway (1967), Hassan et al. (1987) and Toda & Kashio (1997) also found that larvae of Chrysoperla carnea (Stephens, 1836) were not affected negatively by the trichlorfon.The insecticides fenpropathrin and chlorpirifos caused less than 30% of mortality of the larvae only at 25 days after its application, being regarded as moderately persistent (Table 3).That may be explained by the adherence of those chemicals to the cotton leaves, ena-bling a longer residual period, causing the larvae to remain for longer exposed to the active ingredient.The persistence of other pyrethroids on Chrysoperla sp.larvae was investigated by Hassan (1992), who found fenpropathrin to be included in class 2 for different predators and parasitoids.Hassan et al. (1994), found a high mortality of larvae of Chrysoperla sp.under laboratory conditions, when treated with deltamethrin and fenvalerate, which were included in class 3 (moderately persistent).Under field conditions, Hassan et al. (1991) also observed a high mortality of that predator when treated with deltamethrin and fenvalerate, which were found to be in class 3. The high toxicity of chlorpirifos was also noticed by Balasubramani & Swamiappan (1997) in laboratory, who verified a highly deleterious effect on larvae of C. carnea when sprayed with that insecticide. On the basis on that results obtained with fenpropathrin and chlorpirifos, it is recommended that in the case of establishment of a Integrated Pest Management (IPM) for cotton crop by utilizing second-instar larvae of C. externa, the releases of this predator should be done after 25 days from the application of those chemicals. CONCLUSIONS a) The insecticides tebufenozide and esfenvalerate presented a low residual effect for second-instar larvae of Chrysoperla externa, when sprayed on cotton leaves. b) The insecticides trichlorfon, triflumuron and endosulfan are slightly persistent to the larvae of Chrysoperla externa causing mortality above 30% up to 14 days after application on cotton leaves. c) The insecticides fenpropathrin and chlorpirifos are moderately resistant to the larvae of Chrysoperla externa, causing mortality above 30% up to 25 days after application on cotton leaves. TABLE 1 - Commercial names, active ingredients, chemical groups and application rates of the insecticides utilized in the selectivity trial for Chrysoperla externa. 
TABLE 2 - Evaluation intervals, persistence levels and toxicity classes of pesticides according to the "International Organization for Biological and Integrated Control of Noxious Animals and Plants" -IOBC. TABLE 3 - Residual action of insecticides to secondinstar larvae of Chrysoperla externa, when applied on cotton plants in greenhouse.Lavras-MG, 2001.
2,217.6
2003-08-01T00:00:00.000
[ "Biology" ]
ANALYSIS OF MATHEMATICAL PROBLEM SOLVING SKILLS ON THE RELATIONSHIP AND FUNCTION MATERIAL OF GRADE VIII STUDENTS OF SMPN 6 PALU , INTRODUCTION Mathematics is one of the subjects that has an important role in life, both in academic life and everyday life.In mathematics there are various problems or problems that must be solved by students.Most students still have difficulty solving problems in mathematics because they still do not understand (Khasanah et al., 2021).Mathematics is not only a thinking tool that helps students find patterns, solve problems, and draw conclusions, but also as a tool that gives students a clear understanding of different ideas (Khadijah et al., 2018). Based on 2016's Permendikbud Number 22 (Sofyan et al., 2021), one of the objectives of learning mathematics is to solve mathematical problems which include the ability to understand problems, develop solution models, solve models and provide appropriate solutions.As stated by the National Council of Teachers of Mathematics (NCTM, 2000) that there are five basic mathematical abilities including problem solving ability, reasoning ability, communication ability, connection ability, and representation ability (Pramuswara N.A & Haerudin, 2022). Mathematical problem solving ability plays an important role in learning mathematics. The importance of mathematical problem solving is emphasized in NCTM Augustami which suggests that problem solving is an integral part of learning mathematics, so that problem solving and learning cannot be separated.Mathematical problem solving skills are important because: (a) problem solving includes the general objectives of teaching mathematics, (b) problem solving which includes methods, procedures, and strategies is a core and main process in the mathematics curriculum, and (c) problem solving is a fundamental ability in learning mathematics (Fariha & Ramlah, 2021). The stages of problem solving according to Polya's strategy (Febrianti et al., 2023) are understanding the problem, making a solution plan, implementing a solution plan and evaluating.As for some indicators in the problem solving stages, namely: understanding the problem is 1) students can determine information from what they know, 2) students can determine what information is asked in the problem, and 3) students describe the original problem in their own language.The indicator for developing a plan is where students can find a way to solve the problem.Indicators at the step of carrying out student solutions can use the method or strategy used to get results.While at the step to check back are: 1) students can check whether the solution steps used are correct, 2) students can check whether the results obtained are correct in solving the problem.Problem solving skills have not been in line with student achievement.This is in accordance with the survey results from The Trends International Mathematics and Science Study (TIMSS) 2015 and the Program for International Student Assessment (PISA) which show that the average score obtained by Indonesia is 397 and is ranked 44 out of 49 participating countries.While the 2018 PISA results ranked 72 out of 78 countries (Noviyana et al., 2018). 
There are several factors that influence student success in learning mathematics, including internal factors which include initial ability, intelligence level, learning motivation, Prima: Jurnal Pendidikan Matematika ◼ 325 Analysis Of Mathematical Problem Solving Skills On The Relationship And Function Material Rahim, Nurhayadi, Rizal, Idris learning habits, learning anxiety, learning motivation, and so on.In addition to internal factors, there are also external factors including the family environment, school environment, community environment, social and economic conditions and so on (Mawaddah & Anisah, 2015). In connection with this, the researcher made initial observations and conversations with one of the mathematics teachers at SMP Negeri 6 Palu.From the results of the interviews conducted, the researcher obtained information that mathematical problem solving ability is influenced by, confidence, shyness, nervousness, group learning atmosphere and gender.In addition, students also lack discussion together, unable to write down the information in the problem, lack of focus in learning so that they do not understand the material taught and the difficulty level of the problem. Seeing the things that must be mastered by students in the material of relations and functions, the mathematical problem solving ability of students becomes very important because students are required to be able to express an event from a relation and function problem into mathematical language or symbols, explain an idea in pictures, graphs, and algebra, compile mathematical models and their solutions, compile story problems, and be able to understand a mathematical presentation. Based on the description presented, research will be conducted to find out how students' mathematical problem solving skills.To answer these problems, the researcher gave the title: "Analysis of Mathematical Problem Solving Ability on Relation and Function Material of Class VIII Students of SMP Negeri 6 Palu". METHODS The type of research used is descriptive with a qualitative approach.With the aim of Analysis Of Mathematical Problem Solving Skills On The Relationship And Function Material Rahim, Nurhayadi, Rizal, Idris known and asked in the question completely.In question number 3 the subject did not write down the information that was known and asked about in the question but was able to state the information that was known and asked about in the question completely.This shows that the AG subject can fulfill indicator 1, namely at the stage of understanding the problem. At the stage of planning the solution, In the first question, the respondent is able to write and clarify how to describe an arrow diagram and state the complete set of ordered pairs.In question number 2, the subject wrote and explained how to determine the function f but was not precise in describing it in an arrow diagram.The subject does not add columns and points to connect relations in set A and set B. This shows that subject AG can fulfill indicator 2 at the completion planning stage. 
At the stage of solving the problem according to plan, subject AG answered question number 1 but was not correct in the solution set.In question number 2 the subject answered correctly but was not precise in describing the arrow diagram.Furthermore, in question number 3 the subject answered correctly but did not use any symbols in his answer and did not provide a conclusion on the final result of his answer.This shows that AG subjects can solve problems but have not really mastered them. At the re-checking stage, in question number 1 the subject did not re-check the answer so that the writing of the set of ordered pairs was still not correct.In question number 2, the subject also did not check the answer again so the arrow diagram was still correct but still lacking.Furthermore, in question number 3, the subject did not write a conclusion on the answer sheet.This shows that subject AG cannot fulfill indicator 4 at the re-examination stage. Figure 2. Medium Ability Subject Answers Based on the picture above, at the stage of understanding the problem.In question number 1, the AI subject wrote the information he knew in the question but was inaccurate and did not write down the information asked.The AI subject did not pay attention to the meaning of the question properly so that set A and set B were reversed.Then in the interview excerpt, the AI subject can state the information he knows and is asked about the question, but the answer is not quite correct.In question number 2, the AI subject wrote down the information he knew and was asked about in the question.Then in the interview excerpt, the AI subject was able to state the information he knew and asked about in the question. Furthermore, in question number 3 the subject did not write down the information that was known and was asked about in the question, but in the interview excerpt the AI subject stated the information that was known and asked about in the question.This shows that the AI subject can fulfill indicator 1 at the stage of understanding the problem. At the completion planning stage.In question number 1, the AI subject was able to write and explain how to draw an arrow diagram but it was not quite right because the subject was not careful in reading the question so that set A and set B were reversed.Then the subject stated the set of ordered pairs but it was not quite right because the arrow diagram was also not right.In question number 2, the subject can write and explain how to determine the function f but cannot draw the arrow diagram.This shows that the subject of AI is still not deep enough at the stage of planning a solution. At the stage of solving the problem according to plan, on question number 1 the AI subject answered but it was not quite right.In question number 2, the subject was only able Prima: Jurnal Pendidikan Matematika ◼ 329 Analysis Of Mathematical Problem Solving Skills On The Relationship And Function Material Rahim, Nurhayadi, Rizal, Idris to determine the function f but not draw the arrow diagram.In question number 3, subject AI did not answer on the answer sheet.This shows that the AI subject cannot fulfill the indicators at the stage of solving the problem according to plan. 
At the re-checking stage, in questions number 1, 2 and 3 the subject re-checks the answer so that set A and set B in question number 1 are reversed.Furthermore, the subject also did not write down the answer to question number 3, let alone draw a conclusion.This shows that the AI subject cannot complete the re-examination stage. Figure 3. Low Ability Subject Answers Based on the picture above, at the stage of understanding the problem, in question number 1 the subject wrote down the known information in the question but did not write down the information being asked, stated the information that was known and asked in the question completely.In question number 2, the subject stated the information that was known and asked about in the question was incomplete but did not write down the information that was known and asked about on the answer sheet.In question number 3, the subject did not write down the information that was known or asked about in the question and did not complete what was instructed in the question.This shows that the LD subject is still not at the stage of understanding the problem. At the stage of planning the solution, in question number 1, subject LD was able to write and explain how to draw an arrow diagram and state the set of ordered pairs but it was still not quite right.The subject did not use columns or dots to connect the relation being asked, whereas in the set of ordered pairs the subject answered correctly but not quite correctly.In question number 2, the subject did not write it on the answer sheet because he did not knowing students' mathematical problem solving skills in relation and function material.This research will be conducted in class VIII of SMP Negeri 6 Palu, located at Jl. Dewi Sartika No. 81 Palu, South Birobuli, South Palu sub-district, Palu city, Central Sulawesi.This research will be conducted in the odd semester of the 2023/2024 school year.The subjects selected in this study were three grade VIII students.The selection of subjects is based on the odd semester mathematics report card scores in the 2023/2024 school year.Based on the math report card scores, the subjects were grouped into high ability subjects, medium ability subjects and low ability subjects.The instruments used in this research are written tests and interviews.The written test in this study is a description test in the form of test questions related to relation and function material.This test is used to obtain data from students with problem solving skills which are then analyzed, so that researchers can find out the ability of students to work on the tests given.while interviews are used to classify student answers.RESULTS AND DISCUSSION This research was conducted by researchers on January 22-24 2024.In this study the researcher asked three questions containing 4 indicators of problem solving for class VIII students at SMP Negeri 6 Palu.There are indicators of problem solving in the questions.The research provided is understanding the problem, planning a solution, resolve problems according to plan, and carry out inspections again.Based on the results.The results of the students' work can be seen in the following picture. Figure 1 . Figure 1.High Ability Subject AnswersBased on the picture above, at the stage of understanding the problem, in question number 1 and question number 2 the subject wrote down the information known in the question completely but did not write down the information asked, stating the information
2,900.8
2024-05-29T00:00:00.000
[ "Mathematics", "Education" ]
WALL ORIENTATION EFFECT ON THE DETACHMENT OF A VAPOR BUBBLE Boiling is influenced by a large number of parameters; the angle of orientation constitutes one of these parameters which have a positive impact on the heat transfer. The dynamic of the bubble plays a significant role in the improvement of heat transfer during boiling. For this reason, we are located on the bubble scale and we simulated the detachment of vapor bubble in the liquid water on a heated surface, when the angle of orientation varies from 0 to 180°. We followed the evolution of the sliding of the bubble; it appears that the thermal boundary layer is disturbed and that the heat transfer coefficient reached major proportions and the sliding velocity of the bubble depends on the orientation of the wall. INTRODUCTION Boiling is a very effective mode of heat transfer, it allow to extract an enormous quantity of heat for low variations of temperature, and finds a broad application in chemical industry, petrochemical, food, and refrigeration. Nucleate pool boiling remains an extremely complex and imperceptible phenomenon, and depends on several parameters like underlined it Dhir (1998).Among these parameters, the effect of the inclination of the wall, studied by Kaneyasu et al (1988) for water at the atmospheric pressure; the orientation was varied from 0° to 175° compared to the horizontal one.A direct relationship was established between the angle of inclination and the heat transfer coefficient; this last increase when the angle increases, this is valid for the weak heat flows.On the other hand, for important heat flows, no relation is established with the angle of orientation.The comprehension of the phenomenon of boiling passes by the investigation at the microscopic level, to knowing dynamics of the bubble and its environment.Mukherjee and Dhir (2003) made the simulation of the growth and the detachment of a vapor bubble, as well as the coalescence of two and three vapor bubbles.Manickam and Dhir (2012) studied experimentally the sliding of a bubble by interferometry on the low side of a heated surface.They noted that the sliding of the bubble improves the heat transfer.Van der Geld et al (2014) presented measurements of the growth of vapor bubble and its detachment for some orientation, and compared with 2D numerical simulations.An analysis of the forces acting on the bubble during its sliding was examined. In this study, we simulated the dynamic of a bubble, during the sliding and the detachment from a hot wall, when the inclination varies; we have so compare, the velocities, the distance of sliding and the heat transfer when the angle go from 0 to 180 degrees. Method The dynamics of the bubble included the growth and the detachment.For the simulation of the detachment of the bubble, we supposed that it had reached a necessary volume to start its process.The equivalent starting diameter was given by using the correlation of Fritz (1935) detailed in the equation (1). ( ) The studied process is the detachment of the hot surface of a steam bubble; the liquid is at the saturation temperature.The simulation is focused in two dimensions, the flows are laminar, the temperature of the wall remains constant and the thermodynamic properties of each of the two phases are insensitive to small variations of temperature and pressure. 
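Equation (1), the Fritz (1935) correlation referred to above, did not survive extraction; it is commonly written as d = 0.0208 · θ · sqrt(σ / (g·(ρ_l − ρ_v))), with the contact angle θ in degrees. The sketch below evaluates that common form with approximate textbook properties of saturated water at 100 °C; the exact property values the authors used are in their Table 1 and may differ slightly.

import math

def fritz_departure_diameter(theta_deg, sigma, rho_l, rho_v, g=9.81):
    """Fritz (1935) correlation for the bubble departure diameter (m).

    theta_deg : contact angle in degrees
    sigma     : surface tension (N/m)
    rho_l, rho_v : liquid and vapor densities (kg/m^3)
    """
    return 0.0208 * theta_deg * math.sqrt(sigma / (g * (rho_l - rho_v)))

# approximate saturated-water properties at 100 C (not taken from the paper's Table 1)
d = fritz_departure_diameter(theta_deg=54.0, sigma=0.0589, rho_l=958.4, rho_v=0.598)
print(f"departure diameter ~ {d*1e3:.2f} mm")   # on the order of the 2.72 mm quoted above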
During the process, mass transfer from the liquid phase to the vapor phase is not included; the thermal and physical properties are presented in Table 1. All the properties are evaluated for water at atmospheric pressure and a temperature of 100°C, as shown in Table 1.

Initial Conditions
At the time of detachment, the bubble is regarded as a half-sphere of diameter 3.50 mm. The equivalent diameter given by the Fritz formula (1) for a contact angle of 54° is d = 2.72 mm. The properties of the liquid and vapor phases are taken for water at 100°C; the vapor temperature is considered equal to the saturation temperature. The wall temperature is 110°C, i.e. a wall superheat of ΔT = 10°C, and the pressure is taken equal to atmospheric pressure. The bubble is at the saturation temperature, while the liquid is superheated relative to the bubble to account for the overpressure of the bubble.

Governing equations
The VOF (Volume Of Fluid) model is used when two or more immiscible phases are present in a control volume; a volume fraction variable is then introduced for each fluid in every computational cell. If the volume fraction of the qth fluid in a cell is denoted α_q, then α_q = 0 means the cell is empty of fluid q, α_q = 1 means the cell is full of fluid q, and 0 < α_q < 1 means both fluids are present and the cell contains the interface. The properties of the mixture are determined by the p and q fluid phases present in the control volume and are assumed to remain constant in each phase; the density of each cell is calculated as ρ = Σ_q α_q ρ_q, and the viscosity and the other properties are obtained in the same way. The tracking of the interface is modeled by the continuity equation (3) for the volume fraction of the qth fluid, written as

∂α_q/∂t + ∇·(α_q u) = s_q,   (3)

where s_q is a source term, equal to zero by default. The volume fraction of the second phase is deduced from the fact that the volume fractions in a cell sum to unity; with this approach, the volume-fraction continuity equation is solved with a single-phase formulation. The flow is considered laminar; a single momentum equation is solved throughout the domain regardless of the phase or phase mixture, and the resulting velocity field is shared by the phases, the properties being weighted by the volume fractions. Under the assumption of incompressible, Newtonian fluids, the viscous stress tensor and the momentum conservation equation (4) reduce to

ρ(∂u/∂t + (u·∇)u) = −∇p + ∇·[μ(∇u + ∇uᵀ)] + ρg + F_σ,   (4)

where F_σ is the surface tension force at the interface, and the energy equation (5) is written as

∂(ρ c_p T)/∂t + ∇·(ρ c_p u T) = ∇·(k ∇T).   (5)

Boundary conditions
The computational domain has dimensions 20 mm × 10 mm; the lower part is the hot surface (wall), the right side is the inlet (velocity inlet), the left side is the outlet (pressure outlet) and the upper part is a symmetry boundary. Calculations are made in Cartesian coordinates; the bubble, resolved with 476 cells, is placed on the wall at the coordinates (15, 0). The geometry and the grid were generated using the software Gambit 2.4.6. On the heated surface, at y = 0: u = 0, v = 0, T = Tw; at the top of the domain, at y = ymax: u = 0, v = 0, T = Tl; on the two sides, at x = 0 and x = xmax: u = 0, v = 0, T = Tl. All flow velocities are initially zero; the bubble is considered to be in pool boiling.
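As a rough, illustrative sketch of the volume-fraction bookkeeping described above — not the PISO/geo-reconstruct solver actually used in the paper — the mixture-property rule and a crude one-dimensional upwind transport of α can be written as follows; the grid, velocity and time step are placeholders.

import numpy as np

def mixture_property(alpha_q, prop_q, prop_p):
    """VOF mixture rule: cell property = alpha_q*prop_q + (1 - alpha_q)*prop_p."""
    return alpha_q * prop_q + (1.0 - alpha_q) * prop_p

def advect_alpha_1d(alpha, u, dx, dt):
    """One explicit first-order upwind step of d(alpha)/dt + d(u*alpha)/dx = 0.

    A crude stand-in for the interface-tracking equation (3); real VOF codes use
    geometric reconstruction (e.g. geo-reconstruct) to keep the interface sharp.
    """
    flux = u * np.where(u > 0, alpha, np.roll(alpha, -1))   # upwind face flux
    return alpha - dt / dx * (flux - np.roll(flux, 1))

alpha = np.zeros(200); alpha[90:110] = 1.0                   # a 1-D "bubble" of vapor
rho = mixture_property(alpha, prop_q=0.598, prop_p=958.4)    # vapor / liquid densities (kg/m^3)
alpha = advect_alpha_1d(alpha, u=0.1, dx=1e-4, dt=5e-5)      # CFL = 0.05, stable for this sketch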
Parameters of simulation used After introduction of the initial conditions and the specifications of the boundary conditions, the control of the solution as well as initialization must be specified before the starting of calculation. The control of the solution consists of to adopt the precision of the coupling schema of pressure and velocity, the schema of discretization of the various variables and the under-relaxation factors.In this case, algorithm PISO for the pressure coupling and velocity and the schema of rebuilding of the interface geo-reconstruct were used.This last diagram is most precise; it makes a linear interpolation by using the volume fractions of the meshes close to the interface. First order diagrams were specified for the discretization of the Momentum parameters and Energy and body Weighted force for pressure.Default under-relaxation factors were used, where the step of time of 10 -4 seconds is selected. By introducing the initial conditions on the numerical domain using the Fluent software 6.3.26;we obtain the contours of fraction, at the initial state, as it is indicated on figure 1. Figure 2 shows a little effect of mesh; the grid chosen for simulation is 200×100 for mesh 2, the bubble moves, and the complete detachment from the hot wall takes place after 17 ms.we used two other grids 182×91, mesh 1 and 222×111, mesh 3. The detachment for the grid 1 was carried out after 18 ms and for the third at 19 ms; in this test the angle of inclination  is null; the grid 200×100 is retained, for the following simulations Figure 3 shows the movement of the bubble when the wall is horizontal, so at an angle θ = 0°; the contours schematized are the steps that follows a bubble at the detachment, they indicate the initial position, after 3ms, then half of the course and finally the phase of detachment. The base diameter of the bubble has a maximum value at the start, then it begins to decrease until it reaches zero at the detachment after 17ms; Mukherjee and Dhir (2003) made the simulation of growth and detachment of a bubble, after reaching a maximum base diameter after 20 ms, remains constant during 10 ms, then it decreases to the detachment after 16 ms, which agrees with present work. The same conditions were renewed for simulation by changing the direction of acceleration compared to the hot wall, thus we examines the angle of inclination  of the wall and its effect on the detachment of the bubble. RESULTS AND DISCUSSIONS After simulation of the detachment of the bubble of a horizontal wall, we varied the orientation angle  of the wall from 0° to 180°. Detachment and sliding Figure 4 a to c, shows the detachment and sliding of a bubble for different orientation of the hot wall; from 30 ° to 90 degree, more the angle increases, more the bubble slides over a much large length and detachment is delayed; at 30 ° the bubble slides 0.3 mm and lift off after a period of 20 ms, at 60 ° the bubble slides over a distance of 4.8 mm and detachment occurs after 63 ms; At 90 ° detachment occurs after 83 ms, the bubble will have traveled a distance of 8.3 mm; when the angle of orientation is greater than 90 ° the bubble is found below the wall it slides and remains stuck to the wall; for the duration of 83 ms, the bubble has moved a distance of 7.5 mm for an angle of 120 ° and 5.6 mm for 150 °; when the wall do rotation of 180 ° the bubble remains below, adhered to the wall without moving. 
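The sliding distances and detachment times quoted above can be turned into average sliding velocities with simple distance/time arithmetic on the reported values; the instantaneous velocities discussed in the next section are somewhat higher than these averages. The numbers below are copied from the text, not recomputed from the simulation.

# (sliding distance in mm, time to detachment or end of run in ms), as reported above
slide = {30: (0.3, 20.0), 60: (4.8, 63.0), 90: (8.3, 83.0), 120: (7.5, 83.0), 150: (5.6, 83.0)}

for angle, (d_mm, t_ms) in slide.items():
    v_cm_s = (d_mm / 1e3) / (t_ms / 1e3) * 100.0   # average sliding velocity in cm/s
    print(f"theta = {angle:3d} deg: {d_mm:.1f} mm in {t_ms:.0f} ms  ->  ~{v_cm_s:.1f} cm/s average")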
The bubble is shown in the figures in the initial state after 3 ms, during the half time and before the detachment; first the base area shrinks to the same location then the bubble begins to slide until detachment. Effect of contact angle Figure 5 shows the relation between the angle of orientation and the two parameters which are the distance of sliding of the bubble along the wall and the time of this same slide.We have choose a bubble of reference with 54° of contact angle and compare it with another bubble with 44 ° contact angle, It is noted that when the angle θ is between 0 and 30°, the increase in the distance is tiny just as for the time of slide which varies only slightly.From 30° to 60°, the increase is very important whether for the distance of slide or the time of slide.After 60° and until 90°, the increase in the distance and the time from slide remain important but less than for the band of 30 to 60°, for the two bubbles, but the bubble with 54° contact angle slide more than other mostly when the angle of orientation is equal to 60°. Sliding velocity The evolution of the sliding velocity of the center of the bubble according to time is indicated in Figure 6; for an orientation of the angle  of 30, 60 and 90°.For 30°, velocity increases during a time 0 to 5 ms to exceed the value of 1,5 cm/s, then it remains constant from 5 to 14 ms and finally it increases again until the detachment of the bubble where it will have reached a value of 1,89 cm/s.For the angle of 60°, the velocity of displacement increases gradually up to 43 ms to reach the velocity of 11.93 cm/s, then it decreases and remains constant until the detachment of the bubble has a velocity of 9.91 cm/s.When the angle of orientation is equal to 90°, the sliding velocity of the bubble increases with a slope more important than the two previous up to 39 ms where it will have reached a peak of 13.07 cm/s, then it decreases and is stabilized along a stage until the detachment where it leaves the wall with a velocity of 11.41 cm/s; for the three cases, we have a disturbance around 6ms then velocity decrease and increase again.In all the cases, the profile of the sliding velocity has the pace of a bubble rising in a liquid, the velocity progresses then tends towards a velocity limits of 21 cm/s, the latter is determined by the equation ( 7) given by Davies and Taylor (1950), it corresponds to a rise of a bubble without sliding on the wall and is higher than all velocities at the time of the detachment of the bubble.0 0, 707 The final velocity depends on the angle of orientation, and the time of change τ of the velocity is relatively the same one for the various angles of orientation  and corresponds to 30ms, when the bubble is downward, and 40 ms when the bubble is upward.Figure 8 shows final velocity at the moment of the detachment of the bubble for  varying from 30 to 90°, between 0 and 30 the velocities grows slightly, then at 30 ° it grows quickly, after 60° it grows again slightly until 90°; these values where compared to the experimental data from Maity (2000), and gives good agreement for  equal to 0, 30 and 60, for 90 there is a slight lag.  The differences between the experimental data and those of the simulation in Figures 8 and 9 are due to the simplifying hypotheses of the simulation and that the bubble is processed in 2D and not in 3D. 
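Equation (7) above is not legible in the extracted text beyond its 0.707 coefficient; the Davies and Taylor (1950) result is usually quoted in the form U = 0.707·sqrt(g·L) for some bubble length scale L. The sketch below leaves L as a parameter, since the paper's exact choice of length scale cannot be recovered, and back-computes the length scale implied by the quoted 21 cm/s limit velocity.

import math

def davies_taylor_velocity(length_scale_m, g=9.81):
    """Davies & Taylor (1950)-type rise velocity U = 0.707 * sqrt(g * L).

    The appropriate length scale L (equivalent diameter, radius of curvature, ...)
    is not recoverable from the garbled equation (7); it is left as an argument.
    """
    return 0.707 * math.sqrt(g * length_scale_m)

# length scale implied by the quoted 21 cm/s limit velocity
L_implied = (0.21 / 0.707) ** 2 / 9.81
print(f"implied length scale ~ {L_implied*1e3:.1f} mm, "
      f"U(L) = {davies_taylor_velocity(L_implied)*100:.0f} cm/s")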
Heat transfer coefficient The evolution of the heat transfer coefficient along the wall according to the time is determined by the equation ( 9), the surface A has been selected far from the influences of entry and exit of the domain. The heat transfer coefficient evolves in the same way for the three angles of orientation, the increase in the coefficient is the result of the duration of sliding of the bubble on the wall, and the angle does not have a direct influence on the heat transfer coefficient, as shown in the Figure 10.In the same way, when the bubble is in lower part of the wall and the angle of orientation varies from 120 with 150° the coefficient of transfer of heat practically evolves in the same way as shows in the Figure 11, but remains higher if the bubble is above the wall. Temperature The thermal boundary layer between the hot wall and the bulk plays an important role in the heat transfer execution.After an established mode, and in the absence of bubble, this thermal boundary layer has a certain thickness.Moreover when the bubble slides along the wall, it disturbs this layer and the heat transfer will be improved.The ten isotherms which formed parallel lines with the wall will be deformed by the foot of the bubble which slides and remains stuck to the wall.More the distance or time of sliding is important and more the deformation of the isotherms is important, and the transfer is improved as shows it the Figures 12a to 12c. The first figure 12a corresponds to an orientation of the wall of 30°, the disturbance is tiny and the isotherms follow the pattern of the bubble at the time of the detachment.The figure 12b corresponds to the orientation of 90°, the isotherms are influenced by the slide and the detachment of the bubble; When the bubble is in lower part of the wall, there is no detachment but the bubble at the time of its slide disturbs even more the thermal boundary layer and we obtains isotherms indicated on the figures 12c, for angle of orientation  equal to 150°. 
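Equation (9) is likewise missing from the extracted text. A definition consistent with the description above is a wall-averaged coefficient h(t) = q_w / (T_w − T_sat), with the wall heat flux taken from the near-wall temperature gradient over the strip A. The helper below is a sketch under that assumed definition, with made-up near-wall temperatures; it is not the expression the authors necessarily used.

import numpy as np

def wall_heat_transfer_coeff(T_wall, T_sat, T_first_cell, dy, k_liquid):
    """Wall-averaged heat transfer coefficient (W/m^2.K), assuming
    h = q_w / (T_wall - T_sat) with q_w = -k * dT/dy evaluated at the wall
    over a strip A away from the inlet/outlet (an assumed, not quoted, definition)."""
    q_w = -k_liquid * (T_first_cell - T_wall) / dy     # conduction flux from wall into the liquid
    return np.mean(q_w) / (T_wall - T_sat)

# toy numbers: 10 K wall superheat, first cell centres at dy = 0.05 mm from the wall
T_cells = np.array([382.0, 381.5, 379.0, 377.5])       # K, sampled along the strip A
h = wall_heat_transfer_coeff(T_wall=383.15, T_sat=373.15, T_first_cell=T_cells,
                             dy=5e-5, k_liquid=0.68)
print(f"h ~ {h:.0f} W/m2.K")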
The hydrodynamic parameters When moving a bubble, parameters such as velocity and pressure will be modified, we followed and compared these two parameters around the bubble at the beginning of the displacement at 3 ms and on detachment, in the case of an orientation with θ =90°, since at this angle the bubble travels a long distance and its velocity is the highest.Figure 13 shows the pressure which prevails in the bubble and that around at the interface in two positions, namely at the beginning of the displacement and when the bubble arrives at the top before detachment, the internal pressure is constant and That of the interface fluctuates around the interface while generally remaining lower, at the initial position the pressures are important since the bubble is below a height of liquid, this height decreases as the bubble moves towards The top and the pressure applied decreases to the point of detachment; the plot in figure 13 indicate the pressure according to the position on the interface, zero is the upper point.shows the velocities along the x-axis parallel to the wall and y perpendicular to it, during the displacement of the bubble the liquid will be antrained, for which we have plotted the velocity profile along the y-axis, at the top of the bubble.The velocities at the start at 3ms vary slightly along the x-axis, the bubble begins to antrain the liquid upwards, and a part goes down, along the y-axis, we have disturbance in both directions; at the detachment the liquid is pushed strongly upwards, the rest is antrained downwards indicating a large recirculation, the velocities along the y axis show clearly the recirculation of the liquid, and other part is pushed towards the wall. CONCLUSIONS A numerical analyze has been carried out to simulate the phenomenon of detachment of a vapor bubble on an inclined wall, for different angles from 0° to 180°.The conclusions are summarized as follows:  The bubble slides then is detached, the distance and the time of slide increase when the angle of orientation of the wall increases from 0 to 90°.  The sliding velocity increase then reached a final velocity, which depends on the angle of inclination.  The heat transfer coefficient evolves according to the angle of orientation, and increases quickly between 30 and 60°  The contour of the isotherms is strongly disturbed, particularly when the bubble is in lower part of the wall.  All the studied parameters, namely time of slide, the distance of sliding, the sliding velocity of the bubble before detachment and the heat transfer coefficient increase slightly when the angle of orientation varies from 0 to 30°, then they vary vigorously between 30 and 60° and finally they strongly increase between 60 and 90°, but less than between 30 and 60°. Fig. 1 Fig. 1 Contour of the bubble in an initial state Fig. 3 Fig. 3 Contour of bubble at θ = 0° a) Effect of distance of slide b) Effect of time of slide Fig. 5 Effect of contact angle on detachment Fig. 9 Fig. 9 Froude number according the inclined plate Figure 9 show us the Froude number of equation (8) plotted to angle (180 -), and compared with Maxworthy (1991) experiment; the result agrees well the present work when the bubble is downward the plate for two angles 120 and 150, and indicates when the angle increases the Froude number decreases. Fig Fig. 13 Pressure on and in bubble Table 1 Thermal and physical properties of water.
4,691.4
2017-08-29T00:00:00.000
[ "Physics", "Engineering" ]
ON A LOGARITHMIC STABILITY ESTIMATE FOR AN INVERSE HEAT CONDUCTION PROBLEM . We are concerned with an inverse problem arising in thermal imaging in a bounded domain Ω ⊂ R n , n = 2 , 3. This inverse problem consists in the determination of the heat exchange coefficient q ( x ) appearing in the boundary of a heat equation with Robin boundary condition. 1. Introduction. This paper investigates an inverse boundary coefficient problem in thermal imaging. The inverse problem under consideration consists in recovering the unknown heat exchange (heat loss) coefficient q(x) appearing in the heat equation with Robin boundary condition. This recovery may be obtained from boundary temperature measurements. In practice, this kind of inverse problem can be used to model the damage localization or corrosion detection in an inaccessible portion of some material object [23,28] and heat loss as well [8,17,32,34,33]. In this paper we consider a C 3 -smooth bounded domain Ω of R n (n = 2, 3) with boundary Γ := ∂Ω. We assume that there exist two subsets of Γ disjoint, Γ a and Γ i , with nonzero surface measure such that Γ := Γ a ∪ Γ i ;Γ a = ∅, andΓ i = ∅, (1.1) where Γ a denotes the "known" accessible portion of ∂Ω. We impose on the portion Γ i (may be inaccessible) a condition modeling the heat loss. The support of the applied heat flux g (stationary or time dependent) is contained in Γ a . The temperature distribution, denoted by u, satisfies the following initial-boundary value problem where ∂ ν denotes the derivative in the direction of the exterior unit normal vector ν to Γ. In engineering applications, the stationary heat flux g corresponds to an uniform heating of the outer surface. Typically, this is the case when heat or flash lamps are used to provide the input flux g. In this paper, we study separately the two cases: stationary and time-dependent heat flux. Here, it is worth noticing that the time-periodic heat flux problem was studied in [6]. Besides, for an initial condition u 0 ∈ L 2 (Ω), one can prove that the problem (1.2) has a unique solution u ∈ C([0, ∞[; L 2 (Ω)) ∩ C(]0, ∞[; H 2 (Ω)) ∩ L ∞ ((0, ∞); H 1 (Ω)) and to be able to get some estimations of the solution, we assume that for a fixed positive constant R 0 , we have In this paper, we restrict our attention to the Robin boundary condition ∂ ν u + q(x)u = 0 which, according to [7], corresponds to a Newton-cooling type of heat loss on the boundary with ambient temperature scaled to zero. Now, using the same notations as in [14], we introduce the vector space where S (R n ) is the space of temperate distributions on R n ,ŷ is the Fourier transform of the function y and B s,r (R n ) is a Besov space (see Chapter 10 in [22]). The Besov spaces B s,r (R n ) play an important role in generalizing many classic functional spaces. Moreover, the space B s,2 (R n ) is the Sobolev space H s (R n ). In addition, if s ∈ (0, 1) and r = ∞, we have B s,∞ (R n ) = C s (R n ) where C s is the Hölder space (see the Appendix of [21]). Using local charts and partition of unity, B s,r (∂Ω) is defined from B s,r (R n−1 ) in the same way as H s (R n ) is built from H s (R n−1 ). In our current study, some smoothness properties of the solution to the problem (1.2) are needed. In order to give sufficient conditions on data guaranteeing these smoothness properties, we introduce the following sets of boundary coefficients: where M > 0 is a given constant. 
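The display for problem (1.2) did not survive extraction. A hedged reconstruction consistent with the surrounding description (heat flux g supported on Γ_a, Robin heat-loss coefficient q supported on Γ_i, initial datum u_0) is the following LaTeX sketch; the authors' exact statement may instead split the boundary condition between Γ_a and Γ_i rather than writing a single Robin condition on Γ.

\begin{cases}
\partial_t u - \Delta u = 0 & \text{in } \Omega \times (0,\infty),\\
\partial_\nu u + q(x)\, u = g & \text{on } \Gamma \times (0,\infty), \quad \operatorname{supp} q \subset \Gamma_i,\ \operatorname{supp} g \subset \Gamma_a,\\
u(\cdot,0) = u_0 & \text{in } \Omega .
\end{cases}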
Let us recall that the function q(x) (heat exchange coefficient) in (1.2) is known as the Robin coefficient with a support in Γ i . So, the introduction of the space D 0 M is suitable and it will be useful in the rest of the paper. The inverse Robin problem has been investigated, theoretically and numerically by several authors (see e.g. [1,3,4,5,6,7,9,10,11,12,13,18,19,23,25,26,27,29,32,34]) where various type of stability estimates are given. In this work, we firstly established a double-logarithmic stability estimate for recovering the boundary coefficient. Our result is considerably different from those already established in [3,12] in two or three dimensional spaces where Ω is of class C ∞ . In this paper, Ω is just of class C 3 . Moreover in [3,12] the error q −q is only estimated in a compact subset of {x ∈ Γ i ; u q (x) = 0} by a simple logarithmic stability estimates. In this paper, the error q −q is obtained in the whole Γ i by a double-logarithmic stability. This means that the stability decreases less rapidly near the points where the solution to the problem (1.2) vanishes. To the best of our knowledge, our results generalize the majority of previous works. The paper is organized as follows. In Section 2, we first describe the considered inverse problem and we state the logarithmic stability estimate results. Then, we prove rigorously and separately some important results by considering some hypothesis introduced in Section 1. Finally, we give a stability result for the case of a time dependent heat flux. 2. Stability of the determination of Robin coefficient. In this section, we establish a double logarithmic stability estimate for the determination of a boundary coefficient appearing in a boundary value problem for the heat equation with Robin boundary condition. Firstly, we consider the case where the heat flux g is stationary (g := g(x)). Then, the time-dependent case (g := h(x, t)) can be deduced using the same techniques. Now, we introduce γ × (0, ∞) as a subset of the accessible sub-boundary Γ a × (0, ∞). We assume that γ does not meet supp(g) and the following condition holds true: The inverse problem associated to the problem (1.2) can be formulated as follows: Inverse problem. Determine q, supported on Γ i , from the boundary measurements where Γ i is assumed to be a priori known and u q is the solution to the problem (1.2) with coefficient q. The uniqueness results of the determination of Γ i in (1.2)-(2.2) can be inspired from [7] and [6]. More results for the stationary case can be found in the literature, where different kinds of methods are used to establish uniqueness, stability and numerical algorithms ( see e.g, [1,2,9,12,13,15] and references therein). 2.1. Double logarithmic stability estimate. In this subsection, we establish a double logarithmic stability estimate for the inverse problem in thermal imaging described above. Here, we require that the heat flux on a part of the accessible boundary Γ a remains the same at every time. Moreover, we assume that the boundary function g introduced in the problem (1.2) only depends on the space variable x. A similar result can be deduced for the case of a time dependent heat flux. This latter case will be detailed later. Now, let us consider the following assumption: where k ≥ 0 is an integer and χ Γa is the characteristic function related to Γ a . Note that, g is not identically equal to zero. Thus, let us assume that we have (1.1), (2.2), (2.1) and (A 1 ). 
By applying stationary heat flux, we get the following result. The proof of Theorem 2.1 is postponed to subsection 2.2. So, following [15] and [14], we modulate the problem of detecting corrosion damage by electric measurements. To this end, we consider the following boundary value problem Using Theorem 2.3 of [16] and the fact that B n−1/2,1 (Γ) is continuously embedded in B n−3/2,1 (Γ), we obtain that (for any q ∈ D) the problem (2.3) has a unique solution v q ∈ H n (Ω) (n = 2, 3). Moreover, we have where R 1 = R 1 (Ω, g, M ) denotes a positive constant. In addition, we introduce the auxiliary initial-boundary value problem (2.5) Next, let us denote by u q the solution to the problem (1.2) and let us decompose this solution into the sum where v q is the solution to (2.3) and w q is the solution to (2.5). 2.2. Proof of the stability estimate. To prove Theorem 2.1, we need the following technical lemmas. The first lemma, was proved in [3] and is based on the classical theory of analytic semigroups ( see e.g. [30,31] where C depending on Ω, Γ, R 0 , and R 1 but independent of q. Note that by virtue of Lemma 4.2 of [4], there exist δ * > 0 and 0 < r * ≤ diam(∂Ω) such that for any 0 < δ < δ * andx ∈ ∂Ω, we have where v q is the solution to (2.3) and B(x, r * ) means the ball, in R n , with centerx and with radius r * . To state the second technical lemma, we first consider an arbitrary function f ∈ C α (Γ) such that where M 0 > 0 is a given constant and [f ] α denotes the infimum over all constants L such that is well satisfied. Let us recall that f ∈ C α (Γ) if there exists L ≥ 0 such that (2.9) holds. We have the following result. 10) where v q is the solution to (2.3). Since |v q | ≤ δ in B(x, r) ∩ Γ, one can use Corollary 2.6 of [14] to justify the existence of a constant c 1 > 0 such that Applying the function ln to the previous inequality, we obtain where c is a positive constant depending on c 1 and Γ. Thus, (2.12) implies that , for all 0 < δ < δ * . Now, we are ready to give the main result of the current subsection. Proof of the Theorem 2.1. Let γ ⊂ Γ a ⊂ Γ be defined as in (2.1). Let u q (resp. uq) be the solution to the problem (1.2) with the coefficient q (resp. with the coefficientq). Analogously, we can define the functions v q and vq . Using relation (2.6), Lemma 2.1 and the trace Theorem, we can find a constant λ > 0 such that where the positive constants C and µ are given in Lemma 2.1. Letting t tends to infinity, we get v q − vq L 2 (γ) ≤ u q − uq L ∞ ((0,∞);L 2 (γ)) . (2.14) Besides, let us defineṽ = v q − vq satisfying ∆ṽ = 0. From the interpolation inequality and (2.4), we get: where C 1 , C 2 , C 3 are positive constants. Hence, by the trace Theorem, we get where C 4 is a positive constant. Now, let γ 0 ⊂⊂ γ and P v = ∆ṽ = 0. From Theorem 2.2, we have As done previously, we use the interpolation inequality, to obtain . By the trace Theorem, relations (2.4) and (2.14), we obtain L ∞ ((0,∞);L 2 (γ)) + τ ṽ H 2 (Ω) . Using relation (2.4) once again, we infer that The function f is non increasing with f (0 + ) = +∞ and f (+∞) = 0, so that the above equation has a unique solution min . -If 0 > min , and by choosing min = in (2.18) we have where min is sufficiently small satisfying for some c > c, ≤ e c / min .
2,735.6
2019-01-01T00:00:00.000
[ "Mathematics", "Engineering" ]
Tissue Profile of CDK4 and STAT3 as Possible Innovative Therapeutic Targets in Urinary Bladder Cancer Bladder cancer represents a global health problem. It ranks ninth in worldwide cancer incidence. In Egypt, carcinoma of the bladder is the most prevalent cancer, Bladder cancer has the highest recurrence rate of any malignancy. Certainly, suitable molecular diagnostic markers are required to improve the early detection of bladder cancer and then to prolong survival of patients. The present study was aimed to explore the expression of CDk4 and STAT3 in bladder cancer tissues as prospective for target therapy. Our studied groups showed higher values of CDK4 and STAT3 expression in malignant tissues (SCC andUC collectively) compared to cystitis, however, significantly higher values of CDK4 and STAT3 expression were detected in UC group compared to SCC group. Urothelial carcinomas with papillary patterns showed lower parameters of CDK4 and STAT3 expression compared to the non-papillary variant, with significant differences. Higher grades of UC showed significantly higher parameters of CDK4 and STAT3 expression compared to low grade ones. Muscle invasion increases the level of CDK4 and STAT3 expression parameters, compared to non-muscle invasive UC. Conclusion: Our results showed a good correlation of the expression patterns of both the cell cycle (CDK4) and inflammatory (STAT3) markers studied and might be helpful for suggesting more selective agents in the therapeutic scenario of bladder cancer in the near future. Potential biomarkers such as CDK4 andSTAT3 may be targets for molecular based therapeutic strategies in the prevention or management of bladder cancer. Future studies should explore molecular mechanisms of these proteins to define their roles in tumorigenesis. Introduction Bladder cancer represents the fifth most common malignancy worldwide and a major cause of cancer-related morbidity and death. Incidence and mortality rates have remained relatively constant over the past four decades (Siegel et al., 2013). Urothelial carcinoma is well known for its divergent differentiation resulting in distinct, morphological variants (Amin et al., 2009), Squamous differentiation, defined by the presence of intercellular bridges, keratinization, or both, is the most common variant, occurring in up to 20% of urothelial carcinomas of the bladder, followed by glandular differentiation (Wasco, 2007). Although urothelial carcinoma with squamous differentiation may be associated with poor prognosis, conflicting data have been reported regarding the role of squamous differentiation in unfavorable clinical outcomes (Domanowska et al., 2007). Because it is not uncommon for squamous differentiation to concurrently occur with other histological variants of urothelial carcinoma, such as micropapillary, glandular, and sarcomatoid differentiation, Abstract Bladder cancer represents a global health problem. It ranks ninth in worldwide cancer incidence. In Egypt, carcinoma of the bladder is the most prevalent cancer, Bladder cancer has the highest recurrence rate of any malignancy. Certainly, suitable molecular diagnostic markers are required to improve the early detection of bladder cancer and then to prolong survival of patients. The present study was aimed to explore the expression of CDk4 and STAT3 in bladder cancer tissues as prospective for target therapy. 
Our studied groups showed higher values of CDK4 and STAT3 expression in malignant tissues (SCC andUC collectively) compared to cystitis, however, significantly higher values of CDK4 and STAT3 expression were detected in UC group compared to SCC group. Urothelial carcinomas with papillary patterns showed lower parameters of CDK4 and STAT3 expression compared to the non-papillary variant, with significant differences. Higher grades of UC showed significantly higher parameters of CDK4 and STAT3 expression compared to low grade ones. Muscle invasion increases the level of CDK4 and STAT3 expression parameters, compared to nonmuscle invasive UC. Conclusion: Our results showed a good correlation of the expression patterns of both the cell cycle (CDK4) and inflammatory (STAT3) markers studied and might be helpful for suggesting more selective agents in the therapeutic scenario of bladder cancer in the near future. Potential biomarkers such as CDK4 andSTAT3 may be targets for molecular based therapeutic strategies in the prevention or management of bladder cancer. Future studies should explore molecular mechanisms of these proteins to define their roles in tumorigenesis. The cell cycle is regulated in part by cyclins and their associated serine/threonine cyclin-dependent kinases, or CDKs. CDK4, in conjunction with the D-type cyclins, mediates progression through the G1 phase when the cell prepares to initiate DNA synthesis, this gene plays a key role in development of cancer and tumorigenesis (Stacey and Premkumar, 2013). Signal transducer and activator of transcription 3 (STAT3) plays a prominent role in the growth and invasion of several types of solid tumors, it has a prognostic significance of the, upper urinary tract urothelial carcinoma (UTUC) and has role in the tumerogenesis (Matsuzaki , 2018). Materials and Methods Our study consists of 68 urinary bladder biopsy specimens. We got tissue sections from their archival material kept in the pathology department of Theodor Bilharz Research Institute (TBRI), Cairo, Egypt. The patients came to TBRI hospital seeking medical advice for their urinary symptoms and were cystoscopically examined, biopsied and their specimens were sent for histopathological diagnosis. The cases under study consist of chronic cystitis (7 cases), squamous cell carcinoma (14 cases) and urothelial carcinoma (47 cases). Immunohistochemical Method Anti-STAT3 (F-2) antibody (Santa Cruz, clone EP1sc 8019, Lot# E0217, mouse monoclonal IgG) and Anti-Cdk4 (DCS-35) antibody (Santa Cruz, Lot# D1217, mouse monoclonal IgG) were used for immunohistochemical (IHC) detection of the STAT3 and Cdk4 in tissue. Tissue sections were processed for IHC analysis as follows. IHC examinations were carried out on 4 μm thick sections. Antigen retrieval was performed with 10 mM sodium citrate buffer, pH 6.0, at 90°C for 30 min. Sections were incubated in 0.03% hydrogen peroxide for 10 min at room temperature, to remove endogenous peroxidase activity, and then in blocking serum (0.04% bovine serum albumin, A2153, Sigma-Aldrich, Shanghai, China, and 0.5% normal goat serum X0907, Dako Corporation, Carpinteria, CA, USA, in PBS) for 30 min at room temperature. Antibodies were used at a dilution of 1:100, added to tissue sections and incubated overnight at 4°C. Sections were then washed three times for 5 min in PBS. Non-specific staining was blocked 5% normal serum for 30 min at room temperature. 
Finally, staining was developed with diaminobenzidine substrate (DAB) and sections were counterstained with hematoxylin. PBS replaced the antibody in negative controls. All procedures were done at the pathology department of Theodor Bilharz Research Institute, Cairo, Egypt. Quantification of protein expression The expression of both CDK4 and STAT3 markers was semiquantitatively estimated as the percentage and intensity of staining. The proportion reflected the fraction of positive staining cells from 0% to 100% and the intensity score represented the staining intensity (score 0: no staining, score 1: weak positive, score 2: moderate positive, score 3: strong positive). Finally, multiplication of the score of intensity by the percentage of positive cells yields a total expression score ranging from 0 to 300. Statistical analysis SPSS for Windows, version 20 was used for statistical analysis (IBM Corporation, Armonk, New York, USA). The comparisons of quantitative variables were performed between two groups using ANOVA and student t-tests. Associations between each antigen expressions and other studied variables were evaluated by Chi square test and fisher test. Correlations between variables were studied using Spearmann's correlation test. The P value < 0.05 was considered statistically significant. Results Our study consists of 68 urinary bladder biopsy specimens. We got tissue sections from their archival material kept in the pathology department of Theodor Bilharz Research Institute (TBRI), Cairo, Egypt. These patients came to TBRI hospital seeking medical advice for their urinary symptoms and were cystoscopically examined and biopsied for histopathological diagnosis. All cases of squamous cell carcinoma showed high tumor grade and positive muscle invasion, while most cases of urothelial carcinoma were of low grade malignancy (65.2%) and showed negative muscle invasion (69,6%). The difference was statistically significant (p<0.001). Urothelial carcinoma showed significantly higher percent and score of CDK4 expression compared to both cases of cystitis and squamous cell carcinoma (p<0.05). Also, cases of urothelial carcinoma showed highly significant elevated STAT3 percent and scores of expression compared to cases of cystitis and squamous cell carcinoma (p<0.01). On the contrary, no significant differences in both CDK4 and STAT3 expression parameters were achieved between cases of cystitis and squamous cell carcinoma (p>0.05) ( Table 1 and Histogram 1). The mean score of STAT3 expression was significantly higher in high grade urothelial carcinoma compared to low grade one (p<0.05). No significant differences were detected in percent of cellular expression of both CDK4 and STAT3 between high and low grade urothelial non-papillary and papillary UC cases (p>0.05) ( Table 2). As regards muscle invasion, cases of muscle invasive bladder cancer (MIBC) showed significantly higher score of STAT3 expression compared to cases of non muscle invasive bladder cancer (NMIBC) (p<0.05). No significant differences were detected in CDK4 carcinoma cases (p>0.05) ( Table 2). Non-papillary UC showed higher percent and scores of STAT3 expression compared to papillary UC. The differences were highly significant (p<0.01). 
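The semiquantitative scoring described above (percentage of positive cells, 0–100, multiplied by an intensity score of 0–3, giving a total of 0–300) can be stated as a small helper; the function below is an illustrative sketch, not code used in the study.

def ihc_expression_score(percent_positive, intensity):
    """Total expression score = % positive cells (0-100) x staining intensity (0-3), range 0-300."""
    if not (0 <= percent_positive <= 100) or intensity not in (0, 1, 2, 3):
        raise ValueError("percent must be 0-100 and intensity one of 0, 1, 2, 3")
    return percent_positive * intensity

# e.g. 70% of tumour cells staining with moderate (2+) intensity:
print(ihc_expression_score(70, 2))   # -> 140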
On the contrary, no significant differences were detected in both the percent and the score of CDK4 Regarding the immunohistochemical study of urothelial carcinoma cases, there were no significant differences in both CDK4 and STAT3 expression parameters between bilharzial associated and non-associated groups (p>0.05) ( Table 2). Discussion Urinary bladder cancer (BC) is the second most common malignancy of all genitourinary tumors after prostate cancer with an incidence of 18.5 per 100,000 males and 5.7 per 100,000 females and approximately 25% of newly diagnosed bladder carcinoma patients present with aggressive muscle-invasive disease (Burger et al., 2013). In Egypt a significant decline in its frequency was observed becoming second after breast and contributing only 11 % of all cancers. Thus at the National Cancer Institute (NCI) (Moktar et al., 2007), pathology registry, bladder carcinoma constitutes 26 % of all cancers during time period (1985 -1989), but dropped to only 12 % in years (2003 -2004).The mean age was 46 years and male to female ration 4:1. This male predominance was most marked in squamous carcinoma (7:1) and least (2:1) in adenocarcinoma (El-Bolkainy et al., 2013). In the present study, according to the results shown there were no statistically significant differences in the percentage of diagnoses for male and females patients categories for both squamous cell carcinoma and urothelial carcinoma at 0.05 level. These results were in accordance with the results of (Fiorina et al., 2018) that showed no gender differences in the grade and the stages of either SCC or UC; however, the male-to-female ratio was higher for UC (5.9) than for SCC (2.7) In our study, we found that 50% of squamous cell carcinoma and 66% of chronic cystitis cases were associated with bilharziasis but only 4.3% of the urothelial carcinoma patients were positive for bilharziasis. The (Liu et al., 2012) supporting our findings, as he stated that the percentage of patients (38.3%; 18 of 47) who had urothelial carcinoma with squamous differentiation was significantly higher than those with pure urothelial carcinoma (17.3%; 34 of 197; P <0 .01. Moreover, prior studies have reported that urothelial carcinoma with squamous differentiation is more aggressive because of its resistance to radiotherapy, chemotherapy, and immunotherapy (Gofrit et al., 2016). According to (Felix et al., 2008), the main cause of SCC in developing countries is Schistosoma haematobium. Bladder cancer is one of the most prevalent malignancy among Egyptian males (16%), accounting for >7900 deaths per year, which is considerably higher than most other parts of the world (Feraly et al ., 2010 ). Certainly, suitable molecular diagnostic markers are required to improve the early detection of bladder cancer and then to prolong survival of patients. The present study was aimed to explore the expression of CDk4 and STAT3 in bladder cancer tissues. CDK4 is important for cell cycle G1 phase progression. The activity of this kinase is restricted to the G1-S phase, which is controlled by the regulatory subunits D-type cyclins and CDK inhibitor p16INK4a G1/S. Cyclin D-CDK4 complexes are major integrators of various mitogenic and antimitogenic signals. Also phosphorylates SMAD3 in a cell-cycle-dependent manner and represses its transcriptional activity. Component of the ternary complex, cyclin D/CDK4/CDKN1B, required for nuclear translocation and activity of the cyclin D-CDK4 complex. (Sheppard and McArthur, 2013). 
In our study, all cases of squamous cell carcinoma showed high tumor grade and positive muscle invasion, while most cases of urothelial carcinoma were of low grade malignancy (65.1%) and showed negative muscle invasion (70.0%). The difference was statistically significant (p<0.001). Another study that made on the non-bilharzial squamous cell carcinoma and transitional cell carcinoma, found that all squamous cell carcinoma patients were muscle invasive carcinoma patients and have staging at least pT2. This was in agreement with (Scosyrev et al., 2009) who stated that that squamous cell carcinoma have more aggressive than urothelial carcinoma in invasion and staging. On the other hand, they reported that in 205 patients with bilharzias bladder cancer; 122 (59.6%) were squamous cell carcinoma, 69 (33.7%) are urothelial carcinoma. In our Immunohistochemical study, we show significant difference between groups of different bladder lesions considering the score and percentage of cellular Squamous cell carcinoma showed higher score of CDK4 expression compared to both chronic cystitis and urothelial carcinoma, with statistically significant difference (p<0.001 by ANOVA test). Comparison between different studied groups showed higher values of CDK4 percent and score of expression in malignant tissues (SCC andUC) in relation to cystitis, with statistically significant differences (p<0.001 and p<0.01 respectively). However, significantly higher values of CDK4 percentage and scores of expression were detected in UC group compared to SCC group (p<0.01 and p<0.05 respectively). Urothelial carcinomas with papillary patterns showed lower parameters of CDK4 expression compared to the non-papillary variant, with significant differences for CDK4 percent (p<0.01) and score (p<0.01). Higher grades of UC showed significantly higher parameters of CDK4 expression. This was true also if we add cases of squamous cell carcinoma to cases of urothelial carcinoma and make the scale of grade to be low and high collectively. Muscle invasion increases the level of CDK4 expression parameters, compared to non-muscle invasive UC. These differences were statistically significant for CDK4 score (p<0.05), while non-significant for CDK4 percent (p>0.1). Nucleo-cytoplasmic expression of CDK4 was found to be associated with higher levels of expression, compared to the cytoplasmic expression pattern, which was usually associated with lower levels of CDK4 expression. The differences were statistically significant. The relation between inflammation and cancer progression has been well established (Yu et al., 2009) has reported that STAT3 is the major inflammation-promoting transcription factor shown to play important roles in cancer progression in various types of tumors. Several studies have reported STAT3 as an important factor in the development of bladder cancer (Degoricija et al., 2014). In this study, we evaluated the relation between the expression of STAT3 pathway protein in the two most common types of bladder cancer; namely urothelial carcinoma and squamous cell carcinoma and the relation between STAT3 expression and different grades and stages of bladder cancer, associated or non-associated with urinary bilharziasis in egyptian patients. The signal transducer and activator of transcription (STAT) proteins are intracellular transcription factors that mediate various aspects of cellular immunity, proliferation, apoptosis, and differentiation (Seif et al., 2017). 
The STAT family includes seven members (STAT1, STAT2, STAT3, STAT4, STAT5A, STAT5B, and STAT6). Among them, STAT3 has been shown to play a prominent role in tumor growth and invasion (Yu et al., 2009). In response to cytokines and growth factors, STAT3 is phosphorylated by receptor-associated Janus kinases (JAK), forms homo-or hetero-dimers, and translocates to the nucleus where it acts as a transcription activator (Yu et al., 2014). Stat 3 is a latent transcription factor that normally resides in the cytoplasm. Upon growth factor/cytokine receptor or non-receptor tyrosine kinase-mediated activation, Stat3 rapidly translocates into the nucleus where it binds to consensus promoter region and activates target gene transcription . Our findings comparing different studied groups showed higher values of STAT3 percent and score of expression in malignant tissues (SCC andUC) compared to cases of cystitis, with statistically significant differences (p<0.001 and p<0.01 respectively). however, significantly higher values of STAT3 percentage and scores of expression were detected in UC group compared to SCC group (p<0.01 and p<0.05 respectively). These results are in accordance with (Matsuzaki et al. 2018) univariate logistic regression analysis of variable parameters associated with patient prognosis. It showed that the STAT3 score, nuclear STAT3 score, pathological stage lymph node involvement, lymphovascular invasion, and tumor grade were associated with both progression-free survival and cancer-specific survival. Two subtypes of bladder urothelial carcinomas exist: noninvasive papillary and muscle-invasive cancer. Evidence supports that these 2 subtypes develop through their own independent pathologic and molecular pathways, although certain overlap does exist (Goebell PJ., et al 2010) The vast majority of muscle-invasive cancers arise de novo from carcinoma in situ (CIS) without prior clinical progression through noninvasive papillary lesions (Wu, 2005) . In our current study we have demonstrated that urothelial carcinomas with papillary patterns showed lower parameters of STAT3 expression compared to the non-papillary variant, with significant differences for STAT3 intensity (p<0.05), percent (p<0.01) and score (p<0.01). STAT3 has been implicated in the progression from carcinoma in situ to invasive bladder cancer . In particular, STAT3 signaling acts as an important downstream mediator of inflammatory cytokines, such as IL-6 and IL-17, which are released during bladder tumorigenesis due to chronic inflammation (i.e., smoking, persistent urinary tract infections) [Stat3 activation in urothelial stem cells leads to direct progression to invasive bladder cancer . Present study shows the relation between STAT3 expression parameters and tumor grade of urothelial carcinoma (UC). Higher grades of UC showed significantly higher parameters of STAT3 expression. Comparable results has been reported by ( Matsuzaki et al., 2018), patients with high grade tumor , are patients with high STAT3 tumor and have a significantly higher risk of both disease progression (p = 0.008) and cancer-specific mortality (p = 0.008). Bladder cancer was proved by previous studies to be related to bilharzial infection, however, during our current work, we did not find statistically significant differences between bilharzial associated and non-associated bladder cancer. 
We suppose that the relation between STAT3 expression and bilharzial association was confounded by the fact that most cases of bladder cancer associated with bilharziasis were of the squamous carcinoma variant, which is almost invariably invasive in nature. In conclusion, our results showed a good correlation between the expression patterns of the cell cycle (CDK4) and inflammatory (STAT3) markers studied, and might be helpful for suggesting more selective agents in the therapeutic scenario of bladder cancer in the near future. Potential biomarkers such as CDK4 and STAT3 may be targets for molecularly based therapeutic strategies in the prevention or management of bladder cancer. Future studies should explore the molecular mechanisms of these proteins to define their roles in tumorigenesis.
4,342.8
2020-02-01T00:00:00.000
[ "Biology" ]
Relationships between computer-extracted mammographic texture pattern features and BRCA1/2 mutation status: a cross-sectional study Introduction Mammographic density is similar among women at risk of either sporadic or BRCA1/2-related breast cancer. It has been suggested that digitized mammographic images contain computer-extractable information within the parenchymal pattern, which may contribute to distinguishing between BRCA1/2 mutation carriers and non-carriers. Methods We compared mammographic texture pattern features in digitized mammograms from women with deleterious BRCA1/2 mutations (n = 137) versus non-carriers (n = 100). Subjects were stratified into training (107 carriers, 70 non-carriers) and testing (30 carriers, 30 non-carriers) datasets. Masked to mutation status, texture features were extracted from a retro-areolar region-of-interest in each subject’s digitized mammogram. Stepwise linear regression analysis of the training dataset identified variables to be included in a radiographic texture analysis (RTA) classifier model aimed at distinguishing BRCA1/2 carriers from non-carriers. The selected features were combined using a Bayesian Artificial Neural Network (BANN) algorithm, which produced a probability score rating the likelihood of each subject’s belonging to the mutation-positive group. These probability scores were evaluated in the independent testing dataset to determine whether their distribution differed between BRCA1/2 mutation carriers and non-carriers. A receiver operating characteristic analysis was performed to estimate the model’s discriminatory capacity. Results In the testing dataset, a one standard deviation (SD) increase in the probability score from the BANN-trained classifier was associated with a two-fold increase in the odds of predicting BRCA1/2 mutation status: unadjusted odds ratio (OR) = 2.00, 95% confidence interval (CI): 1.59, 2.51, P = 0.02; age-adjusted OR = 1.93, 95% CI: 1.53, 2.42, P = 0.03. Additional adjustment for percent mammographic density did little to change the OR. The area under the curve for the BANN-trained classifier to distinguish between BRCA1/2 mutation carriers and non-carriers was 0.68 for features alone and 0.72 for the features plus percent mammographic density. Conclusions Our findings suggest that, unlike percent mammographic density, computer-extracted mammographic texture pattern features are associated with carrying BRCA1/2 mutations. Although still at an early stage, our novel RTA classifier has potential for improving mammographic image interpretation by permitting real-time risk stratification among women undergoing screening mammography. Electronic supplementary material The online version of this article (doi:10.1186/s13058-014-0424-8) contains supplementary material, which is available to authorized users. Introduction Epidemiologic studies have consistently demonstrated that elevated mammographic density is a strong and independent risk factor for sporadic breast cancer, conferring relative risks of 4- to 5-fold when comparing women with high versus low mammographic density [1]. Although mammographic density has a strong heritable component [2-10], it is currently being debated as to whether mammographic density is associated with hereditary breast cancer risk [11,12]. Up to half of all hereditary breast cancer cases can be attributed to autosomal dominant mutations in two genes, BRCA1 and BRCA2 [13].
Among women with BRCA1/2 mutations, nearly 50% may be expected to develop breast cancer by age 50 years [13]. The ability to identify high-risk patients through analysis of mammographic images could have clinically significant implications for breast cancer screening and prevention strategies. Utilizing a computer-assisted method to characterize percent mammographic density (PMD), we have previously reported that mammographic density is not associated with BRCA1/2 mutation status [14], a finding consistent with those from prior studies [12,15-18]. In contrast, Huo et al. and Li et al. used computerized radiographic texture analysis of a retro-areolar region-of-interest (ROI) to distinguish between mutation carriers and low-risk women; mutation carriers had a breast parenchymal texture pattern that was characterized as being coarse with low contrast [19,20]. Multiple investigators have evaluated whether texture-based features capture a component of risk beyond that of mammographic density [19,22-26,31,32], but only Huo et al. and Li et al. have suggested that this method might accurately classify subjects according to BRCA1/2 mutation status [19,20]. These findings, though promising, were based on the analysis of 30 BRCA1/2 mutation carriers. This study represents replication and validation of their results in a larger, independent dataset. Study populations and data collection The study populations have been described previously [14]. Briefly, the NCI Clinical Genetics Branch Breast Imaging Study evaluated breast cancer screening modalities in women who were at high genetic risk of breast cancer. From 2001 to 2007, 200 women were enrolled in this study, including 170 women with proven deleterious BRCA1/2 mutations and 30 proven mutation-negative women from the same families. Participants were seen at the NIH Clinical Center (NCI Protocol #01-C-0009; NCT-00012415) and underwent a physical examination, nipple fluid aspiration, breast duct lavage, standard clinical four-view screening mammogram and breast magnetic resonance imaging (MRI), which were reviewed by the study radiologist (CKC). See prior reports for additional details related to study design [33,34]. The NCI Institutional Review Board (IRB) approved the study, and all participants provided informed consent. The NCI/National Naval Medical Center (NNMC) Susceptibility to Breast Cancer Study was a cross-sectional study of the association between mammographic density and genes involved in estrogen metabolism. From 2000 to 2006, 219 women with a documented personal history of breast cancer and 488 controls were enrolled. Participants were enrolled from the patient population at the NNMC and other referring institutions and the NIH Clinical Center (NNMC Protocol #NNMC.2000.0010; NCI Protocol #00-C-0079; NCT-00004565). Mammograms obtained within the year prior to enrollment were reviewed by two study radiologists (CKC and CEG). Study participants did not undergo BRCA1/2 mutation testing. Five-year Gail assessment [35] and Pedigree assessment tool (PAT) [36] scores were calculated for all controls. The PAT is a point-scoring system that uses family cancer history to identify women who are at high risk of hereditary breast cancer (that is, >10% risk of being a BRCA1/2 mutation carrier) [36-38]. A PAT score ≥8.0 has been associated with 100% sensitivity and 93% specificity for detecting mutation carriers, and a PAT score <8.0 has been associated with a negative predictive value of 100% [36].
For the current study, control subjects with low scores by both models were classified as having low risk of breast cancer; they were highly unlikely to be BRCA1/2 mutation carriers. The IRBs of the NNMC and NCI approved this study, and all participants provided written informed consent. Participants from both studies completed self-administered questionnaires which captured demographic characteristics, current weight and height, medical and reproductive history, and personal and familial history of cancer. Questionnaire items were compared between studies, and common response categories were combined in order to create a harmonized analytic database. Analytic sample A flow diagram of the criteria utilized to derive the analytic sample of BRCA1/2 mutation carriers and non-carriers is depicted in Figure 1. The NCI Clinical Genetics Branch's Breast Imaging Study After excluding 22 women with prevalent breast cancer (11 BRCA1 carriers, 11 BRCA2 carriers), one BRCA1 carrier with prevalent ovarian cancer, and five women with missing mammographic density readings (three BRCA1 carriers, one BRCA2 carrier, and one non-carrier whose mammograms were given to the patients for care in their home communities prior to being digitized), the final study population included 143 mutation carrier and 29 non-carrier women (the latter from mutation-positive families) eligible for analysis. Of these, images from six mutation carrier and three non-carrier women were deemed ineligible for analysis of computer-extracted texture features for various reasons (for example, breast area too small for ROI placement, image artifacts, et cetera), resulting in a total of 137 mutation carriers (88 BRCA1- and 49 BRCA2-positive) and 26 non-carriers in our analytic sample. The NCI/NNMC Susceptibility to Breast Cancer Study For the purposes of this report, the analytic sample was restricted to controls with available mammographic density readings, who were determined to be at low-to-average breast cancer risk. After excluding controls with missing density readings (n = 226), 262 potentially eligible women remained. Of these, 153 women had a 5-year Gail score ≥1.67, three women were missing Gail scores, 15 women had PAT scores ≥8, and one woman had a personal history of skin cancer, type unspecified; these 172 women were excluded, resulting in 90 non-carriers eligible for analysis. Of these, images from 16 women were deemed ineligible for analysis of computer-extracted texture features for the reasons described above and were excluded, yielding 74 women at low-to-average risk of breast cancer for our analytic sample. Medians (ranges) for their maternal PAT, paternal PAT and 5-year Gail scores were 0 (0, 7), 0 (0, 5), and 1.2 (0.3, 1.6), respectively. Given the rarity of BRCA1/2 mutations in the general population, and the low PAT scores, these 74 women were assumed to be mutation-negative. For the sake of simplicity, combining these women with the 26 known mutation-negative subjects from the Breast Imaging Study, we use the term "non-carriers" in this report to describe these 100 women. (Figure 1 caption: Flow diagram depicting the eligibility criteria used to derive the analytic sample of BRCA1/2 mutation carriers and non-carriers; the analyzable groups comprised 74 low-risk women judged very unlikely to be BRCA mutation carriers and, together with the proven mutation-negative subjects, 100 women designated "non-carriers" in this analysis. PAT, Pedigree assessment tool.)
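For readers who want to reproduce this kind of eligibility filtering on their own data, a minimal pandas sketch is given below. The DataFrame layout and column names are our own assumptions (the actual analytic database is not public), but the thresholds mirror the criteria stated above.

```python
import pandas as pd

def derive_low_risk_controls(controls: pd.DataFrame) -> pd.DataFrame:
    """Apply the control-eligibility rules described above.

    Assumed columns: pmd (percent mammographic density), gail_5yr,
    pat_score, skin_cancer_history (bool), texture_ok (bool).
    """
    eligible = controls[
        controls["pmd"].notna()                  # density reading available
        & controls["gail_5yr"].notna()           # exclude missing Gail scores
        & (controls["gail_5yr"] < 1.67)          # exclude 5-year Gail score >= 1.67
        & (controls["pat_score"] < 8)            # exclude PAT scores >= 8
        & ~controls["skin_cancer_history"]       # exclude personal history of skin cancer
    ]
    # drop images unsuitable for texture analysis (small breast area, artifacts, ...)
    return eligible[eligible["texture_ok"]]
```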
Assessment of mammographic density Analog mammographic films from both studies were digitized at 0.095 mm (267 dots per inch) in pixel size and 8-bit quantization in gray level. The details of the digitization process have been described previously [14]. Participants from both studies had standardized, quantitative calculations of PMD measured in digitized craniocaudal views by the same experienced study mammographer (CKC), using an interactive computerized thresholding method developed at the NIH Clinical Center (MEDx™ version 3.44, Medical Numerics, Germantown, MD, USA). We have previously reported that the intra-observer agreement for PMD assessed in 100 paired sets using MEDx was 0.89 [14]. In addition, we found that Cumulus™ measures of PMD were strongly and positively correlated with those assessed by MEDx (r = 0.84, P <0.0001) [14]. Computerized assessment of mammographic parenchymal patterns Regions-of-interest (ROIs) measuring 256 by 256 pixels were manually selected by the same investigator (LL) without knowledge of BRCA1/2 mutation status, from the central breast region behind the nipple on digitized craniocaudal projections ( Figure 2). Detailed explanations of the effects of ROI extraction, ROI size, and ROI location on RTA have been reported elsewhere [20]. These ROIs were used in the subsequent analytic step to extract and characterize the gray-level magnitude-based and parenchymal texture-based features of the digitized mammograms. Radiographic texture analysis (RTA) of computer-extracted features The detailed descriptions of the 38 computer-extracted parenchymal texture features (mathematical descriptors used in the RTA) have been reported previously [19,20,[27][28][29][30][39][40][41]; their feature numbers, names and definitions are summarized in Additional file 1: Table S1. For ease of interpretation, gray-level magnitude (M)-based features were assigned alpha-numeric descriptors ranging from M1 to M9, and texture (T)-based features were assigned descriptors ranging from T1 to T29. We assessed the internal reliability of the reader's ROI placement by randomly submitting a masked set of 91 mammograms (Susceptibility to Breast Cancer Study (n = 27); Breast Imaging Study (n = 64)) for re-selection of the ROIs and re-analysis by the RTA algorithms. The intraclass correlation coefficient (ICC) was calculated to assess the intra-observer reliability of the RTA features following manual re-selection of the ROIs. Statistical analyses Selection of participants for the training and testing datasets After exclusions, 237 subjects were eligible for analysis: 137 BRCA1/2 carriers, 100 non-carriers. We divided these women into a training set used to develop discrimination models to distinguish carriers from non-carriers, and a testing set used to evaluate how well the discrimination model distinguished carriers from non-carriers. From the 100 non-carriers, 6 were randomly selected from each noncarrier quintile of age, for a total of 30 non-carriers, to comprise the testing set. Likewise, from the 137 mutation carriers, 6 women were randomly selected from each carrier quintile of age, yielding 30 carriers for the testing set. The remaining 177 women comprised the training dataset. Baseline characteristics were compared between BRCA1/2 mutation carriers and non-carriers within the training and testing datasets using the two-sample t-test for independent samples. We assumed equal variances for continuous measures, and used the chi-square test for discrete measures. 
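As context for the feature extraction described above, the following is a minimal NumPy sketch of how two co-occurrence-based descriptors of the kind used in RTA (Energy and the maximum co-occurrence frequency) and one gray-level magnitude feature could be computed from a 256 × 256 ROI. It is not the authors' RTA implementation; the quantization level, pixel offset, and function names are illustrative assumptions.

```python
import numpy as np

def glcm(roi, levels=64, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    The 8-bit ROI is requantized to `levels` gray values before counting
    co-occurring pixel pairs separated by `offset` (row, column).
    """
    q = (roi.astype(np.float64) / 256.0 * levels).astype(int).clip(0, levels - 1)
    dr, dc = offset
    a = q[max(0, -dr):q.shape[0] - max(0, dr), max(0, -dc):q.shape[1] - max(0, dc)]
    b = q[max(0, dr):, max(0, dc):][:a.shape[0], :a.shape[1]]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1.0)
    return m / m.sum()

def texture_features(roi):
    p = glcm(roi)
    return {
        "energy": float(np.sum(p ** 2)),   # homogeneity: high for uniform patches
        "max_cooc": float(p.max()),        # largest co-occurrence frequency
        "ave_gray": float(roi.mean()),     # gray-level magnitude feature
    }

# Example with a synthetic 256x256 patch standing in for a digitized mammogram ROI
rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
print(texture_features(roi))
```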
Stepwise feature selection using linear discriminant analysis Utilizing the 177 subjects in the training dataset, we employed stepwise feature selection using linear discriminant analysis, in which RTA features were reiteratively added and removed from the group of selected features based on a feature selection criterion, that is, the Wilks' lambda [42,43]. In each iteration step, linear discriminant analysis was used to calculate the discriminant scores, which were then used to compute the Wilks' lambda. The F-statistic was applied to determine whether a particular feature contributed significantly (P-value <0.05) to the performance of the linear discriminant analysis in each step. Details of stepwise feature selection using linear discriminant analysis are described in Additional file 2. The stepwise feature selection was performed 177 times, by leaving out one woman from the training set each time. To be included as a classifier for distinguishing carriers from non-carriers, a feature had to be selected in at least half of these 177 analyses. Merging of computer-extracted features The RTA features selected in the linear discriminant analysis were combined using a Bayesian artificial neural network (BANN) algorithm (Additional file 2). The output from BANN was converted to an estimate (probability score) of the likelihood of being within the BRCA1/2 mutation carrier group. These probability scores were evaluated for their capacity to serve as an image-based marker of risk in the independent testing data set by assessing whether their distribution differed between BRCA1/2 mutation carriers and non-carriers. In order to assess how mammographic density might influence the discrimination performance, we also developed (training data) and tested (testing data) a modified BANN classifier in which percent mammographic density was forced to be included along with the same selected RTA features. Both the linear discriminant analysis and the BANN algorithm were completed in MatLab™ (The MathWorks, Inc. Natick, MA, USA). Performance evaluation and related statistical analyses Spearman's rank correlation coefficient was used to describe the relationships between the selected computerized texture features with PMD, age and each other. The ability of the BANN-trained classifier to distinguish between BRCA1/2 mutation carriers and non-carriers was evaluated in the testing dataset using several approaches. We evaluated the relation between the BANN-trained classifier output and BRCA1/2 mutation status in univariate and multivariable logistic regression analysis, first adjusted for age as a continuous variable, and then adjusted for age and PMD. For comparison purposes, we evaluated the relationships between (a) PMD alone, and (b) the modified BANN-trained classifier, which included PMD with BRCA1/2 mutation status in both univariate and multivariable logistic regression analysis adjusted for age. In sensitivity analyses, we additionally adjusted for baseline characteristics that differed by mutation status. Because carriers were on average approximately 10 years younger than non-carriers [14], we also performed agematched sensitivity analysis in the testing data. First, we applied the BANN-trained classifier from the original training dataset to testing datasets restricted to pairs of BRCA1/2 mutation carriers and non-carriers who were randomly selected and matched on age within ±3 years (that is, 19 mutation carriers, 19 non-carriers) and ±1 year (that is, 17 mutation carriers, 17 non-carriers). 
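The published selection was carried out in MatLab; as a rough illustration only, the sketch below mimics the procedure in Python: greedy forward selection scored by Wilks' lambda inside a leave-one-case-out loop, keeping features chosen in at least half of the runs. The partial-F entry/removal test of a true stepwise procedure is replaced here by a fixed improvement threshold, which is our simplification.

```python
import numpy as np

def wilks_lambda(X, y):
    """Wilks' lambda for a two-group problem: det(W) / det(T); smaller means better separation."""
    Xc = X - X.mean(axis=0)
    T = Xc.T @ Xc                                        # total scatter matrix
    W = sum(((X[y == g] - X[y == g].mean(axis=0)).T @
             (X[y == g] - X[y == g].mean(axis=0))) for g in np.unique(y))
    return np.linalg.det(W) / np.linalg.det(T)

def forward_select(X, y, min_gain=0.01, max_features=6):
    """Greedy forward selection by Wilks' lambda (stand-in for the stepwise F-test)."""
    selected, current = [], 1.0
    while len(selected) < max_features:
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        lams = [wilks_lambda(X[:, selected + [j]], y) for j in candidates]
        j_best, lam_best = candidates[int(np.argmin(lams))], min(lams)
        if selected and current - lam_best < min_gain:
            break
        selected.append(j_best)
        current = lam_best
    return selected

def leave_one_case_out_selection(X, y):
    """Repeat the selection with one case left out each time; keep features chosen in >= half the runs."""
    counts = np.zeros(X.shape[1], dtype=int)
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        for j in forward_select(X[keep], y[keep]):
            counts[j] += 1
    return np.flatnonzero(counts >= len(y) / 2), counts
```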
Within the age-matched testing datasets, the Wilcoxon signed rank test was used to examine the mean paired difference in the BANN probability score between carriers and noncarriers. We performed a similar paired difference analysis of BANN probability scores based on the selected features and PMD. In an additional sensitivity analysis, we removed women older than age 55 years from both the training and testing datasets, and repeated the analysis conducted with the combined dataset. The utility of the computer-extracted RTA features, as well as the output from BANN in the task of differentiating the two groups, was also evaluated by using receiver operating characteristic (ROC) analysis [44,45]. The area under the fitted ROC curve (AUC) was used to evaluate the inherent discriminant capacity of the decision variable. The AUC measures the probability that a randomlyselected carrier will have a greater probability score than a randomly-selected non-carrier. The ROCKIT™ software package (ROCKIT, version 1.1b) [46] was used to evaluate the statistical significance of the difference between two AUC values (that is, the AUC from the BANN-trained classifier was compared with the AUC from PMD alone) [47]. We used two methods to obtain age-adjusted estimates of the AUC values explained by the BANN-trained classifier. In the first method, we restricted our test-set to pairs age-matched within ±3 and ±1 years, as defined above. In an alternate approach, we computed individual AUCs within age strata, in which the testing dataset was divided into three age strata: 25 to <35 years, 35 to <45 years, and 45 to 55 years. The AUCs were computed within each age stratum, and then were averaged to yield the AUC across the age strata. Except where noted above, analyses were completed using SAS statistical software (SAS 9.2 software, SAS Institute Inc., Cary, NC, USA). Probability values <0.05 were considered to be statistically significant. All tests of statistical significance were two-tailed. Distribution of patient characteristics in the training and testing datasets The baseline characteristics of BRCA1/2 mutation carriers and non-carriers stratified by training and testing datasets are shown in Table 1. Compared with noncarriers, the BRCA1/2 mutation carriers were statistically significantly younger, more likely to be white, nulliparous or to have a later age at first birth, and to have undergone surgical menopause. As previously reported, age-adjusted mean PMD did not differ between BRCA1/2 carriers and non-carriers [14]. Because women were randomly selected from age quintiles within each risk group for the training and testing datasets, the age distribution of non-carriers in the training set (n = 70) was similar to that of the testing set (n = 30) (P = 0.44). Likewise, the age distribution of the carriers in the training set (n = 107) was similar to those in the testing set (n = 30) (P = 0.95). The distributions of PMD within the risk groups were also similar between training and testing sets (non-carriers: P = 0.66; BRCA1/2 carriers: P = 0.47). There were no statistically significant differences in body mass index (BMI) between the risk groups or between the training and testing datasets. Descriptive characteristics of selected computer-extracted features The ICCs between duplicate measurements of the 38 computer-extracted RTA features for the 91 women with repeated readings ranged from 0.79 to 0.99, documenting high reliability of ROI selection and analysis (Additional file 1: Table S1). 
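The two evaluation steps just described (the odds ratio per one-SD increase in the classifier probability score, and the AUC averaged over age strata) can be sketched as follows with simulated data. scikit-learn's roc_auc_score and statsmodels' Logit are standard library calls; the variable names, toy data, and age-stratum boundaries are illustrative assumptions rather than the authors' SAS/ROCKIT analysis.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def or_per_sd(score, age, carrier):
    """Age-adjusted odds ratio per one-SD increase in the probability score."""
    z = (score - score.mean()) / score.std()           # standardize the score
    X = sm.add_constant(np.column_stack([z, age]))      # columns: const, z, age
    fit = sm.Logit(carrier, X).fit(disp=0)
    return np.exp(fit.params[1]), np.exp(fit.conf_int()[1])   # OR and 95% CI

def age_stratified_auc(score, carrier, age, strata=((25, 35), (35, 45), (45, 56))):
    """AUC computed within age strata and then averaged, one way to adjust for age."""
    aucs = []
    for lo, hi in strata:
        m = (age >= lo) & (age < hi)
        if len(np.unique(carrier[m])) == 2:     # a stratum needs carriers and non-carriers
            aucs.append(roc_auc_score(carrier[m], score[m]))
    return float(np.mean(aucs))

# Toy test set: 1 = mutation carrier, 0 = non-carrier
rng = np.random.default_rng(0)
carrier = rng.integers(0, 2, 60)
score = 0.4 * carrier + rng.normal(0, 0.35, 60)         # BANN-like probability score
age = rng.uniform(25, 55, 60)
print(or_per_sd(score, age, carrier))
print(age_stratified_auc(score, carrier, age))
```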
Additional file 1: Figure S1 shows the number of times that each feature was selected in the 177 leave-one-case-out feature selection analyses of the training data. Of the 9 gray-level magnitude- and 29 texture-based computerized features explored using the training dataset, two gray-level magnitude-based (that is, M1: AVE; M2: MinCDF) and two texture-based features (that is, T1: Energy; T2: MaxF (COOC)) were selected more than half the time, and were therefore included in subsequent BANN models. A third gray-level feature, "Balance", was selected in sensitivity analyses in which the training dataset was truncated at the upper age-limit of mutation carriers. The distribution of values for the selected features of Energy and Balance are shown in the scatter plot in Figure 3. This plot demonstrates that the parenchymal texture features of mutation carriers tend to have low Energy, that is, they are less homogeneous, with a coarse pattern. Table 2 presents descriptive information related to the selected features. On average, mutation carriers tended to have lower values for the gray-level magnitude-based features, and texture-based features were less homogeneous as compared with the non-carriers. With regard to the three selected gray-level magnitude-based features, the feature characterizing the average gray value within the ROI ("AVE") was positively correlated with PMD (r = 0.31, P <0.0001), whereas the Balance feature was inversely correlated with PMD (r = −0.32, P <0.0001). A weak inverse correlation was observed between the MinCDF feature (that is, the gray value corresponding to the 5% region cutoff on the cumulative density function) and PMD (r = −0.13, P = 0.04); MinCDF was positively correlated with age (r = 0.23, P = 0.0005). Modest statistically significant inverse correlations were observed between PMD and both of the selected texture-based features, Energy and MaxF (COOC), which are measures of image homogeneity (Energy: r = −0.30, P < 0.0001; MaxF (COOC): r = −0.24, P = 0.0002). These selected texture-based features were positively correlated with age. The selected gray-level magnitude-based features (AVE, MinCDF, and Balance) were strongly correlated with each other; however, of the three gray-level magnitude-based features, only MinCDF was statistically significantly and positively correlated with the two selected texture-based features (Additional file 1: Table S2). The selected texture-based features, Energy and MaxF (COOC), were strongly and positively correlated with one another (r = 0.90, P <0.0001) (Additional file 1: Table S2). There were no statistically significant mean differences in the selected computer-extracted feature measures between the training and testing data sets (P-value range from Wilcoxon rank sum test = 0.17 to 0.45; data not shown). Likewise, the descriptive characteristics of and correlations between the selected computer-extracted features in the testing dataset were consistent with those observed for the training and testing datasets combined (data not shown). Relationships between computer-extracted mammographic features and BRCA1/2 mutation status: original training and testing datasets Table 3 shows the results for the ability of the BANN-trained classifier, developed using the selected feature subset, to distinguish between BRCA1/2 mutation carriers and non-carriers in the independent testing dataset.
The AUC (standard error, SE) for the BANN-trained classifier of 0.68 (0.07) was an improvement over the AUC (SE) for PMD alone (0.59 (0.07)); however, the two AUC statistics were not significantly different from one another (P = 0.52), likely due to the small sample size. One SD increase in the probability score from the BANN-trained classifier, developed using the features selected in the original training dataset, was associated in the testing data with about a two-fold increase in the odds of predicting BRCA1/2 mutation status in both unadjusted (odds ratio (OR) = 2.00, 95% CI: 1.59, 2.51, P = 0.02) and age-adjusted (OR = 1.93, 95% CI: 1.53, 2.42, P = 0.03) models. Additional adjustment for PMD did not alter the observed age-adjusted OR. The findings were nearly identical when the BANN-trained classifier, modified to include PMD (that is, Features + PMD in Table 3), was used, and when adjusting for baseline characteristics that differed by mutation status (that is, parity, age at first birth, oral contraceptive use, and surgical menopause) (data not shown). (Table 3 footnotes: 1 Four features were selected by the trained classifier: MinCDF, Energy, AVE, and MaxF (COOC); percent mammographic density was not selected by the trained classifier but was forced into the models where noted. 2 Odds ratios, per unit increase in percent mammographic density. 3 Odds ratios, per one SD increase in probability score from trained classifier; SD from both models = 0.342. AUC, area under the curve; N/A, not applicable; PMD, percent mammographic density; SE, standard error. P-values <0.05 are shown in bold font.) Relationships between computer-extracted mammographic features and BRCA1/2 mutation status: sensitivity analyses utilizing an age-matched testing dataset By virtue of the Breast Imaging Study eligibility criteria, the BRCA1/2 mutation carriers were on average approximately 10 years younger than the non-carriers. We therefore conducted a series of age-matched sensitivity analyses. First, the testing dataset was restricted to pairs of BRCA1/2 mutation carriers and non-carriers matched on age within ±3 years (Additional file 1: Table S3). The mean paired differences in the probability scores from the trained classifiers developed using selected features alone and the features plus PMD were statistically significantly greater than zero (P = 0.02 and P = 0.02, respectively). Using the same age-matched testing dataset, the corresponding AUC (SE) values for the BANN-trained classifier without and with PMD were 0.71 (0.09) and 0.72 (0.08), respectively. When matching on age within ±1 year, the findings were similar, although the mean paired difference in the probability score was no longer statistically significant (P = 0.06 for features alone and P = 0.08 for features + PMD). Computing AUCs within age strata yielded comparable results (data not shown). We performed additional sensitivity analyses by removing women above age 55 years (the upper limit of age among the Breast Imaging Study participants) from both the training and testing datasets. This resulted in 96 women in the training data (48 mutation carriers and 48 non-carriers) and 38 women in the testing data (19 carriers and 19 non-carriers) matched on age within ±3 years. The mean paired difference in the probability score from the BANN-trained classifier, developed using the newly-selected feature subset (MinCDF, MaxF (COOC), Balance), was of borderline statistical significance (P = 0.055). Forcing PMD into the BANN-trained classifier did not substantially alter our ability to distinguish between BRCA1/2 mutation carriers and non-carriers (P = 0.06). The AUC (SE) for the BANN-trained classifier to distinguish between BRCA1/2 mutation carriers and non-carriers was 0.72 (0.09) for features alone and 0.71 (0.09) for the features plus PMD. The results from these sensitivity analyses are consistent with those from our primary analyses based on the original testing dataset. In contrast to the differences we observed in the BANN-trained classifier between BRCA1/2 mutation carriers and non-carriers, we did not observe any statistically significant mean paired differences in PMD between the test-set pairs age-matched within ±3 or ±1 years or when using the age-restricted dataset (Additional file 1: Table S3, P = 0.83, P = 1.00, and P = 0.83, respectively). (Figure 3 caption: Scatterplot of the computer-extracted parenchymal features of Energy and Balance for BRCA1/2 mutation carriers and non-carriers. Energy, a texture-based feature, was identified as distinguishing between carriers and non-carriers; Balance, a gray-level magnitude-based feature, was selected in age-matched analyses. Compared with non-carriers, mutation carriers tended to have a parenchymal texture with low Energy.) Discussion We investigated relationships between computer-extracted mammographic texture features and BRCA1/2 mutation status among women without breast cancer, and identified novel mammographic texture features (AVE, MinCDF, Energy, MaxF (COOC)) that appear to distinguish BRCA1/2 mutation carriers from non-carriers. We had previously observed no difference in percent density obtained from the entire mammogram by BRCA1/2 mutation status in this same population, motivating our search for new informative parenchymal characteristics based on radiographic texture analysis within a retro-areolar ROI. These associations changed minimally when we included PMD in models with the four selected texture features. Thus, the associations we have identified between specific RTA features and mutation status are independent from any possible modifying effect of mammographic density, which in both our prior work and that of others appears no different in mutation carriers than that observed in the general population [12,14-18]. The strength of the RTA feature associations was attenuated when mutation carriers were age-matched to non-carriers, likely due to reduced sample size. Our study adds to the existing RTA literature [19,20] by analyzing the largest number of mutation carriers yet studied in this manner, and our findings indicate that computer-extracted mammographic features provide some additional information for identifying women likely to carry BRCA1/2 mutations. The RTA classifier we have identified could prove a useful adjunct to mammographic interpretation both in women from families with many affected relatives in whom no genetic susceptibility has yet been identified and in families known to have mutations in these genes. However, because the positive predictive value of such a test would be low in the general population, owing to the rarity of these mutations, the strength of the association we found is not high enough for screening a general population to identify candidates for mutation testing.
The texture-based features Energy and MaxF (COOC), which describe the spatial distribution pattern for tissue homogeneity, and AVE and MinCDF, which provide gray-level magnitude information on tissue denseness, were the strongest RTA predictors of mutation status within a given ROI. The RTA texture-based features selected in this study characterize similar parenchymal attributes found in previous studies on digitized screen/film mammograms [19,27,28,30], such that BRCA1/2 mutation carriers tend to have retro-areolar parenchymal patterns that are coarse in texture. It is important to note that a given parenchymal attribute may be described by multiple computer-extracted features. For example, image homogeneity can be measured by Energy and the largest number of a gray-value pair in the co-occurrence matrix (MaxF (COOC)), as selected in this study, or by the first moment of the power spectrum (FMP) or Coarseness, which Huo et al. and Li et al. previously found to be associated with BRCA1/2 mutation status [19,27]. In addition, our findings are consistent with two case-control studies reporting that mammograms of coarse texture are associated with increased breast cancer risk [23,24]. In these studies, however, simultaneous inclusion of the texture features in a model with PMD did not improve breast cancer risk prediction [23,24]. Although we found that the selected texture features significantly improved our ability to distinguish between mutation carriers and non-carriers when compared with PMD alone, ours was a cross-sectional study evaluating features associated with BRCA1/2 mutation status rather than subsequent risk of developing breast cancer. Prior studies have questioned the importance of mammographic density for breast cancer risk prediction among BRCA1/2 mutation carriers [11]; further research is warranted to investigate the predictive value of computer-extracted texture features among this high-risk patient population. We currently have no information on the association between the RTA classifier and the risk of breast cancer per se among BRCA mutation carriers. While it may seem logical to assume that women with the BRCA-related RTA mammographic texture pattern will actually be at increased risk of breast cancer, that fact has not yet been established. Further clinical development of the RTA classifier will require proof of this hypothesized association; we strongly recommend that a new study with that question as its primary study endpoint be undertaken. Mutation carriers tended to have lower values for the RTA gray-level magnitude-based features selected in this study, suggesting that their breasts were less dense in the retro-areolar region as compared with the non-carriers. This finding is inconsistent with prior studies suggesting that mutation carriers have gray-level magnitude-based features that are low in contrast [19,30]. It is possible that differences in film digitizers and/or digital mammographic image acquisition systems between studies could influence RTA, particularly for the gray-level magnitude-based features, which have been previously shown to be sensitive to the effects of variable gain [48]. Consistent with the idea that texture-based features are more robust than gray-level magnitude-based features across systems of varying gain [48], a prior study, which utilized full-field digital mammograms (FFDM) to identify high-risk features, resulted in selection of only spatial distribution texture-based features [49].
Hence, the gray-level magnitude-based features that were related to mutation status in our study population may not be generalizable to FFDM. This is not surprising as image processing of FFDM permits the degree of contrast in the image to be manipulated, such that contrast may be increased in the dense areas of the breast in order to maximize mammographic sensitivity [50]. As clinical practice is rapidly shifting toward digital breast imaging, this work should set the stage for applying the strategies described herein to newer images from mutation carriers as they become available. Our research method was also limited by the need for manual placement of retro-areolar ROIs; however, manual ROI reselection for a randomly selected subset of participants was found to be highly reliable, both in this study and as reported previously [20]. Automation of ROI placement could be applied in future work. Our study had several strengths, including the largest number of mutation carriers and non-carriers yet studied in this manner, assessment of digitized images that was completely masked to mutation status and evaluation of the proposed classifier in independent test data. Although the discriminatory accuracy of the RTA classifier was modest (AUC = 0.68), and for a diagnostic test we would like to have a higher value, the AUC does compare favorably with AUC statistics reported in most breast cancer risk models [51]. Further, we performed extensive sensitivity analyses, and our findings persisted in the presence of multiple potential confounding factors, including age and PMD. Although statistical power was limited for the age-matched sensitivity analyses, these analyses provided an important confirmatory way to control for age and results were consistent in their suggestion of a relation between computer-extracted mammographic texture pattern features and mutation status. Thus, our findings warrant validation in larger independent clinical studies. The biology of mammographic density is poorly understood [52,53], and the biologic correlates of texture-based features are even less well-characterized. Nevertheless, evidence from animal models and human breast tissues suggests underlying biological differences in the molecular histology and pathology of the breast by BRCA1/2 mutation status [54][55][56][57]. While it is possible that our results may be related to true anatomical differences between carriers and non-carriers as reflected in their parenchymal patterns, other biologic factors, such as biochemical differences, also need to be explored. Conclusions Several noteworthy clinical implications flow from our results. First, we confirm an important observation, previously made by Huo et al. and Li et al. [19,20] but not widely appreciated in the clinical community: the digitized mammographic image contains computerextractable information not captured during routine radiologic interpretation which may permit improved, real-time risk stratification among women undergoing screening mammography. Nonetheless, it is early days for the tools used in this analysis; further development of these techniques might identify additional, more stronglycorrelated features. In the current instance, our computer model was significantly correlated with the presence of deleterious mutations in BRCA1/2, conferring a two-fold increase in the likelihood of being a mutation carrier, per one SD increase in the probability score. 
If the interpreting radiologist were to be made aware of this information while reading clinical mammographic images, it could alter image interpretation by increasing the prior probability of disease in subjects with the BRCA-related pattern. The model's ability to distinguish between BRCA1/2 mutation carriers and noncarriers might, in the context of a positive family history of breast and/or ovarian cancer, serve as an indicator to consider formal genetic risk assessment in persons who have not been previously tested. Integration of breast imaging data with family history and breast tumor markers could be formally assessed by estimating the added value of our image-based probability score to existing statistical models that are used to predict BRCA1/2 mutations [58]. Although mathematical and statistical concepts involved in generating the RTA classifier are complex, a great deal of work has already been done relative to the details of this methodology. Should the RTA classifier be validated clinically, this algorithm is amenable to a user-friendly implementation. The current data do not support these clinical applications at the present time, but they provide a solid basis for extending this novel research into larger, more rigorously-designed studies utilizing digital imaging modalities. Our findings also serve as a reminder of the importance of keeping an open mind relative to novel applications of old technologies. This valueadded strategy may improve the cost-benefit ratios of tried, true and readily available clinical tests, without the development costs associated with an entirely new technology. Additional files Additional file 1: Table S1. Intraclass correlation coefficients (ICC) for masked reliability assessment of Computer-extracted features (n = 91 pairs). Table S2. Correlations between selected computer-extracted features (n = 237 women). Table S3. Sensitivity analyses of the ability of the trained classifier to distinguish between BRCA1/2 mutation carriers and non-carriers in age-matched datasets. Figure S1. Histogram of the number of times that each feature was selected in the 177 leave-one-caseout stepwise feature selection using linear discriminant analysis of the training dataset. Competing interests ML Giger is a stockholder in R2 Technology/Hologic and shareholder in Quantitative Insights, and receives royalties from Hologic, GE Medical Systems, MEDIAN Technologies, Riverain Medical, Mitsubishi, and Toshiba. It is the University of Chicago Conflict of Interest Policy that investigators disclose publicly actual or potential significant financial interest that would reasonably appear to be directly and significantly affected by the research activities. Authors' contributions JTL, SAP, and MH Greene were responsible for the conduct and oversight of the NCI Breast Imaging Study. SAP, JE-W, and PWS were responsible for the conduct and oversight of the NCI/NNMC Susceptibility to Breast Cancer Study. GLG, HL, CKC, LL, CG, CEG, KN, KAC, and MLG additionally participated in the acquisition of data. GLG, HL, JTL, MH Greene, PLM, OIO, MH Gail, and MLG were involved in the analytic concept and design. GLG, HL, JTL, MH Greene, MH Gail, and MLG contributed to the statistical analyses and participated in manuscript preparation. GLG, HL, JTL, MH Greene, CKC, LL, SAP, JEW, PWS, CG, PLM, CEG, KN, KAC, OIO, MH Gail, and MLG participated in the interpretation of data for the work and critical revision of the manuscript for important intellectual content. 
All authors read and approved the final manuscript. All authors agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
8,583.4
2014-08-01T00:00:00.000
[ "Computer Science" ]
Controlling and Mitigating Targeted Socio-Economic Attacks. The transformation of social media has paved the way to express one's views, ideas, and opinions in an effective and lucid manner, which has resulted in its increased popularity. However, this socio-technological revolution has both pros and cons. It may be misused through planned and targeted attacks which often have the potential for massive economic effects. This paper articulates the negative aspects, especially how social media is being misused for self-serving ends. Spammers may defame a product to achieve their goal of earning more profit by decreasing the competing effect of their opponents. This paper discusses, analyzes and proposes two novel techniques by which one can either reduce or completely abolish these types of socio-economic attacks. Introduction Social media has been an ever-expanding realm for the last decade and has taken technological advancement to its pinnacle. Social media helps businesses in a variety of ways, especially in promotion, which is more economically viable and effective than traditional ways of promotion. Table 1 lists the percentage of B2B marketers who use various social media sites to distribute their content. The increasing popularity of social networking sites such as Twitter, Facebook and LinkedIn has attracted a large number of bloggers, content writers and article creators [1]. Social media has removed communication and interaction barriers and bridged the gap among people. Another positive aspect of social media is uniting a large number of people on a huge platform, which is necessary to induce positivity in society. On the other hand, it has many bad and ugly impacts [3]. As stated above, our aim is to control or totally mitigate false content (uploading fake videos which have no authentication, posting vulgar images), making the platform more trustworthy and reliable than before. Among the bad and ugly aspects is that some spammers forge multiple identities (also called Sybils) in order to harm the users of the media [4]. This is due to the fact that no mechanism for authentication is provided when a video or picture that addresses an issue of public interest gets uploaded. One can easily post false and vulgar content and raise sensitive issues which may damage the goodwill of a product, as it is just a matter of creating a fake identity and uploading a video or a morphed photograph. Careful filtration of objectionable or adulterated content is necessary, because it is high time to control this abuse, which is widely prevalent in society. This paper provides approaches to control it at various levels, starting from the very root level. The work concentrates on false content detection, false content tolerance and vulgarity issues. The aim is not to undermine the great contributions that social media has made to social progress and technological advancement, but rather to make it more trustworthy and reliable and to transform it into a better facilitative tool which supports social cohesion and benign societal relations by abolishing such false content. Related Work Content-based filtering in online social networks gives good results in the case of text or information. Content-based filtering can be applied concurrently, at the same time the text is being uploaded [10] [11]. But this is not the case with videos or photographs.
Millions of videos are uploaded daily, and content filtration of all of them is not possible. Moreover, the technique of content-based filtration is context dependent. Little prior work is available in this context of socio-economic attacks. Some known instances are the Maggi incident and the many messages claiming that soft drinks were contaminated with AIDS-infected blood. Such incidents clearly elucidate some of the ugly aspects of social media. Ying-Chiang Cho addressed various negative aspects of social media, such as 1) cyber bullying, 2) the role of social media in the organization of negative social events such as the 2011 UK riots, and 3) social-media-assisted infidelity and promiscuity [3]. He discussed various instances of the misuse of social media in different public domains. One of them is cyber bullying, which describes the situation of a child or a teenager who is harassed, humiliated, embarrassed, threatened or tormented using digital technology. Cyber bullying includes sending mean messages or threats, spreading rumors, posting hurtful or threatening posts, sexting (circulating sexually suggestive pictures or messages about a person) and so forth [5] [6]. He articulated the ugly side of social media with various examples, including the following: a 13-year-old schoolboy, Ryan Halligan, took his life because of cyber bullying; a 15-year-old girl, Phoebe Prince, hanged herself because of threatening messages and being called names at school. There are still many instances which have not come under the limelight in society. His work addressed these issues very well but provided no means of abating or eradicating this serious threat from society. His work also lacks any discussion of socio-economic attacks [3]. R. Gandhi et al. addressed the economic issues related to security [8]. Their work paved the way for the future extension of such critical issues in the context of the economy, such as damaging the reputation of perishable goods. The present work provides solutions to this critical issue at various stages: abating it at the very rudimentary level and tolerating it at the middle and peak stages. Posting fake videos and photographs may damage the goodwill of a product to a major extent. We pay special attention to these types of offensive videos and photographs, which publicize objectionable and fake content that has no source of authentication in itself. Strategies We propose two techniques, namely 1. FALSE CONTENT TOLERANCE and 2. FALSE CONTENT PREVENTION. To illustrate these techniques let us consider the following scenario. Say a plate contains five types of fruits, namely apples, oranges, mangoes, bananas and grapes, that constitute the daily supplements of an individual in the country, India. The usual cost of apples and grapes is higher than the others, so we can categorize these as costly fruits. The cost of mangoes and oranges is greater than that of bananas but cheaper than apples and grapes, so these can be categorized as medium-cost fruits. The cost of bananas is far cheaper than the others, so this fruit can be categorized as a low-cost fruit. India has a large proportion of average- and low-salaried people. This implies that an average-salaried person generally resorts to buying or ordering either medium-cost or low-cost fruits. So the demand for medium- and low-cost fruits is higher, and among these, the demand for the low-cost fruit dominates.
Without loss of generality, the restaurant managing personnel will stock a greater quantity of bananas, a medium quantity of mangoes and oranges, and a lower quantity of apples and grapes. Suppose the managing staff of a mango production unit wishes to raise the demand for mangoes in the market so as to increase the net profit of the production unit. To achieve this, the unit plans a scheme to reduce the popularity of the other fruits in the market. The competitors in this scenario are apples, grapes, oranges and bananas. As the cost of apples and grapes is higher, the competing effect of these fruits can be neglected. So the real competitors are oranges and bananas, of which the competing effect of bananas is higher than that of oranges. Hence they would like to target the sales of bananas and oranges. The plot is as follows: the production unit creates a video which tarnishes the popularity of the target fruits, oranges and bananas. In the video, they may use all types of defamatory content claiming that eating these fruits will spoil the health of the common mass and will cause side effects in the future. The video may also falsely claim that it has been approved by well-known, reputed doctors or health societies. Further, as present social media does not provide any authentication for the posting of these types of videos, the video may go viral on social media, undermining the perceived genuineness of the target fruits, oranges and bananas, and thus achieving the goal of the production unit. In due course the video becomes popular in all sections of society. This may not have any drastic effect on the consumption of that particular fruit if a single individual is considered, but as a whole it may cause a serious decline in the consumption of the target fruits and therefore also reduce profits. This may have a drastic impact on the sales of these target fruits. Everyone may start to pick a mango instead of taking an orange or a banana. Gradually the demand for mangoes will sharply increase and the market price of mangoes will soar. Thereby the target of the mango production unit is achieved easily, just by posting a fake video which has no authentication at all. The same thing can be done by many adversaries to spoil the goodwill of their opponents in one way or another. This is a serious issue which needs urgent consideration. To cater to the need of addressing such issues, we propose techniques to reduce or possibly eliminate the effect of this false content. The FALSE CONTENT TOLERANCE approach aims at minimizing fake posts by associating the information of the user with the post he uploads. In this strategy, the social media platform allows all types of videos to be posted. A video may be seen by an individual who then starts sharing it. If a video is shared, it should be shared along with the URL of the source ID (the account that posted it for the first time). If this type of control mechanism is implemented, these false content videos may decrease appreciably, as the source identity can be known easily from the URL. So spammers may fear that their identity can be easily traced. For example, instances of false content promotion or defamation of rival products, administered through the uploading and sharing of videos and other media, are witnessed on social networking sites like Facebook, Twitter, etc. on a routine basis.
Now, if it is made mandatory to reveal one's source ID, and if this detail is displayed publicly along with the given video or other media, then the culprit might become apprehensive about being publicly shamed or prosecuted under the law, knowing that he can now be traced to a certain point. So it is safe to conclude that a general disinclination towards uploading such false videos and other media may be evident after implementation of this technique. It requires very few changes in the existing framework and present architecture of social media, and therefore only minimal effort to implement. But users can still create Sybil accounts (fake IDs) and carry out this type of unethical act to fulfill their goal of tarnishing a product. Hence the problem still persists, and it can be resolved by using the FALSE CONTENT PREVENTION technique. The second technique is illustrated as follows: when a video is posted, the social media authorities ask for the user's telephone number as an authentication mechanism. An OTP mechanism can be used to verify the user's telephone number. If this type of authentication is completed successfully, then the video is called a certified video. The user's telephone number will not be shared and will be kept safe and secure by the social media authorities. If the user refuses to reveal his identity, the social media platform still allows the video to be posted, but such videos are called uncertified videos. They can still be shared, but they lose their credibility. This ensures that a video with fake content may not become popular. An additional change in the settings, giving the option to show only certified videos, helps to reduce the effect very effectively. This requires more effort than the tolerance technique but ensures that fake content is not encouraged in any manner. Analysis The tolerance strategy, though it requires minimal changes in the existing architecture and the present framework, is not a fully viable solution, because it is just a matter of a few minutes to create one fake ID; such fake IDs persist in all types of social networks. Although the prevention strategy requires more effort and time, it guarantees that the problem is addressed at the very root level. The effort involves changing the settings architecture, an overhead of time and effort during authentication, and also verifying whether the video is certified or not at the time of sharing. This technique has an authentication mechanism by which the user can be tracked easily by means of a telephone number which is re-verified using an OTP (One-Time Password), so the user cannot simply escape by giving a false telephone number. As only certified videos can be shared freely under this technique, it has the indirect effect of gradually diminishing this sort of misuse in all contexts. If an annotation of 'recommended for most users' is provided along with the option to show only certified videos, then most users will opt for it. Gradually the option becomes popular and spreads from one user to another, resembling a chain effect. Consequently, only certified videos get posted and only those videos are shared. Just as prevention is better than cure, the prevention strategy stated above is better than the tolerance strategy.
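A minimal sketch of the prevention flow described above (OTP verification at upload, a certified flag stored with the video, and a viewer-side option to show only certified videos) is given below. The class, its methods, and the in-memory storage are purely illustrative assumptions, not an existing platform API; in practice the OTP would be delivered by SMS and the phone number stored securely by the platform.

```python
import random
import hashlib

class VideoService:
    """Toy model of the FALSE CONTENT PREVENTION flow."""

    def __init__(self):
        self.videos = {}          # video_id -> {"owner": ..., "certified": bool}
        self._pending_otp = {}    # video_id -> (hashed OTP, phone)

    def upload(self, video_id, owner, phone=None):
        self.videos[video_id] = {"owner": owner, "certified": False}
        if phone is None:
            return None                            # posted as an uncertified video
        otp = f"{random.randint(0, 999999):06d}"   # in practice sent to the phone by SMS
        self._pending_otp[video_id] = (hashlib.sha256(otp.encode()).hexdigest(), phone)
        return otp                                 # returned here only for the demo

    def verify(self, video_id, otp_entered):
        digest, _phone = self._pending_otp.pop(video_id, (None, None))
        ok = digest is not None and hashlib.sha256(otp_entered.encode()).hexdigest() == digest
        if ok:
            self.videos[video_id]["certified"] = True   # number itself is never displayed
        return ok

    def feed(self, certified_only=True):
        """Viewer-side setting: 'show only certified videos'."""
        return [v for v, meta in self.videos.items()
                if meta["certified"] or not certified_only]

# Demo
svc = VideoService()
otp = svc.upload("v1", owner="producer_a", phone="+91-0000000000")  # seeks certification
svc.upload("v2", owner="anonymous")                                 # stays uncertified
svc.verify("v1", otp)
print(svc.feed())        # ['v1']  -> only the certified video is surfaced by default
print(svc.feed(False))   # ['v1', 'v2']
```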
Conclusion Careful and selective content filtration is essential to stop socio-economic backstabbing, which may have a serious business effect [7] and [8]. Our proposed methods work well on all platforms. The technique of false content prevention is more viable and reliable than false content tolerance, though it requires more effort. Validation of fake videos (if reported by an organization or a company to the social media authorities) and posting new, certified videos to counter the false attacks (counter videos) address the problem in an effective way. Further, this approach can also be applied to content such as textual posts which are abusive and unethical. However, the proposed techniques suffer from certain limitations: FALSE CONTENT TOLERANCE can be defeated by using Sybil accounts, while the prevention approach, although an improvement over the tolerance strategy, requires a certain overhead from an implementation point of view. These techniques can be improved in the future through real-time implementation and proper feedback integration. In addition, rating videos based on their authenticity may help to make the platform more trustworthy.
3,467.8
2016-09-13T00:00:00.000
[ "Computer Science" ]
Thermodynamics of hairy black holes in Lovelock gravity We perform a thorough study of the thermodynamic properties of a class of Lovelock black holes with conformal scalar hair arising from coupling of a real scalar field to the dimensionally extended Euler densities. We study the linearized equations of motion of the theory and describe constraints under which the theory is free from ghosts/tachyons. We then consider, within the context of black hole chemistry, the thermodynamics of the hairy black holes in the Gauss-Bonnet and cubic Lovelock theories. We clarify the connection between isolated critical points and thermodynamic singularities, finding a one parameter family of these critical points which occur for well-defined thermodynamic parameters. We also report on a number of novel results, including `virtual triple points' and the first example of a `$\lambda$-line'---a line of second order phase transitions---in black hole thermodynamics. Introduction The study of higher curvature corrections to the Einstein-Hilbert action is an active area of study motivated primarily by attempts to quantize the gravitational field. While general relativity is non-renormalizable as a quantum field theory, the addition of higher derivative terms to the action can lead to a power-counting renormalizable theory while having negligible influence in the low energy domain [1]. Higher curvature corrections are also present in string theory where the Gauss-Bonnet term appears in the low energy effective action [2]. Within the context of the AdS/CFT correspondence, higher curvature corrections appear as 1/N c corrections within the dual CFT or, alternatively, as new couplings between operators in the dual CFT yielding a broader universality class of dual CFTs [3][4][5]. While well-motivated, higher curvature gravities bring a number of difficulties and potential pathologies along with them, making their investigation a non-trivial undertaking. For example, the resulting equations of motion will generically feature derivatives of fourth order or higher, and the linearized equations of motion for a graviton perturbation often reveal that the graviton is a ghost. A number of these issues are alleviated in Lovelock gravity [6]. Lovelock gravity is the most general torsionless theory of gravity for which the field equations are second order and is the natural generalization of Einstein gravity to higher dimensions. The essential idea is to augment the Einstein-Hilbert action with the dimensionally continued Euler densities. The k th term is then either topological or vanishes identically below the critical dimension d = 2k + 1 where it becomes gravitationally nontrivial. The theory is ghost-free for a Minkowski vacuum [7] and also in other maximally symmetric backgrounds provided the coupling constants are constrained. Lovelock gravity thus provides a natural testbed for exploring the effects of higher curvature terms on gravitational physics. Recently, Oliva and Ray have shown that it is possible to conformally couple a scalar field to the Lovelock terms while maintaining second order field equations for both the metric and the scalar field [8]. In subsequent work, these authors, along with collaborators, have demonstrated that this theory admits black hole solutions where the scalar field is regular everywhere outside of the horizon and the back-reaction of the scalar field onto the metric is captured analytically [9][10][11]. 
This work provided the first example of black holes with conformal scalar hair in d > 4 where no-go results had been reported previously [12]. The obtained solutions are valid for positive, negative and vanishing cosmological constant; however, the AdS case is of especial interest due to the role scalar hair plays in holographic superconductors [13,14]. In the present work, our primary concern is the thermodynamics of the AdS hairy black hole solutions of this theory within the context of black hole chemistry. In this framework the cosmological constant is promoted to a thermodynamic parameter [15][16][17][18] in the first law of black hole mechanics, a result supported by geometric arguments [19,20]. One of the major results to follow was the discovery of critical behaviour for AdS black holes similar to the Hawking-Page transition [21], but analogous to that seen in everyday thermodynamic systems. For example, in [22] it was shown that the charged Schwarzschild-AdS black hole undergoes a small/large black hole phase transition with the phase diagram and critical exponents identical to those of the van der Waals liquid/gas system. A cascade of subsequent work established further examples of van der Waals behaviour, triple points, and reentrant phase transitions for AdS black holes [23][24][25][26][27][28][29], developed entropy inequalities for AdS black holes [30,31], and discussed the notion of a holographic heat engine [32]. For a broad class of metrics these phenomena can be understood in a more general framework [33,34]. However in the particular case of higher curvature gravity, this framework is not adequate: not only are many of these phenomena observed [26,28,29,[35][36][37][38][39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54], but new behaviour such as multiple reentrant phase transitions [29] and isolated critical points [29,42,47] are seen, the latter corresponding to critical exponents which differ from the mean field theory values. Isolated critical points have so far only been observed in cubic (and higher) theories of gravity with finely tuned coupling constants [42]. Despite the growing literature on the critical behaviour of AdS black holes, relatively little investigation has been carried out for theories coupled to scalar fields. Those cases which have been studied report van der Waals behaviour [55][56][57][58]. Previous studies of black hole thermodynamics within the conformal coupling model of Oliva and Ray have focused primarily on the case where the gravitational sector consists of only the Einstein-Hilbert term [9,10,59]. The resulting black holes have been shown to exhibit Hawking-Page type transitions [9,10] along with reentrant phase transitions and van der Waals behaviour [59]. More recent work has focused on the determination of boundary terms and the evaluation of the Euclidean action [60] and enforcing causality constraints on the scalar field coupling inherited from the AdS/CFT [61]. In this work we aim to fill gaps in the existing literature by considering the thermodynamics of these hairy black holes in both Gauss-Bonnet and cubic Lovelock gravity and addressing the issue of vacuum stability. Our paper is organized as follows. In Section 2 we review the conformal coupling model of Oliva and Ray, derive the resulting charged black hole solutions for general Lovelock gravity, and obtain the thermodynamic quantities which satisfy the first law of thermodynamics. 
In Section 3 we consider the linearized field equations about a maximally symmetric background and derive the constraints on the coupling constants which ensure the graviton is neither a ghost nor a tachyon. In Section 4 we specialize to the case of Gauss-Bonnet gravity, where we study the criticality of hairy black holes in 5 and 6 dimensions with and without electric charge. In Section 5 we consider the criticality of hairy black holes in cubic Lovelock gravity in 7 and 8 dimensions with and without electric charge. In each case we observe a rich structure of critical phenomena, including novel examples of "virtual triple points" and pockets of local thermodynamic stability. In Section 6 we study the case of cubic Lovelock gravity. We find that the hair gives rise to a one-parameter family of isolated critical points which occur under much more general conditions than in previous work, clarifying the relationship between isolated critical points and thermodynamically singular points. Finally, we show that under certain circumstances the hairy black holes in cubic Lovelock gravity exhibit λ-lines [62]. These lines of second order phase transitions are the first such examples in black hole thermodynamics, and due to an interesting connection with superfluid $^4$He, we have termed the black holes with this property 'superfluid black holes'. 
Exact Solution & Thermodynamics 
We consider a theory containing a Maxwell field and a real scalar field conformally coupled to gravity through a non-minimal coupling between the scalar field and the dimensionally extended Euler densities. The theory is conveniently written in terms of the rank four tensor $S^{\gamma\delta}{}_{\mu\nu}$ [8], which transforms homogeneously under the conformal transformation $g_{\mu\nu} \to \Omega^2 g_{\mu\nu}$ and $\phi \to \Omega^{-1}\phi$ as $S^{\gamma\delta}{}_{\mu\nu} \to \Omega^{-4} S^{\gamma\delta}{}_{\mu\nu}$. The action for the theory in $d$ spacetime dimensions is given by with $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$, the Lagrangian densities given by (the brackets denoting the integer part of $(d-1)/2$), with and $\delta^{(k)}$ is the generalized Kronecker tensor. One can obtain the equations of motion for the gravitational field in the standard way, and they can be conveniently written in terms of the generalized Einstein tensor,
$$\frac{a_k}{2^{k+1}}\,\delta^{\nu\lambda_1\cdots\lambda_{2k}}_{\mu\rho_1\cdots\rho_{2k}}\,R^{\rho_1\rho_2}{}_{\lambda_1\lambda_2}\cdots R^{\rho_{2k-1}\rho_{2k}}{}_{\lambda_{2k-1}\lambda_{2k}}\,. \qquad (2.7)$$
The theory has stress-energy associated with both the scalar and Maxwell fields, with the former given by
$$(T_1)^{\nu}{}_{\mu} = \sum_{k=0}^{k_{\max}} \frac{b_k}{2^{k+1}}\,\phi^{d-4k}\,\delta^{\nu\lambda_1\cdots\lambda_{2k}}_{\mu\rho_1\cdots\rho_{2k}}\,S^{\rho_1\rho_2}{}_{\lambda_1\lambda_2}\cdots S^{\rho_{2k-1}\rho_{2k}}{}_{\lambda_{2k-1}\lambda_{2k}} \qquad (2.8)$$
and the latter, (2.9) The gravitational field equations then read, (2.10) By varying the action with respect to the scalar field, one can show that the scalar field must obey the following equation of motion: Note that the above equation of motion ensures that the trace of the stress-energy tensor of the scalar field vanishes on shell, as expected for this conformally invariant theory. Similarly, varying the action with respect to the Maxwell gauge field $A_\mu$, we obtain the Maxwell equations, $\nabla_\mu F^{\mu\nu} = 0$. (2.12) Here we are interested in spherically symmetric topological black hole solutions to this theory with a metric of the form where $d\Sigma^2_{(\sigma)\,d-2}$ represents the line element on a hypersurface of constant scalar curvature equal to $(d-2)(d-3)\sigma$, corresponding to flat, spherical and hyperbolic horizon topologies for $\sigma = 0, +1, -1$, respectively. 
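The explicit metric ansatz is not reproduced in the text above; a form consistent with the description (a single metric function $f(r)$ and the transverse line element $d\Sigma^2_{(\sigma)\,d-2}$), and standard for topological black holes of this type, would be, under that assumption,
$$ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2\, d\Sigma^2_{(\sigma)\,d-2}\,.$$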
We denote the volume of this submanifold as Σ (σ) d−2 , which for the case of σ = +1 reduces to the volume of a sphere, (2.14) We obtain a solution to the field equations provided f solves the polynomial equation, while the scalar field is given by where, in order to satisfy the equations of motion, N must satisfy the constraints Note that in the preceding equations there is only a single unknown, N . Thus, one of these equations serves as a constraint on the allowed coupling constants. The electromagnetic field strength is given by where Q is the electric charge and is related to the quantity e used in [10,11,59] via The scalar hair enters into eq. (2.15) through the term H, which is given by, It is easy to see for the planar case (σ = 0) that H = 0, i.e. that planar solutions have no hair. The polynomial (2.15) can be put into a more convenient form with the following rescaling Having the solution in hand, we now turn to a discussion of the thermodynamics. We construct the first law in extended thermodynamic phase space in analogy with [20] where we treat all dimensionful couplings as thermodynamic quantities. For the mass, temperature and electric potential, straightforward calculation yields Employing Wald's method for determining the entropy [63,64], we compute where L is the Lagrangian density, γ h is the determinant of the induced metric on the horizon andε ab is the binormal to the horizon. For the solution (2.22) we find The part of this expression explicitly containing products of the Riemann tensor leads to the standard Lovelock black hole entropy, while the remaining part represents a contribution to the black hole entropy due to the scalar hair. Explicitly, the full black hole entropy is given by In the case where b k = 0 for k > 2, the hairy contribution to the entropy takes a particularly nice form Since the fall off is the same to all orders in the hairy terms, and the contribution to the entropy is always just an additive constant, in this paper we will employ this latter form of the entropy. Since the hairy contribution to the entropy is independent of r + , we shall employ (2.28) henceforth. Doing so allows for calculational convenience and one does not miss out on any of the new physics that the extra b k -dependent terms in (2.27) contribute. Employing the extended first law we find it is satisfied provided (2.30) and the Smarr relation which follows from scaling also holds. We point out that in the situation when black hole solutions are considered, the couplings b k are not all independent, but are constrained by Eq. (2.17). As a result, in these cases, one must keep in mind that the variations of b k in the first law above are not all independent. Henceforth we shall set α 1 = 1 so that we recover general relativity in the limit α k → 0 for k > 1 and we will also set G = 1. Equations of motion and ghosts In this paper we will report on hairy black hole solutions in Lovelock gravity sourced by non-minimal conformal scalar hair. The black hole solutions produced by this theory are remarkably simple, providing an excellent arena for investigating the effects of scalar hair on higher curvature black holes. As mentioned in the introduction, higher curvature theories often suffer from having unstable vacua, i.e. the graviton is a ghost in maximally symmetric backgrounds. The absence of ghosts is a necessary condition for a stable ground state and thus for a sensible theory. 
Here we address this problem for the hairy black holes studied in this work, where the non-minimal coupling leads to modifications of the standard Lovelock result. Our approach is to consider the linearized theory for a constant scalar field Φ in a maximally symmetric AdS background. Naively, one might expect trivial results for the following reason: For a constant scalar field, the action reduces to Lovelock gravity with redefined couplings. Thus, one might expect that the conditions for a ghost free vacuum would reduce to the standard Lovelock conditions but for these redefined couplings. However, this is not the case since the equations of motion for the scalar field must also be respected, leading to a non-trivial result. The main results of this section are the following: The theory is ghost free when expanded about a background in which the scalar field vanishes. However, we point out that when the scalar field takes on a constant value, there will generically be ghosts for the couplings corresponding to spherical black holes. We report first on the case of Einstein gravity non-minimally coupled to a scalar field, before moving on to a detailed analysis of cubic Lovelock gravity. Einstein case As a warm up exercise, let us consider the theory that contains only the Ricci scalar and cosmological constant in the gravitational sector, and scalar hair coupled up to the Gauss-Bonnet term in the matter sector. We wish to expand the equations of motion about a ground state of the theory. To this end, we explore the situation where the scalar field takes on a constant value (denoted by Φ) and the spacetime is asymptotically AdS with curvature radius L. That is, In this case, the equations of motion for the scalar field and the gravitational field are given (respectively) by The second equation, given the first, is nothing more than the statement indicating that, for this configuration, the cosmological constant sets the curvature for the AdS space, as we might well expect. Taken together, Eqs. (3.2) and (3.4) comprise the background about which we shall expand. We consider perturbations to our background metric of the form g ab = g [0] ab + h ab and work to first order in this perturbation (here g [0] ab represents the maximally symmetric background metric which solves the field equations). Without fixing to any particular gauge, the linearized equations of motion take the form, Making use of the scalar field equations of motion, this equation can be further simplified to Thus, the linearized field equations are of the same form as Einstein gravity, and we need only check the sign of the pre-factor to ensure the absence of ghosts. In the limit Φ = 0, these linearized equations reduce to the expected form for perturbations of the Einstein equation about an AdS background, that is, the theory is ghost-free in this limit. We wish to use these linearized equations to place constraints on the coupling constants of the scalar field and we would like these constraints to be relevant, not only in the trivial case of constant scalar field, but also in the case considered in this work where the coupling constants of the scalar field obey certain fixed relationships set by Eq. (2.17). That is, we consider the linearized equations subject to Eqs. (3.2) and also which is a consequence of Eqs. (2.17) for this setup. To ensure that the graviton is not a ghost we require In this case we can make significant progress analytically. We find that condition where , λ = ±1. 
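The sign condition being invoked can be stated schematically (the symbol $\mathcal{E}$ is our shorthand, not the paper's notation): writing the linearized equations as an overall prefactor multiplying the ordinary linearized Einstein tensor,
$$\mathcal{E}(\Phi, L)\; G^{(1)}_{\mu\nu}[h] = 8\pi G\, T_{\mu\nu}\,,$$
the graviton fails to be a ghost provided $\mathcal{E}(\Phi, L) > 0$, which is the positivity constraint on the prefactor referred to in the surrounding discussion.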
Since the dimension dependent part of the square root is always positive for d ≥ 4, we can see that b 0 and b 1 are required to have the same sign for Φ to be real. Since here we assume Φ = 0, Eq. (3.8) can be written as For even d, we have Φ d−4 > 0 so we can ignore the multiplicative prefactor Φ d−4 . We see that if b 0 > 0, then the second term is positive whenever we choose λ = 1 and if b 0 < 0, then we may choose λ = −1 to ensure the second term is still positive. Eq. (3.10) can be satisfied independent of our choice for . For odd d, note that if we choose = 1 (so that the first term is positive), then b 0 < 0 is allowed for λ = −1 and b 0 > 0 is allowed for λ = 1. The key point here is that there is always an appropriate choice of ( , λ) specifying Φ that allows the ghost constraint to be satisfied for any d ≥ 3. Therefore, we conclude that for any given choice of b 0 , b 1 such that b 0 b 1 > 0 we can always make an appropriate choice of ( , λ) for Φ in order to make the gravitational theory free from ghosts. Thus in the Einstein case, the theory is free from ghosts if expanded about an AdS background with the scalar field vanishing. If instead a constant scalar field is used, the only constraint for vacuum stability is that b 0 and b 1 have the same sign. Unfortunately, this restriction is in contradiction to what is required for the existence of spherical black holes, where b 0 b 1 < 0 is required. Gauss-Bonnet and Cubic Lovelock cases Having completed our study of this relatively simple case, we are now poised to consider higher order Lovelock terms in the gravitational sector. For concreteness (and for applicability in this work) we shall present the linearized equations for third order Lovelock gravity. We can then obtain the Gauss-Bonnet case by setting α 3 = 0. The first noteworthy change when considering Lovelock gravity is that the curvature scale of the AdS background is no longer simply the the length scale associated with the cosmological constant, but contains contributions from the higher curvature terms. Explicitly, the Riemann curvature of the AdS background is written as where L is the length scale associated with the cosmological constant and F ∞ represents the leading order behaviour of the metric function at large r, i.e. F ∞ is the leading order term of f (r)/(r 2 α 0 ) as r → ∞ and is required to be positive for AdS asymptotics. The particular value of F ∞ can be obtained using the fact that it solves the equation which is what one obtains from evaluating the field equations on this maximally symmetric background. We now repeat the calculations from earlier but for this more complicated theory. We find that the equations of motion for the (constant) scalar field reduces to the constraint, while the linearized gravitational field equations take the form, where we have included on the right-hand side a stress energy tensor which may arise from minimal coupling to other matter fields and have once again used the equations of motion of the scalar field. We then have the constraint, where Φ is a solution of Eq. (3.13) and we again enforce Eq. (3.7). After some algebra, we see that Eq. where , λ = ±1. This is identical to Eq. (3.9) for Einstein case except for the extra factor of F ∞ in the square root. Therefore in both the Gauss-Bonnet and cubic Lovelock cases we see that b 0 , b 1 , b 2 also must have the same signs. 1 In these more complicated cases, it is difficult to make additional meaningful progress analytically. 
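For reference, the maximally symmetric background curvature used in this analysis, with the AdS scale set by $F_\infty$ as described above, takes the form (assuming the usual conventions for a maximally symmetric space)
$$R_{\mu\nu\rho\sigma} = -\frac{F_\infty}{L^2}\left(g_{\mu\rho}\,g_{\nu\sigma} - g_{\mu\sigma}\,g_{\nu\rho}\right), \qquad F_\infty > 0\,,$$
with $F_\infty \to 1$ recovering the pure Einstein background of the previous subsection.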
We have investigated these conditions under a number of circumstances and report the results here. In the case of Gauss-Bonnet gravity, the situation is quite simple and depends only on the ratio α 2 /L 2 . In particular, if α 2 /L 2 < 0, there will be one asymptotically AdS branch and one asymptotically dS branch; the AdS branch is free from ghosts provided b 0 and b 1 have the same sign. If 0 ≤ α 2 /L 2 ≤ 1/4, both branches are asymptotically AdS and in each case the vacuum will be free of ghosts instabilities provided only that b 0 and b 1 are of the same sign. For α 2 /L 2 > 1/4, there exist no solutions to the field equations with asymptotic regions; in thermodynamic language, the maximal pressure constraint is violated: in Gauss-Bonnet gravity this entire parameter range is unphysical. These results are true independent of the spacetime dimension d ≥ 5. For cubic Lovelock gravity there is the additional parameter α 3 . To simplify matters, we work with the dimensionless variables defined in Eq. (5.6) and additionally define, so that all of the expressions are dimensionless. In subsequent sections we will see that the existence of AdS asymptotics for all three Lovelock branches translates to a pressure constraint, p ∈ (p − , p + ). For (p, α) combinations that yield AdS asymptotics, we find that the Lovelock and Gauss-Bonnet branches are ghost free provided b 0 and b 1 have the same sign, though there are additional constraints on for the Einstein branch, as highlighted in Figure 1. For p ∈ (p − , p + ) only the Lovelock branch has well-defined asymptotics. As it turns out, if b 0 and b 1 have the same sign, then this branch will be free from ghosts. ). This is a tighter constraint than for the other branches where a sufficient condition to satisfy ghost-free condition is b 0 b 1 > 0. While this plot was constructed for d = 7, higher dimensions are qualitatively similar. Thus we have seen that in both Gauss-Bonnet and cubic Lovelock gravity with a scalar field conformally coupled to the first three Euler densities (i.e. up to the Gauss-Bonnet term), a necessary condition for the absence of ghosts is that b 0 b 1 > 0. Furthermore, this is also a sufficient condition in all cases except for the Einstein branch in the cubic Lovelock case. From this we can conclude that the hyperbolic black holes studied previously in [59] and those considered in this paper are free from ghost instabilities regardless of whether the scalar field vanishes or is taken to be a constant. This follows because, for σ = −1 it is always possible to solve the equations of motion for the scalar field with b 0 b 1 > 0. However, if one limits the coupling to the first three Euler densities, then the restrictions forced upon the b k couplings by a solution of the scalar field equations of motion (see Eq. (2.17)) require b 0 b 1 < 0 in the case of spherical symmetry. This contradicts the conditions required for the theory to be ghost free in a constant scalar field background. Therefore, for this setup, for the couplings required in the spherical case, the theory suffers from ghost instabilities if the scalar field is taken to be a non-vanishing constant in the background. This conclusion does not mean that the spherical black hole solutions are pathological. For example, if the constant scalar field were taken to be zero, then all of the above constraints could be simultaneously satisfied. 
Furthermore, the above analysis was for the very specific case where the scalar field couples only to the first three Euler densities, such as that considered in [9,10,59]. More generally, one could couple the scalar field to additional terms, e.g. the cubic Lovelock term (by keeping b 3 non-zero) or even, for example, to the quasi-topological term [5,65]. We now demonstrate that by coupling to the cubic Lovelock term the ghost conditions can be satisfied in the case of spherical symmetry. Consider the case where b 0 , b 1 , b 3 = 0 with b 2 = 0. While clearly not the most general case, this instance will allow us to illustrate the point most clearly without having to perform a full analysis of a four-dimensional parameter space. For these couplings, Eqs. (2.17) and (2.20) reduce to, with ε ∈ {−1, 0, +1}. The constant scalar field and ghost constraints are (respectively) (3.21) Considering the expression for N above, it is clear that for σ = +1 a sensible solution for the scalar field would require b 0 b 1 < 0. Exploring the constraints above we find that solutions of this type are certainly possible, though not for the entire (b 0 , b 1 ) parameter space. Figure 2 illustrates the situation for cubic Lovelock gravity in seven dimensions. For the case of Gauss-Bonnet gravity in d ≥ 7, non-zero b 0 , b 1 , b 2 , b 3 will cure the ghost problems in the spherical case for a general, constant scalar field background. Of course, since non-zero b 3 indicates coupling to the cubic Lovelock term, this approach is only valid in d ≥ 7. However, one could also couple to the cubic quasi-topological term [5,65] to achieve this in d = 5 Gauss-Bonnet gravity. It is not clear how one could alleviate the problem in d = 6 Gauss-Bonnet gravity for general constant scalar field backgrounds. In summary, we have shown that if b k = 0 for k ≥ 3, the hyperbolic solutions (with σ = −1) are free from ghost instabilities for all constant scalar field configurations. The spherical black hole solutions require coupling constants that give rise to an unstable vacuum for a generic constant scalar field background, a result which is true in any dimension d ≥ 5. However, for a vanishing scalar field background or if one allows additional b k to be non-zero, this will not be true in general, and sensible solutions exist. One may naturally wonder what effect this has on the thermodynamics of the spherical black holes. The overall effect of the scalar hair is to add a term to the equation of state which falls off as 1/v d and to add a constant shift to the entropy. This is true irrespective of the number of non-zero b k . Thus the qualitative results will be the same regardless of the number of b k s included, and only shifts in the precise values of h will occur. It would be interesting to consider more general perturbations about the black hole background to determine if these black holes are unstable in general. Gauss-Bonnet gravity In this section we discuss the effects of the conformal hair on the thermodynamics of black holes in Gauss-Bonnet gravity. Thermodynamic equations and constraints The first issue we address is the existence of asymptotically AdS regions. Due to the presence of the non-linear curvature, it is not a generic feature that asymptotically AdS regions exist. We consider the polynomial for the metric function. Assuming an asymptotic region exists for both solutions of the above equation it must be the case that We therefore restrict our focus to parameter values that satisfy this inequality. 
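Although the explicit polynomial is not written out above, its large-$r$ content can be sketched under a standard choice of normalization (ours, not necessarily the paper's): writing $f(r) \simeq F_\infty\, r^2/L^2$ at large $r$, the Gauss-Bonnet vacuum condition reduces to a quadratic,
$$\frac{\alpha_2}{L^2}\,F_\infty^{\,2} - F_\infty + 1 = 0 \quad\Longrightarrow\quad F_\infty = \frac{1 \pm \sqrt{1 - 4\alpha_2/L^2}}{2\,\alpha_2/L^2}\,,$$
so that real, asymptotically AdS branches exist only when $1 - 4\alpha_2/L^2 \ge 0$, consistent with the statement above that $\alpha_2/L^2 > 1/4$ leaves no solutions with asymptotic regions.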
We shall refer to the solution that reduces to that of Einstein gravity in the limit α 2 → 0 to be the 'Einstein branch' (denoted f E ) and the other solution to be the 'Gauss-Bonnet branch' (denoted f GB ). From Figure 3 we see that the presence of conformal hair is compatible with existence of AdS asymptotics and event horizons if the above inequality is satisfied. We shall employ the dimensionless quantities (v, t, q, p, h) in our thermodynamic analysis, where Condition (4.2) thus becomes a constraint on the pressure which must be satisfied for well defined asymptotics 2 . Furthermore, positivity of the entropy requires from Eq. (2.28) that assuming α 2 > 0. While for σ = 1 this is satisfied trivially provided h < 0, care is required otherwise. With the addition of the hair it is possible for the entropy to be negative for σ = 1 and we shall investigate if this results in any new and interesting thermodynamic behaviour. Solving the expression for the temperature in Eq. (2.23) for α 0 we obtain for the equation of state. 3 We also find for the Gibbs free energy. The state of the system will be that which minimizes g at fixed t, p, q and h. In what follows we shall specialize to five and six dimensions and perform a detailed study of the thermodynamics of these black holes. We have found that all of the interesting thermodynamic behaviour produced by the scalar hair is already present in five dimensions. That is, going to higher dimensions does not produce any novel phenomena. We consider both spherical and hyperbolic black holes in five dimensions, and include a short discussion of hyperbolic black holes in six dimensions. In the case of five dimensions, we require that to ensure the existence of AdS asymptotics (c.f. Eq. (4.2)). If this inequality is violated the nonlinear curvature becomes too strong and the spacetime becomes compact in the radial coordinate, as illustrated in Figure 3. From Eq. (4.5), the positive entropy condition for d = 5 is given by For spherical horizons (σ = 1) the inequality is trivially satisfied for h ≤ 0. For h > 0, the entropy is positive if v is not too small; that is, there is a lower bound on v depending on h > 0 for spherical black holes. Figure 4 shows the region of positive entropy as function of h. For hyperbolic horizons (σ = −1), the situation is slightly more complicated, and is shown graphically in Figure 4. The figure makes it possible to see the general result: large black holes with lots of hair have negative entropy and so are not physically allowed. In specific terms, for h < 0, there is an upper bound on the allowed volumes which depends on the specific value of h. For 0 < h < 6 √ 2/(5π) ≈ 0.540 the hyperbolic black hole volume is bounded from above and below. For h 0.540, the entropy is always negative and hyperbolic black holes of positive entropy cannot exist. The equation of state is given by and a critical point occurs when p = p(v) has an inflection point, i.e. The critical points satisfy Note that in the uncharged case with no hair, the compact radial region approaches zero as the mass tends to zero, hence there is no red curve in this plot. Bottom centre: h = 0.5, q = 0, p = 1.5p max . Bottom right: h = 0.5, q = 0.5, p = 1.5p max . For uncharged solutions, the scalar hair generally results in f ± → ±∞ as r → 0 while the behaviour of the metric function at infinity is unaffected. Since charge term has the highest fall-off rate, the presence of charge will result in f terminating at finite, nonzero x regardless of h and m. 
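The inflection-point condition for a critical point quoted above is the standard one; in the dimensionless variables $(p, v, t)$ it reads
$$\left(\frac{\partial p}{\partial v}\right)_{t} = 0\,, \qquad \left(\frac{\partial^2 p}{\partial v^2}\right)_{t} = 0\,, \qquad \text{evaluated at } (v_c, t_c)\,,$$
with the critical pressure then following from $p_c = p(v_c, t_c)$.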
Note that if the maximum pressure constraint is not satisfied, then the radial coordinate becomes compact and we do not have AdS asymptotics. and we see that the qualitative effect of the hair is to introduce new terms of odd powers in the volume v. As usual, planar black holes do not have any critical points and this is true for both Gauss-Bonnet and higher-order Lovelock gravity in arbitrary dimensions. Furthermore, as noted earlier, in the planar case the black holes cannot support hair. For these reasons we disregard the planar solutions in our analysis and henceforth assume σ = 0 for both the equation of state and Gibbs free energy 4 . In the following sections we will consider the critical behaviour for spherical and hyperbolic topologies separately. Right: For hyperbolic horizons (σ = −1). In each case, the green shaded region corresponds to positive entropy, while in the gray shaded region the entropy is negative. It is interesting to note that, in the hyperbolic case, positivity of entropy results in a maximum black hole volume for any given hair parameter, h. Note that, since Eq. (4.9) in independent of the electric charge, these plots are valid for any value of q. Spherical case Since analytic methods are not possible in our analysis due to the complexity of the Gibbs free energy and equation of state, we resort to graphical and numerical methods. In Figure 5, we plot the set of critical volumes v c as function of h and superimpose the condition for positive entropy. We see that in the uncharged (q = 0) case, there are no critical points for h 0.1240 while h < 0 admits at most one physical critical point. For h ∈ (0, 0.1240) we can have up to a maximum of two critical points which are not necessarily physical (for example p c or t c may be negative, or p c > p max ). It is not difficult to show that for q = 0, h < 0 we have exactly one physical critical point which displays standard Van der Waals' behaviour, as shown in Figure 6. For 0 < h 0.1240 there can be up to two physical critical points. Already in the uncharged case we can see a new thermodynamic phenomenon in d = 5 spherical black holes, namely a reentrant phase transition 5 (RPT), as shown in Figure 7. This type of phase transition, though now common to observe in black hole systems, has not been found previously in 5 dimensional, spherical black holes in Gauss-Bonnet gravity. Since critical points exist only up to h ≈ 0.1240, this exhausts all possibilities for uncharged spherical hairy black holes. A more complete and organized classification of the possible thermodynamic behaviour for Gauss-Bonnet hairy black holes is given in Table 2. 5 A reentrant phase transition is said to occur if, through a monotonic variation of a thermodynamic parameter, the system is observed to change phase two or more times, with the final phase and initial phase being macroscopically identical. These transitions were first observed in the miscibility of nicotine/water mixtures as temperature is varied [66]. (not necessarily all physical) while for negative h, there is exactly one physical critical point which exhibits standard VdW behaviour. Center : Charged (q = 0) case. Depending on the value of q, as many as three critical points are possible. For the weakly charged case q ∈ (0.1414, 0.3073), there is a small interval (h 2 , h * ) with standard VdW behaviour, with h * given by the intersection of the zero-entropy curve (blue) and critical volume curves. 
In both of these plots, the region below the blue, diagonal line corresponds to negative entropy. Right: a plot of (q, h) parameter space showing the number of possible physical critical points, taking into account positive entropy and maximum pressure constraints. We ignore h < 0 in this plot since it always has just one critical point with VdW character. Here, black, blue, red and green regions represent zero, one, two and three physical critical points, respectively. Note that the plot does not exclude critical points associated with, for example, a 'cusp' in the Gibbs free energy, for which no phase transitions occur. showing the usual VdW oscillation. The red curve represents critical isotherm at t = t c . The blue and black curves correspond to t > t c and t < t c , respectively. Centre: the g − t diagram. The black curve represents p < p c , the blue curve correspond to p > p c and the red curve is for p = p c . We observe standard swallowtail behaviour. Right: The p − t diagram, showing the coexistence line of the first-order phase transition terminating at a critical point (shown here as a red dot). These plots are analogous to typical behaviour of the liquid-gas phase transition of the Van der Waals' fluid. There is a reentrant phase transition corresponding to the zeroth-order phase transition at t = t 0 followed by a first-order phase transition at the intersection with the swallowtail structure. Right: the corresponding p − t phase diagram. The red curve represents the zeroth-order phase transition, while the purple curve marks the boundary where no black hole solution exists. The black curve is the first-order coexistence curve, which terminates at the critical point, marked here by a blue circle. For the charged case, we observe (c.f. Figure 5) that critical points can exist, in principle, for any h; however, for any given q, there will always be an h * such that for h > h * all of the critical volumes correspond to negative entropy black holes. For h < 0, the critical behaviour is analogous to that of the standard Van der Waals' fluid. If the charge is sufficiently small i.e. q 0.3073 then there is a small range of h where up to three critical points are possible. For q ∈ (0.1414, 0.3073), there is a small range of positive h 2 < h < h * (c.f. Figure 5) with standard VdW character, noting that h * is determined by both the positive entropy condition and charge, given by the intersection of the zero-entropy curve and the critical volume curves. Since there can be up to three physical critical points for some range of q and h, we can expect some new thermodynamic behaviour which cannot be found for d = 5 Gauss-Bonnet spherical black holes in the absence of hair. It turns out, this expectation is justified; the complete list of possible thermodynamic behaviour is presented in Table 2, and we now proceed to illustrate some of these thermodynamic phenomena. Overall, we find that the presence of conformal hair has allowed a plethora of new thermodynamic behaviour for d = 5 spherical Gauss-Bonnet charged black holes including: reentrant phase transitions (RPT), triple points (TP), and a new behaviour that we refer to as a virtual triple point (VTP). Figure 8 shows a particular case of RPT, where for some range of pressures there is a large/small/large black hole reentrant phase transition. Figure 9 shows a particular example illustrating a triple point, where the triple point occurs The p−t phase diagram. 
The red curve denotes (p, t) at which zeroth-order phase transition occurs, while the black line corresponds to a first order phase transition. The purple curve marks the border of the region admitting no physical (i.e. positive entropy) black hole solutions. It is clear that there is a large BH/small BH/large BH reentrant phase transition for 0.11 p ≤ p max . Note that in this case, none of the critical points are physical-while the first order phase transition does terminate at a critical point, this occurs for p c > p max . at p triple ≈ 0.02408 for h = 0.064, q = 0.1. Instances of reentrant phase transitions in higher curvature gravity have been reported previously for cubic Lovelock gravity, while a triple point was previously found for electrically charged Gauss-Bonnet black holes in d = 6 [29]. Therefore the presence of conformal hair has given rise to genuinely new thermodynamic phenomena in five-dimensional Gauss-Bonnet gravity. In Figure 10 we show an example of a virtual triple point (VTP). This is a situation where one has a triple-point type phase diagram but one of the coexistence lines terminates exactly at the other coexistence line. In this scenario the only point in the phase diagram where three phases coexist is exactly at the triple point which is also a critical point. This is different from the usual triple point scenario where the coexistence line 'branches out' of another coexistence line before terminating, producing a region where 'intermediate-size black holes' can exist (cf. Figure 9). The Gibbs free energy plot in Figure 11 shows a locally thermodynamically stable branch (a 'small' swallowtail) which moves 'counter-clockwise' as pressure increases until it eventually disappears exactly at where the Gibbs-minimizing first-order phase transition occurs (red dot). In contrast, the usual triple point will have three Gibbs branches intersecting at the Gibbs-minimizing first-order phase transition as the small swallowtail shape does not vanish as it crosses that point (cf. centre plot of Loosely speaking, h acts as a tuning parameter which 'moves' the position of the critical points in the phase diagram, thus altering the overall phase behaviour of the black hole spacetime. We see that as h increases, starting from a virtual triple point, one of the coexistence curves extends upwards while the other shrinks giving rise to a triple point, and eventually producing another virtual triple point. These thermodynamic phenomena follow a general trend: for small values of the charge q, a triple point occurs for some h ∈ (h 1 , h 2 ) (where the particular values of h 1 , h 2 depend on q) and VTPs occur at precisely h = h 1 and h = h 2 . For h h 1 , h h 2 the system exhibits VdW type behaviour where there is a locally stable swallowtail structure of the type shown in Figure 11 that never globally minimizes the Gibbs free energy. Therefore in Figure 11. Double-swallowtail behaviour for spherical case in d = 5: Left: g-t plot for σ = 1, h = 0.06998, p = 0.018. There is a locally thermodynamically stable black hole branch at the smaller swallowtail structure, as the curvature of the locally minimizing Gibbs free energy branch indicates positive specific heat capacity. Centre: g-t plot for p = 0.024. As the pressure is increased, the smaller swallowtail 'moves counterclockwise' towards the red point where the global first-order phase transition occurs. Right: g-t plot for p = 0.027. Observe that the swallowtail has become very small. 
In general, as p is further increased, the smaller swallowtail will completely vanish at its critical point before merging with the first order phase transition (red dot). However, there are two precise values of h (h ≈ 0.0487682, 0.06998) at which the critical point of the smaller swallowtail coincides exactly with the red point where global first-order phase transition occurs. We call this a 'virtual triple point'. some sense, a VTP marks the point where a transition from VdW to TP behaviour (vice versa) takes place. Intuitively speaking, we can view a VTP in two ways: • As the limit where a second-order phase transition (of the non-minimal Gibbs swallowtail structure) coincides with first-order phase transition (of the physical swallowtail structure). • As the limit where a triple point is also a critical point. In the usual triple point phenomenon, the triple point is a bifurcation point of two coexistence curves; in the VTP case, this bifurcation point is also the endpoint of one of the coexistence curves. Figure 11, which plots the Gibbs free energy, is most naturally interpreted from the first viewpoint, while for Figure 10, which shows the phase diagrams, the second interpretation is more natural. Here we pause to remark on the nomenclature used in this paper. The term virtual triple point has appeared in a previous instance in the literature on this subject [27] where it was used to describe the situation where the coexistence lines of a zeroth and first order phase transition meet. In this article we refer to this latter phenomenon as a pseudo-triple point (PTP). We believe that the term virtual triple point is more appropriate for the phenomena highlighted in Figure 10 because, except for a single finely tuned value of h, the system exhibits a bonafide triple point. Furthermore, we distinguish this result from the isolated critical points discussed in [29,42] since in those cases two critical points merge to create the isolated critical point, which has non-standard critical exponents. Here we only ever have individual critical points which have standard, mean field theory critical exponents. Hyperbolic case For the hyperbolic case we only need to consider h < 6 √ 2/(5π) ≈ 0.540, otherwise the entropy is negative for any v. We also note from Figure 4 that for σ = −1 the thermodynamic volume always has an upper bound (and for h ∈ (0, 0.540) also a lower bound). The possible critical volumes are simpler than in the spherical case: for uncharged hyperbolic black holes no criticality is observed for h > 0 while at most one critical point is possible for h < 0, as illustrated in Figure 12. However, by enforcing the pressure and entropy constraints (S > 0, p < p max ), one can show that there are no physical critical points for any q and h. Nonetheless, there is still interesting thermodynamic behaviour, which we discuss below. As in the no-hair case, there is a thermodynamic singularity in hyperbolic black holes which can be identified by the presence of 'crossing isotherms' in the p − v diagram, as seen in Figure 13. Formally, a thermodynamic singularity occurs at the point where ∂p ∂t v=vs = 0, ∂p ∂v t=ts = 0 (4.14) This gives v s = √ 2 and we can compute the singular pressure p s which is given by Eq. (4.15) shows that the thermodynamic singularity can be physical for positive h, provided that q 2 < √ 2h so that p s < p max . There is also the further restriction 0 < h < 6 √ 2/(5π) so that entropy is positive. 
This is contrary to the no-hair case where p s ≥ p max with p s = p max only when q = 0 and the thermodynamic singularity occurs at negative entropy [29]. Therefore, in the no-hair case, the thermodynamic singularity is always unphysical (unless p s = p max ) even if we allow for negative entropy. Since here the thermodynamic singularity can be physical for some combinations of (q, h), it is interesting to check the case when thermodynamic singularity coincides with a critical point. In the context of third (and higher) order Lovelock gravity, this situation gives rise to critical exponents that differ from the mean field theory values, and are (so far) the only such examples in black hole physics [29,42]. However, as we shall see below, it turns out to be impossible to tune q and h such that a critical point occurs at the thermodynamic singularity while still respecting the various physicality constraints. Nonetheless, the thermodynamic singularity itself can occur within physical constraints so we still have the peculiar situation where at the 'singular' volume v s the pressure is constant at all temperatures or at the 'singular' temperature t s , the volume is constant at all pressures. This peculiarity is very different from conventional fluids. Unlike the spherical case, here physicality constraints put an upper bound on both volume and pressure. Consequently the p − v diagram shows that physical black hole solutions are confined to a compact region in (p, v) space. As a result, the p − v diagram Right: for q = 0.4, highlighting the effect of electric charge. Enforcing all physicality constraints (S > 0, p < p max ), none of these apparent critical volumes will be physical critical points. will be discontinuous, as one can see from Figure 13. In particular, as h → 0 − , p s → p max +q 2 /8 and the oscillatory portion of the p−v diagram moves to lower pressures, while as h → −∞, the oscillatory portion of the p − v diagram is 'shifted' upwards. Therefore, we have a situation where for a given fixed h < 0, there is a maximum temperature t max above which the isotherms are not physical, i.e. no physical black hole solution exists for t > t max . For example, q = 0, h = −0.750 we have t max ≈ 0.69 and above t max the physical region contains no isotherms at all -hence no black holes can exist, as shown in Figure 13. This does not happen for spherical black holes because in that case the black hole volumes are bounded only from below and hence the physical (p, v) region is not a compact set. Therefore for hyperbolic black holes with any fixed (q, h) with h < 0, the entire thermodynamic phase space . We note that this is generic feature of hairy hyperbolic black holes in Gauss-Bonnet gravity in any dimension. Here in five dimensions, for 0 < h 0.540, there is always a black hole solution but still with discontinuous p − v diagram. Recall that for small, positive h the thermodynamic singularity occurs in the physical region of p − v space and the g − t diagram will show the characteristic 'reconnection' near the thermodynamic singularity, illustrated in Figure 14. We found that for h < 0, one can observe the following thermodynamic behaviour: zeroth-order phase transitions (0PT), first-order phase transitions without critical points (which we denote 1PT), and a double reentrant phase transition (which we denote as RPT2). Figure 14 shows how positive entropy and 'Gibbs reconnection' allows a zeroth-order phase transition to occur, since the Gibbs free energy is discontinuous. 
For h > 0, only a zeroth-order phase transition occurs. Note that the VdW-type oscillation is now completely within the physical region, and so the Gibbs free energy can display swallowtail structure even if the critical point itself is unphysical. We also classify this as 1PT. In each plot, the point where the isotherms cross marks the thermodynamic singularity. This occurs in the physical region in the right-most plot, a result which was previously unobserved for d = 5 Gauss-Bonnet gravity. Representative plots of the Gibbs free energy for the 1PT and RPT2 cases are shown in Figure 15 in the context of charged hyperbolic black holes. It is interesting that such a variety of thermodynamic behaviour is present even in the absence of critical points. Earlier it was mentioned that it is of interest to find out if the thermodynamic singularity can coincide with critical points, since this may lead to non-standard critical exponents. We will now formally show that if all maximum pressure and positive entropy constraints are enforced, there is no physical critical point that coincides with the thermodynamic singularity for any h and q. A situation such as this would occur when thermodynamically singular points also satisfy Eq. (4.12), giving To maintain positive entropy, Eq. (4.5) requires that Demanding that h s,c , t s,c , p s,c , S > 0 and p < p max = 3/(4π), we see that the physical thermodynamic singularity can also be a critical point if it satisfies Eq. (4.16) and 24 7π < |q| < 6 π and |q| < 3 π (4.18) which cannot be simultaneously satisfied. Therefore, we are forced to conclude that a critical point at the thermodynamic singularity is unphysical in d = 5. However, suppose for argument's sake that we relax the entropy condition. Then we only need to ensure that 24 7π < |q| < 6 π (4.19) We shall consider the most general case in d = 5. If we do a transformation and do a Taylor expansion of the equation of state about the critical point, we obtain From this expansion, it is easy to calculate the critical exponents using standard techniques (see, for example, [22].) Doing so we find that, which are the usual Van der Waals mean-field critical exponents. Therefore, when the critical point occurs at the thermodynamic singularity (relaxing the entropy constraint), it is characterized by standard mean-field theory critical exponents. This demonstrates that a critical point occurring at a thermodynamic singularity is not a sufficient condition to observe the non-standard critical exponents found in [29]. This argument is true for all values of the electric charge that satisfy the above constraints; however, when q = 6/π (corresponding to p = p max ), the critical point is indeed an isolated critical point of the type reported in [29]. We shall return to this topic later in our discussion of third order Lovelock gravity. Finally we note that since the thermodynamic singularity is in the interior of the physical region, the crossing isotherms imply that for v < v s we have usual VdW property while for v s < v < v max we have reverse VdW behaviour (RVdW), because for volumes within this range, lower temperature implies higher pressure (c.f. Figure 13). We emphasize that both VdW and RVdW can occur within the same physical sector because conformal hair allows a physical thermodynamic singularity where isotherms cross at that point. Overall, we observe that charged hyperbolic hairy black holes can exhibit other interesting thermodynamic behaviour in d = 5 even without physical critical points. 
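Since the exponents obtained at the thermodynamic singularity are identified above as the usual Van der Waals mean-field values, they can be written out explicitly; with near-critical variables $\tau = t/t_c - 1$ and $\omega = v/v_c - 1$ (a standard choice, not necessarily the paper's notation), the exponents governing the specific heat, order parameter, compressibility and critical isotherm are
$$\alpha = 0\,, \qquad \beta = \tfrac{1}{2}\,, \qquad \gamma = 1\,, \qquad \delta = 3\,.$$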
For h < 0, we observe 0PT, 1PT, RPT2 for both the charged and uncharged cases, while for h > 0 only 0PT is observed for the uncharged case while all three phenomena can be seen in charged case. Furthermore, we can have a first-order phase transition with a VdW-type swallowtail structure for the charged case. These are all highlighted in Figure 15. We also have a physical region which can accommodate both VdW and reverse VdW-type behaviour. These are all new features of five-dimensional hyperbolic Gauss-Bonnet black holes made possible by the conformal hair. A full classification can be found in Table 2. At p = p s the disjoint branches of the Gibbs free energy 'reconnect' (center plot) and then split once more as pressure is increased, forming new asymptotic behaviour. Note that p s depends on q and h hence the value of p s for the left g − t diagram is different from the centre and the right g − t diagrams. In all cases, a dashed line indicates a branch of black holes with negative entropy. At t = t 1 , there is first order phase transition followed by another first order phase transition at t = t 2 back to original branch. At t = t 3 , there is zeroth order phase transition due to negative entropy. Thus we have two reentrant phase transitions. Centre: g − t diagram for h = 0.2, q = 0.4, p = 0.9p s ≈ 0.201 < p max , showing a first-order phase transition that is not of the VdW type. Right: g − t diagram for h = 0.2, q = 0.4, p = 1.01p s ≈ 0.226 < p max . Note the swallowtail structure at the lower left. Since the critical point is unphysical for these hyperbolic black holes, this first order phase transition differs from the VdW type. P − v criticality in d = 6 We will discuss the six-dimensional case very briefly for hyperbolic black holes, in particular clarifying the role of the thermodynamic singularity. We will see that all the behaviour in d = 6 is similar to that in d = 5, as seen by comparing the list of thermodynamic phenomena in Table 2 and Table 3 in Section 4.4. In six dimensions, the equation of state is given by and the critical points satisfy the following: The maximum pressure is given by p max = 5/4π ≈ 0.3979. Positivity of entropy implies the constraint: As before, the entropy enforces a lower bound for the volume in the spherical case and an upper bound for the hyperbolic case. As mentioned earlier we will not discuss at length the six-dimensional spherical black holes but we emphasize that nothing thermodynamically interesting is lost as the five-dimensional case captures all the interesting thermodynamic behaviour for Gauss-Bonnet hairy black holes in higher dimensions. We have verified that the thermodynamics for the six-dimensional spherical case adds nothing new compared to the five-dimensional case. We will discuss very briefly the six-dimensional hyperbolic case in particular to explore the thermodynamic singularity in these black holes. The full classification of possible thermodynamic phenomena is presented in Table 3 in Section 4.4. Hyperbolic case For the hyperbolic case, Figure 16 shows that there is at most one physical critical point for the hyperbolic case in six dimensions. In fact, one can go further and show that by enforcing all physicality constraints, there are no physical critical points for any q and h = 0. (The (q, h) space plot similar to Figure 5 would be entirely black). However there are still interesting thermodynamic results, albeit none of which are new. A full classification of these is provided in Section 4.4. 
Similar to the d = 5 case, we can have reverse VdW, zeroth-order phase transition (0PT), first-order phase transition without criticality (1PT), as well as reentrant phase transition (RPT) for d = 6, and qualitatively similar to the d = 5 case. Overall we observe that the conformal scalar hair enables various thermodynamic behaviours that would otherwise not be possible without conformal hair in their respective dimensions. These black holes also exhibit a thermodynamic singularity, which occurs at a pressure given by and we can once again show formally that a physical thermodynamic singularity will never occur at any physical critical point. By performing the same calculation as in d The red curve is the locus of critical points and the blue curve is the locus of zero-entropy points. Here we observe that there is no critical point for h > 0 for uncharged case, but presence of charge allows critical points to exist for all h (not necessarily physical, e.g. we have not imposed maximum pressure constraints). To maintain positive entropy, Eq. (4.5) implies (4.29) Demanding that h s,c , t s,c , p s,c , S > 0 and p < p max = 5/(4π), we can show that these constraints cannot be simultaneously satisfied. In particular, the first four can be satisfied but p < p max is incompatible with the rest. Therefore we are again forced to conclude that a critical point at the thermodynamic singularity is unphysical in d = 6. We conjecture that this situation holds for this theory in arbitrary dimensions. However, the underlying reason is slightly different for d = 5 and for d ≥ 6. For d = 5, S < 0 is incompatible with other constraints while for d ≥ 6 we expect that p < p max is incompatible with the rest and this incompatibility gets worse as d → ∞. In some sense, the incompability in d > 6 is worse because there one does not have the correct asymptotics to speak of a proper AdS black hole whereas the physical status of S ≤ 0 is less clear. Summary: thermodynamics of Gauss-Bonnet hairy black holes We summarize here in the form of tables the thermodynamic behaviour of Gauss-Bonnet hairy black holes for d = 5 and d = 6, along with our nomenclature (Table 1) for the various kinds of transitions we observe. While the values of q 0 , q 1 , and q 2 are universal, the other specific values of electric charge q are for exemplary purposes; any value of charge chosen within the allowed range would yield the same qualitative phase behaviour. For given value of q in the table adjusting h leads to a continuous deformation of the phase diagram with distinct values h a , h b , h c , h d , h e , h f marking the boundaries between different kinds of criticality. Furthermore, note that the summary only suggests that the said criticality occurs in some parameter range between the threshold values. For example, for d = 5, σ = 1, q = 0.1 (Table 2), RPT does not occur for the entire interval (h c , h d ) but only in a subset within the interval. However it is the only distinct thermodynamic behaviour within this region. Note that all non-zero charges within a given row yield the same phase behaviour; the value q = 0.2 describes a representative case. Note that all non-zero charges within a given row yield the same phase behaviour; the value q = 0.2 describes a representative case. 3 rd order Lovelock gravity In this section we consider the thermodynamic behaviour of U (1) charged hairy black holes in cubic Lovelock gravity. 
We begin by making some general considerations and then proceed to focus on d = 7 and d = 8, which are the lowest dimensions for which cubic Lovelock terms are gravitationally active. Thermodynamic equations and constraints In Gauss-Bonnet gravity, requiring the existence of AdS asymptotics implies a maximum pressure induced by the cosmological constant. Similar constraints apply to cubic Lovelock theory. To check for the existence of an asymptotic AdS region, we consider the polynomial for the metric function in 3rd order Lovelock gravity. Being a cubic polynomial in f, this equation has three distinct solutions. One of these solutions does not have a smooth α_3 → 0 limit; we refer to this as the Lovelock branch (f_L). Of the remaining two branches, we denote the Gauss-Bonnet branch (f_GB) as the one that does not permit a smooth α_2 → 0 limit, and the Einstein branch (f_E) as the one that does. Each will have AdS asymptotics provided the Lovelock coupling parameters obey a certain inequality; if this inequality is violated, then only the Lovelock branch has a valid asymptotic AdS structure, whilst the Gauss-Bonnet and Einstein branches terminate at finite r. We construct the dimensionless quantities (v, t, h, a, m) in the usual way. Positivity of entropy implies the constraint that follows from Eq. (2.28). The equation of state can be shown to reduce to the usual no-hair case [29] by setting h = 0. The Gibbs free energy follows similarly; the physical state of the system is the one that globally minimizes g for given values of t, p, q and h at fixed α. Here in cubic Lovelock gravity we are confronted with an additional difficulty compared to the Gauss-Bonnet case considered earlier: there are now three parameters (α, q, h) that can be adjusted. This makes it cumbersome to completely characterize the thermodynamics in terms of the effects of q and h. We shall instead describe the salient effects of q and h in different regions of α that bear particular significance. P − v criticality in d = 7 For the time being we assume α > 0 for simplicity. In seven dimensions, the entropy condition (5.7) is trivially satisfied for spherical black holes with negative h. The equation of state takes the d = 7 form of the general expression above. Recall that we have assumed σ ≠ 0 because planar black hole solutions cannot admit nonzero hair. Therefore in cubic Lovelock gravity we also express the equation of state and Gibbs free energy in a simplified form where σ^(2k) = 1; this simplification is invalid if the planar case is included. The critical points satisfy equations whose expressions all reduce to the cubic Lovelock case studied in [29] when h = 0. As before, the qualitative effect of the hair is to introduce new terms with odd powers of the volume v. Spherical case For h = 0 it was shown in [29] that for α ∈ (0, 10) there is exactly one physical VdW-type critical point for both charged and uncharged spherical black holes. Furthermore, for fixed charge q this critical point may become unphysical for sufficiently large α. From our study of Gauss-Bonnet hairy black holes, we have seen that h acts as a 'tuning parameter' which gives a continuous one-parameter family of phase diagrams allowing for more thermodynamic possibilities. We will show that this also occurs in cubic Lovelock gravity: for hairy black holes in d = 7, we will recover all results previously found for d = 7 and d = 8 in [29], along with all of the thermodynamic behaviour we described for hairy Gauss-Bonnet black holes in the previous section.
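The critical points referred to throughout this section are obtained from the standard conditions ∂p/∂v = 0 and ∂²p/∂v² = 0 at fixed t (and fixed q, h, α). Since the full hairy cubic Lovelock equation of state is not reproduced above, the following is only a minimal symbolic sketch of that procedure, using the ordinary Van der Waals fluid as a stand-in; in an actual analysis the expression from Eq. (5.8) would replace the stand-in p.

```python
# Minimal sketch: locating critical points of an equation of state p(v, t)
# from dp/dv = 0 and d^2p/dv^2 = 0 at fixed t. The Van der Waals fluid is a
# stand-in; the hairy Lovelock equation of state would replace `p` in practice.
import sympy as sp

v, t, a, b = sp.symbols('v t a b', positive=True)
p = t / (v - b) - a / v**2            # stand-in equation of state

crit_eqs = [sp.diff(p, v), sp.diff(p, v, 2)]
sol = sp.solve(crit_eqs, [v, t], dict=True)
print(sol)                            # v_c = 3b, t_c = 8a/(27b)

p_c = sp.simplify(p.subs(sol[0]))
print(p_c)                            # p_c = a/(27 b**2)
```

The physicality filters discussed in the text (positive entropy, pressure below the maximum allowed value) are then applied to such solutions, which is what produces the coloured regions of Figure 17.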
Figure 17 shows the possible number of critical points for four representative (q, h) values in parameter space as α varies. We see that the main difference lies in the grey region, which is the region of parameter space where the critical points become unphysical due to the maximum/minimum pressure constraints. A careful analysis reveals that the grey region only applies to the Gauss-Bonnet and Einstein branches, since the Lovelock branch still possesses AdS asymptotics beyond the maximal pressure. To study the Lovelock branch, f_L, we can simply ignore the grey region, as per the centre diagrams in Figure 17. We shall not attempt to fully classify the possible thermodynamic phenomena as functions of α, q, h here, but instead focus on the results which differ from the no-hair case; we refer to the summary in Section 5.4 for further, more detailed examples not discussed below. We will focus specifically on three results that do not occur in the absence of scalar hair: (1) reentrant phase transitions, (2) triple points, and (3) virtual triple points. In the absence of scalar hair, the lowest dimension for which an RPT can occur for uncharged spherical black holes is d = 8. Figure 18 (left) shows the Gibbs free energy for h = 0.5, q = 0, α = 1, demonstrating that h ≠ 0 allows RPT to occur in uncharged seven-dimensional spherical black holes. The coexistence plot for this transition is qualitatively no different from that shown in Figure 7. Furthermore, in the no-hair case, the lowest dimension for a triple point is also d = 8, and as usual small charge is needed. In contrast, we find that hair allows a triple point to be realized in d = 7, e.g. for h = 0.5, q = 0.1, α = 1 (c.f. Figure 18). The virtual triple point phenomenon observed in Gauss-Bonnet hairy black holes in the previous section (c.f. Figure 10 and accompanying discussion) indicated that h acts like a 'tuning parameter' that relocates various critical points of the system. Figure 17. The (q, h) parameter space for various α, d = 7 and σ = +1: a plot of the possible number of physical critical points for various α. Black corresponds to no critical point, blue to one critical point, red to two, green to three, and magenta to positive pressure but negative entropy. In these figures there is a sliver of black region on the vertical axis, since for q = 0 the parameter space as a function of h is qualitatively different from when q ≠ 0. Grey corresponds to the region of parameter space where the physical critical point lies outside the maximum pressure bound for the Einstein and Gauss-Bonnet branches; it is physical only for the Lovelock branch f_L. Top left: α = 1 < √3. Note that since there is only one branch with valid AdS asymptotics for α < √3, there is no maximum pressure constraint and hence there is no grey region. Top centre: α = 2 without the maximum pressure constraint; this is the parameter space for the Lovelock branch f_L. Top right: α = 2 with the pressure constraints imposed. Bottom left: α = 4.55. This is approximately the value of α at which the blue region ends at q = h = 0 in d = 7. In particular, for h = 0 there is a minimum nonzero charge q_min, given by the boundary between the blue and grey regions, such that if q < q_min then the critical pressure will exceed the maximum allowed pressure [29]. This special value of α depends on the dimension d. For α ≳ 4.55, the grey region extends to negative h. Bottom centre: α = 6 > 4.55, without the pressure constraints. Bottom right: α = 6, with the pressure constraints imposed. A simple observation shows that increasing α simply expands the grey region away from the magenta region and shrinks the green and red regions; eventually, for sufficiently large α, there can be only one physical critical point.
This is essentially the reason why we can obtain VTP behaviour. Once a triple point occurs, one can use h to adjust the location of the critical points within the triple point phase diagram until a critical point becomes a VTP, i.e. touches one of the first-order coexistence lines as in Figure 10. Therefore, it is not difficult to see that if we can find a triple point for a fixed triple (q, h, α), then h would generate a one-parameter family of phase diagrams containing a VTP. For example, a triple point occurs for α = 1, h = 0.5, q = 0.1 and p ≈ 0.51, and we can obtain the following exact sequence of phase diagrams as h is adjusted: VdW → VdW* → VTP → TP → VTP → VdW*, employing the notation of Section 2, where the arrow denotes an increase in h and each behaviour occurs for some pressure p. This example is valid for the Lovelock branch only, since α = 1, and care must be taken to ensure the pressure constraints are satisfied for the Gauss-Bonnet and Einstein branches. A more general example, applicable to all three branches, has a triple point and a longer sequence of critical behaviours: VdW → VdW* → VTP → TP → VTP → VdW* → VdW, where (in this example) α = 3, h = 0.5, q = 0.3 and p ≈ 0.044. This is because, for α ∈ (2.5, 3.8), there is a small additional range of h with VdW-type behaviour, which we can understand by plotting figures similar to Figure 19. This discussion shows that the d = 7 cubic Lovelock spherical black holes reproduce all of the thermodynamic phenomena found earlier for Gauss-Bonnet spherical hairy black holes, as well as all phenomena previously obtained [29] for d = 8 hairless cubic Lovelock spherical black holes. We provide a more detailed representative analysis in Section 5.4. Hyperbolic case Following [29], we split the range of α into four distinct regions bounded by the following values of α: 0, √(5/3), √3 and 3√(3/5). Within each region we see distinctive thermodynamic behaviour. In the associated figure there can be up to two critical points; for q = 0.1 (right panel), similar to the Gauss-Bonnet case, the presence of a sufficiently small charge enables up to three critical points to exist provided α is also sufficiently small. There is a maximum h = h*, given by the intersection of the solid curves and the dashed curves, such that no criticality occurs for h > h* due to negative entropy; note that h* appears to be independent of α. In the no-hair case [29], special attention was given to the point α = √3, since at this point an isolated critical point was found at a thermodynamic singularity. However, we will show in this section that ICPs are not a unique property of α = √3, but can occur for a much larger parameter space when h ≠ 0. Figure 20 provides the (q, h) parameter space in terms of the number of physical critical points at various representative α within the four regions. These plots show the four qualitatively distinct behaviours in (q, h) space that are characteristic of these partitions of α. We do not exhaust all possibilities here, but list some further examples in Tables 6 and 7. For more details the reader is directed to Section 5.4. For α < √(5/3) (e.g.
α = 1), we see that the (q, h) parameter space is partitioned into two regions, namely the one with one physical critical point (blue) and the one with no critical point (black). If q = 0, then for h ≠ 0 there is one critical point with standard VdW behaviour, and for h = 0 the behaviour is of ideal-gas type and there is no criticality. For q ≠ 0, the thermodynamic behaviour is VdW for h > h_0, where the value of h_0 depends on the charge (h_0 corresponds to the boundary between the blue and black regions of Figure 20), while the system displays ideal-gas behaviour if h < h_0. Thus for this range of α the thermodynamics is simple: we observe only VdW and ideal-gas behaviour. For √(5/3) < α < √3 (e.g. α = 1.5), the situation is more complicated, because now it is possible to have more than one critical point (the red regions in Figure 20). From Figure 20 we see that for all q there is a minimum h below which all critical points have negative entropy. It has been shown in [29] that for q = h = 0 there are two critical points corresponding to VdW and RVdW behaviour with an unphysical thermodynamic singularity. As q increases (keeping h = 0), one critical point becomes unphysical: the VdW behaviour ceases while the RVdW behaviour remains. In Figure 20 we see that this behaviour persists to some extent for nonzero h, though now there is a region with no critical points (black). We will use three exact sequences to represent the thermodynamic behaviour in the three different regions; in these sequences the RPT is qualitatively similar to Figure 8, and a square bracket indicates that both behaviours occur in a system with fixed α, q, h. Figure 21 shows the situation where both VdW and RVdW behaviour occur in the same physical region. Note that there are two critical isotherms (red), one corresponding to the VdW type and the other to the RVdW type. RVdW behaviour is characterized by a VdW-type 'oscillation' but at temperatures higher than the critical temperature, while the VdW-type oscillation occurs at temperatures below the critical temperature (blue). Right: the corresponding p − t diagram; the RVdW behaviour manifests itself in an 'inverted' coexistence curve (shown here with a black line). The 1PT phenomenon corresponds to an infinite coexistence curve, where a swallowtail structure persists at all pressures without a critical point. Also note that, since the exact sequence for q = 1.0 shows that VdW and RVdW are lost as h increases, we expect that there is a unique h* where the transition occurs. In particular, at h = h* ≈ 1.493070374 the critical points coalesce for this value of q = 1.0, as shown in Figure 22. It turns out (see Section 6 for the explicit calculation) that this is an instance of the isolated critical point (ICP) phenomenon, which has non-standard critical exponents. For h = 0 this point occurs exactly when α = √3 [29] and therefore coincides with the thermodynamic singularity. We see from this example that the conformal hair allows an ICP to occur for a range of α; the no-hair α = √3 condition is no longer required. We will further elucidate the features of this isolated critical point later in this paper. In particular we will show that certain properties of the isolated critical points described in e.g. [29,42,67] are in fact independent of the critical points themselves. For √3 < α < 3√(3/5) (e.g. α = 2), the situation is surprisingly simple. Although we can choose four different charges q = 0, 0.1, 1, 3 corresponding to four regions in (q, h) space with different qualitative features (c.f.
Figure 20), there are only two possible exact sequences of critical behaviour, in which the arrows once again correspond to increasing h. Here 1PT refers to a first-order phase transition with an infinite coexistence line. It is remarkable that h ≠ 0 does not add any additional features to the system, since 1PT is already a feature of h = 0 hyperbolic black holes for α ∈ (√3, 3√(3/5)). Lastly, for α > 3√(3/5) (e.g. α = 4), we can find instances of RPT2 along with other simpler phenomena such as 1PT (with an infinite coexistence line) and also purely 0PT, depending on the pressure, as shown in Figure 23. Referring to the leftmost plot: when the pressure is small, the two branches are separated and the 'parabolic' part lies below the straighter curve, leading to a purely 0PT phenomenon. As pressure increases, the curves eventually intersect and an RPT2 is observed. As the pressure is increased further, a point is reached where there is one intersection point, i.e. purely 1PT, corresponding to an infinite coexistence line in the phase diagram. As h is varied, further sequences of critical behaviour are observed, where RPT2 refers to a double reentrant phase transition. Note that if at some pressure p_0 we find an RPT2, lowering the pressure sufficiently will lead to a 0PT, whereas raising the pressure sufficiently will lead to a 1PT, as can be observed from the rightmost plot of Figure 23. P − v criticality in d = 8 In eight dimensions, the equation of state is given by Eq. (5.14), and the critical points correspondingly satisfy conditions in which the presence of hair now introduces additional even powers of v. In the following paragraphs we briefly comment on the thermodynamic properties of spherical and hyperbolic black holes in d = 8. As it turns out, there is a close connection between the results in d = 7 and d = 8, so we simply provide a summary of the critical behaviour. Considering spherical black holes, in Figure 24 we plot the number of critical points in (q, h) parameter space for various α. Inspection of Figure 24 reveals obvious qualitative similarities between d = 7 and d = 8, and indeed we find nothing qualitatively new in d = 8, with the sequences of critical behaviour reducing identically to the seven-dimensional case. Figure 24 uses the colour coding of Figure 17: the black, blue, red and green regions represent zero, one, two and three physical critical points respectively, and the grey region represents the part of parameter space where the critical points are unphysical for the Gauss-Bonnet and Einstein branches (for the Lovelock branch we simply ignore the grey region, as shown in the centre plots). Top left: α = 1. Top centre: α = 2, without pressure constraints, valid for the Lovelock branch. Top right: α = 2, with pressure constraints imposed; the grey region is where the critical pressures exceed the maximum allowed pressure, applicable to the Gauss-Bonnet and Einstein branches. Bottom left: α = 4.55. Bottom centre: α = 6, without pressure constraints. Bottom right: α = 6, with pressure constraints imposed. For example, Table 5.3 provides representative parameter values for which a reentrant phase transition and a triple point occur. The associated sequence of critical behaviour which results from varying h is identical to the seven-dimensional case. Moving on to consider the hyperbolic case, we can see from Figure 25 that the possible critical points and entropy conditions very closely resemble the d = 7 case once again.
Furthermore, our analysis has not revealed any additional critical behaviour different from that reported in the d = 7 case. Therefore, we do not pursue the criticality analysis any further here. Once again, in the d = 8 case we find instances of ICPs occurring away from thermodynamic singularities provided α ∈ (√(5/3), √3) (ensuring the existence of two critical points). A particular example is given by α = 1.5, q = 1, h = 1, which gives both VdW and RVdW behaviour similar to Figure 21. As h is increased, eventually the two critical points coalesce at h ≈ 2.2884146804856005889. Performing similar computations as in d = 7, we find that the critical exponents are given by Eq. (6.6). Thus in d = 8 these black holes belong to the same universality class near the isolated critical point. Summary: thermodynamics of cubic Lovelock hairy black holes Here we summarize the possible thermodynamic behaviours of cubic Lovelock hairy black holes for d = 7. We find that d = 8 provides no new critical phenomena, so we do not report a detailed analysis here. For concreteness we use specific examples of q and α wherever appropriate to highlight representative thermodynamic behaviour. We mark the boundaries between different thermodynamic behaviours by various thresholds h_a, h_b, h_c, h_d, which are of course dependent on the values of q and/or α. While the threshold values change with q and α, the qualitative thermodynamic behaviour is robust. In many cases we do not provide explicit threshold values, as their actual values do not illuminate the physics and they depend directly on q and α. Instead we focus on describing the underlying sequences of critical behaviour, and provide sample values of h where a given behaviour can be observed for certain values of q and α. We employ the same notation as in the Gauss-Bonnet case and additionally denote the isolated critical point phenomenon as ICP. We ignore the maximum pressure condition, as there is always one branch that has AdS asymptotics. Lastly, for the hyperbolic case recall that the entropy condition forces black hole volumes to be bounded from above in general, and there is some maximum h beyond which black holes have negative entropy for any volume. For simplicity we do not list these regions in the tables below, but remark that this should be taken into consideration. Isolated critical points and superfluid transition for cubic Lovelock black holes Thus far we have explored various critical phenomena resulting from conformal hair and we have seen that richer thermodynamic behaviour is possible. In this section we focus on two important novel phenomena, namely (1) thermodynamically non-singular isolated critical points, and (2) a 'superfluid transition' in hyperbolic hairy black hole solutions. Isolated critical points have been previously found in [29,42,67], but we shall clarify their properties in this section. The term 'superfluid transition' for our black hole spacetimes will also be motivated here. Isolated critical points In previous work focusing on Lovelock and quasi-topological gravity (without hair), examples of isolated critical points characterized by non-standard critical exponents have been found [29,42,67]. In these studies, the isolated critical point is an extremely special phenomenon, occurring in all cases when the critical point coincides with the thermodynamic singularity, and for massless, hyperbolic black holes. Furthermore, the coupling constants have to be finely tuned to allow the existence of these critical points, e.g.
α = √3 for the cubic theories. Here we demonstrate that the thermodynamic singularity in fact has nothing to do with the existence of isolated critical points. For simplicity, we restrict our discussion to d = 7 cubic Lovelock gravity. Recall from (5.12) that the critical temperature is given in terms of the critical volume, with q remaining a free parameter. It is remarkable that under these conditions the discriminant of the polynomial determining the critical volume vanishes. It is straightforward to expand the equation of state near this critical point. Although the resulting expression for general α is too cumbersome to present here, it takes the general form p/p_c = 1 + Aτ + Bω²τ + Cω³ + · · · , (6.5), where A, B, C are complicated α-dependent coefficients that can be computed exactly. The most significant feature of this expression is that the ωτ term vanishes identically. It follows from this expansion that the critical exponents for this critical point are the same as those found for the isolated critical point in [29]. Here the critical point does not occur at the thermodynamic singularity, except for the two specific values of α given above, and in these two cases the critical exponents are the same as in (6.6). The behaviour in the p − t plane is shown in Figure 26 for a representative ICP. We initially see two distinct critical points, which then merge as h approaches the special value given by Eq. (6.2). From these plots it is abundantly clear that these ICPs do not occur at the thermodynamic singularity, since it is obviously the case that (∂p/∂t)|_{v=v_c} ≠ 0. This suggests that all that is required to have an isolated critical point is for two (or more) critical points to coincide. Here we find that this can occur when both VdW and RVdW behaviour are physically manifest (i.e. respect all of the relevant physicality constraints); the two critical points can in general be made to coincide through a tuning of h. Note that in the previous work on isolated critical points, the black holes possessing these non-standard critical exponents are all massless; here this is not the case. The dimensionless mass m for d = 7 is easily computed and can be both positive and negative for the critical points we have discussed, as shown in Figure 27. The negative mass solutions are not an issue here, since in the case σ = −1 sensible negative mass black holes exist, provided appropriate identifications are made [68]. This demonstrates that vanishing mass is not a generic feature of black hole systems with isolated critical points. Superfluid-like lambda transition One might be interested in the expansion Eq. (6.5) for the cases B = 0 or C = 0, since then the critical exponents γ and δ would differ from those presented above in Eq. (6.6). While we have not been able to determine any interesting behaviour for C = 0, for B = 0 we find a solution in which the critical temperature t_c remains a free parameter. Therefore, we now have: (1) infinitely many critical points, given by v_c = 15^(1/4) at arbitrary temperature, and (2) a critical pressure that scales linearly with the critical temperature. The p − v diagram (shown in Figure 28) shows that every isotherm is a critical isotherm, while the phase diagram shows that there is a linear locus of critical points; thus we have a situation where there is no first-order phase transition but a continuous line of second-order phase transitions.
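Before examining the λ-line further, it is worth recording how non-standard exponents follow directly from an expansion of the form (6.5) in which the ωτ term is absent. The exponent values themselves are quoted in Eq. (6.6); the short derivation below is our own reconstruction, assuming the usual Maxwell equal-area construction near the merged critical point.

```latex
% Our reconstruction of the exponents implied by
% p/p_c = 1 + A*tau + B*omega^2*tau + C*omega^3  (no omega*tau cross term),
% assuming the standard Maxwell construction.
\[
\tau = 0:\quad \frac{p}{p_c} - 1 \sim C\,\omega^{3}
\;\;\Rightarrow\;\; \delta = 3 ,
\]
\[
\partial_{\omega} p \;\propto\; 2B\,\omega\tau + 3C\,\omega^{2}
\;\;\Rightarrow\;\; \omega_{\text{coex}} \sim \tau
\;\;\Rightarrow\;\; \beta = 1 ,
\]
\[
\kappa_T^{-1} \;\propto\; \partial_{\omega} p\,\Big|_{\omega \sim \tau}
\;\sim\; \tau^{2}
\;\;\Rightarrow\;\; \gamma = 2
\quad \text{(approaching along the coexistence curve)},
\]
\[
c_v \ \text{analytic} \;\;\Rightarrow\;\; \alpha = 0 .
\]
```

On this reading, the essential ingredient behind the non-standard exponents is the absence of the ωτ cross term, not the location of the critical point, consistent with the point being made in the text.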
In Figure 28 we show the isotherms and the phase diagram, and in Figure 29 we display the Gibbs free energy and the specific heat, i.e. c_p = −t ∂²g/∂t². In these latter plots, we see that for a given pressure, at the temperature which satisfies Eq. (6.11), the specific heat diverges, signaling a second-order (continuous) phase transition. The Gibbs free energy and specific heat for d = 7 hyperbolic black holes with infinitely many critical points: Left: the Gibbs free energy for various pressures, chosen such that the critical points occur at t_c = 3, 5, 7, corresponding to the red, blue and black curves respectively. Centre: a plot of the specific heat capacity c_p = −t ∂²g/∂t²; note the divergence in the specific heat capacity at t_c = 3, 5, 7, indicating second-order phase transitions at these temperatures. Right: a zoomed-in version of the case where the divergence occurs at t = 3. This plot bears a strong resemblance to the 'lambda phase transition' between the fluid and superfluid phases of 4He. Lines of second-order phase transitions occur in condensed matter systems and correspond to, for example, fluid/superfluid transitions [69], superconductivity [70], and paramagnetism/ferromagnetism transitions [71]. Building on the black hole/van der Waals fluid analogy [22], the natural interpretation here is that this second-order phase transition between large and small black holes corresponds to a fluid/superfluid-type transition. The resemblance to the fluid/superfluid λ-line transition of 4He (cf. Figure 2 of [62]) is striking. In each case, a line of critical points separates the two phases of the fluid, and the specific heat takes on the same qualitative "λ" structure. Of course, the phase diagram for helium is more complicated, including solid and gaseous states as well. This is to be expected, since helium is a complicated many-body system, while our black hole solutions are comparatively simple, being characterized by only four numbers: v, h, q and α. However, it is remarkable that with so few parameters we can capture the essence of the λ-line. Unfortunately, most of the interesting properties of a superfluid are either dynamical or require a full quantum description to understand (see, e.g. [72,73] for an introduction and review). Here we do not have access to a model of the underlying quantum degrees of freedom, and so cannot explore these properties at a deeper level. In fact we can generalize this result to arbitrary dimension d ≥ 7. Starting from the equation of state in arbitrary dimensions given in Eq. (5.8), choosing appropriate values of the external parameters and fixing σ = −1, i.e. hyperbolic horizon topology, we still obtain a line of second-order phase transitions, because there is a continuum of critical points. It is natural to wonder if there is any pathological behaviour hiding behind the scenes here. To explore this we consider the Kretschmann scalar evaluated on the horizon, Eq. (6.14). The first derivative of f is clearly finite for any finite temperature, so we need only consider f ′′. For simplicity we consider the case d = 7, where the superfluid solution was first observed [62]. Expanding near v_c, p_c (Eq. (6.15)), we find the result to be completely finite both at the critical point and near it; there are no curvature singularities associated with this thermodynamic behaviour. For thoroughness, we have also examined the explicit solution to the field equations in detail. Outside the horizon the metric function is well-behaved and the Kretschmann scalar is everywhere finite.
Within the horizon, the metric function extends all the way to r = 0, but develops an infinite first derivative at finite r for large enough p. The latter is not new behaviour, nor is it in any way fatal: similar behaviour occurs for charged Gauss-Bonnet and cubic Lovelock black holes, but there the metric function actually terminates at this point (cf. top right plot of Figure 3). Thus, there is nothing pathological about the curvature invariants of these black holes. Furthermore, both the Gibbs free energy and the temperature have smooth expansions near each critical point. These results are not particularly surprising: the values of v_c and α used here do not correspond to the thermodynamic singularity. The specific heat is positive (indicating thermodynamic stability), and the positivity of entropy, ghost, and tachyon conditions are all satisfied here. We note that due to the value of α = 5/3 < √3, the result is only valid for black holes of the Lovelock branch, since the Gauss-Bonnet and Einstein branches do not exist for this value of α. The λ-line is a line of critical points, and it would be desirable to determine the critical exponents along this line. The standard method of computing the critical exponents ultimately fails in this case, as we now explicitly highlight. To see this, suppose we proceed to calculate the critical exponents in the naive way (we do this for d = 7 for concreteness). We can expand the equation of state near any one of the infinitely many critical points, with ω_1 and ω_2 free; of the resulting solutions, one is not sensible since in the limit τ → 0 we are left with a negative t_c, while for the trivial solution the order parameter η = v_c(ω_2 − ω_1) vanishes, suggesting that β = 0. This result is unchanged by the inclusion of higher-order terms in the expansion of the equation of state. The exponent γ governs the behaviour of the isothermal compressibility near criticality. Computing this for the above expansion (Eq. (6.20)), we find that κ_T in the limit of the critical point is independent of τ, suggesting that γ = 0. Therefore, by this argument, it seems that each critical point on this line of criticality is characterized by the critical exponents α = 0, β = 0, γ = 0, δ = 3. While these are manifestly 'non-standard' critical exponents, these bizarre results signal that something has gone wrong in the approach. The problem lies in the assumption that the pressure is still the ordering field. Here, this is not the case: changing the pressure merely changes the temperature at which the second-order phase transition occurs. Thus, pressure is no longer the appropriate ordering field, a situation similar to that in liquid 4He at the λ-line [69]. To calculate valid critical exponents, the correct ordering field, Θ, must be identified, and in our case there are three options for Θ: q, h or α. It turns out that the obtained critical exponents are the same regardless of which choice is made, but the electric charge q is in some sense the most natural choice, since it is easy to imagine adjusting q by throwing charged material into the black hole. To calculate the critical exponents, we proceed as usual, expanding the ordering field near any of the critical points in terms of τ and ω.
We find Θ/Θ_c = 1 − Aτ + Bτω − Cω³ + O(τω², ω⁴), (6.24), where the values of A, B, C depend on the pressure p and will be different (but non-zero) depending on which choice is made for the ordering field. This expansion yields the following critical exponents: α = 0, β = 1/2, γ = 1, δ = 3, (6.25), which govern the behaviour of the specific heat at constant volume, C_V ∝ |τ|^(−α), the order parameter, ω ∝ |τ|^β, the susceptibility/compressibility, (∂ω/∂Θ)|_τ ∝ |τ|^(−γ), and the ordering field, |Θ − Θ_c| ∝ |ω|^δ, near a critical point. These results coincide with the mean-field theory values, and agree with those for a superfluid in d > 5 (cf. Table I of [69]). One way to visualize this result is that the line of critical points in the p − t plane represents a line where a surface of first-order phase transitions terminates in some larger space (p, t, Θ). Our calculation of critical exponents then represents the behaviour of the system as the line of criticality is approached, not in (p, t) space, but rather in this larger space. We highlight this in Figure 30 for the case (p, t, q²). We can go a bit further and analyse under what circumstances these types of λ-lines can be expected for black holes. The necessary feature of our result is that the conditions for a critical point are satisfied irrespective of the temperature. Figure 30. Line of criticality in (p, t, q) space: if the space of thermodynamic parameters is enlarged, the line of critical points in the p − t plane (the bold red line here) can be thought of as the critical line at which a surface of first-order phase transitions terminates. For each constant-p slice, this is a first-order small/large black hole phase transition as temperature increases. Thus, consider a general black hole equation of state of the form P = a_1(r_+, ϕ_i) T + a_2(r_+, ϕ_i), (6.26), where the ϕ_i represent additional constants in the equation of state (here they would correspond to α, q and h). Our condition will be met provided the following holds: ∂a_1/∂r_+ = 0, ∂²a_1/∂r_+² = 0, ∂a_2/∂r_+ = 0, ∂²a_2/∂r_+² = 0, (6.27), with both a_1 and a_2 non-trivial. From this perspective, it makes sense that we found the behaviour that we did: here we have four equations, and in the case considered in this paper there are a total of four variables. It is natural then to wonder if other systems exhibit this behaviour. We have checked this for the rotating black hole of 5d minimal gauged supergravity [74], which has four parameters, but have found that no solution to the above equations exists. Furthermore, in the case of higher-order Lovelock gravity with electric charge (but not hair), solving the four equations forces a_1 = 0. Hence such a line of critical points does not occur (or, if something similar does, it happens under a different configuration). It is possible that the superfluid transition could take place in five dimensions when coupling the scalar field to the quasi-topological density, along the lines considered in [75]. Thus it remains an interesting line of future work to determine for what other black hole solutions these superfluid-like transitions can occur.
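The criterion in Eqs. (6.26)-(6.27) is mechanical to check for any candidate equation of state. As a minimal sketch (our illustration, not taken from this paper), the four-dimensional charged AdS black hole of the extended phase space literature, with the commonly written equation of state P = T/v − 1/(2πv²) + 2q²/(πv⁴), fails the criterion immediately, because a_1 = 1/v has no stationary point:

```python
# Sketch: checking the lambda-line criterion of Eqs. (6.26)-(6.27) for an
# equation of state of the form P = a1(v)*T + a2(v).
# Example (for illustration only): 4d charged AdS black hole,
#   P = T/v - 1/(2*pi*v^2) + 2 q^2 / (pi*v^4).
import sympy as sp

v, q = sp.symbols('v q', positive=True)

a1 = 1 / v
a2 = -1 / (2 * sp.pi * v**2) + 2 * q**2 / (sp.pi * v**4)

conditions = [sp.diff(a1, v), sp.diff(a1, v, 2),
              sp.diff(a2, v), sp.diff(a2, v, 2)]

# A lambda line requires a common root of all four conditions.
sol = sp.solve(conditions, v, dict=True)
print(sol)   # [] -> a1' = -1/v^2 never vanishes, so no lambda line here
```

For the hairy cubic Lovelock black holes the four conditions can instead be satisfied, because the horizon variable together with (q, h, α) supplies exactly four adjustable quantities, matching the counting argument in the text.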
It was found that hyperbolic black holes always satisfy these conditions, but for a non-zero, constant scalar field the spherical black holes require coupling of the scalar field to cubic (or higher) terms to meet this condition. As a result, for the coupling constants required for spherical black holes in D < 7, the theory suffers from ghost instabilities unless the scalar field vanishes for a constant curvature vacuum. This could be remedied in D = 5 (by coupling to the quasi-topological term, for example), but for D = 6 there is no obvious cure to the problem for a non-vanishing scalar field. We then considered the thermodynamics of these hairy black holes in Gauss-Bonnet gravity. We recover a number of previously found results, such as van der Waals behaviour, reentrant phase transitions, and triple points. In addition to established thermodynamic behaviour for Gauss-Bonnet gravity, we find new behaviour such as virtual triple points. These correspond to a limiting case of an ordinary triple point, with a critical point occurring directly on a first-order coexistence line. In considering the thermodynamics of hairy black holes in cubic Lovelock gravity, we have found two particularly interesting results. First, we have clarified the connection between isolated critical points and thermodynamic singularities. In previous work, isolated critical points have been encountered only as very special results, always occurring for massless black holes at the thermodynamic singularity. The addition of scalar hair permits a family of isolated critical points which occur for black holes away from the thermodynamic singularity. The key requirement for the existence of such a critical point appears to be the merging of van der Waals and reverse van der Waals behaviour, and therefore has nothing to do with the thermodynamic singularity at all. Most interestingly, we have found for these hairy black hole solutions the first example of a black hole λ-line: a line of second-order (continuous) phase transitions. This phase transition bears an interesting resemblance to the superfluid phase transitions which occur, for example, in liquid 4He, leading to the name 'superfluid black holes' for those which exhibit this behaviour. We have found that a necessary condition for a black hole λ-line is the presence of three external parameters in the black hole equation of state. Hence, we have found them to occur for black holes in third (or higher) order Lovelock gravity with scalar hair and electric charge. However, this is by no means a sufficient condition, and determining further examples of black holes that exhibit this behaviour remains an interesting problem for future work.
23,643.2
2016-12-20T00:00:00.000
[ "Physics" ]
The Modified Phillips Curve as a Possible Answer to Japanese Deflation A modified Phillips curve is useful for explaining the contradictory findings that sometimes arise from conventional Phillips curve estimation. In this paper, we estimate the inflation–unemployment and real wage inflation–unemployment dynamics for both Japan and the United States using data between 1972:Q1 and 2014:Q4. We divide this into two roughly equally sized sub-periods, 1972:Q1–1991:Q4 and 1992:Q1–2014:Q4. The first sub-period tracks the Japanese economic boom prior to the bursting of the bubble economy; the second continues to reflect the long recessionary period in Japan that followed. The modified Phillips curve, serving as the aggregate supply (AS) curve, combined with a quantity equation with constant velocity as the aggregate demand (AD) curve in an AS–AD framework, reveals that much of the slowdown in Japanese inflation was due to the lack of the postwar acceleration of "productivity-based" real wage inflation, a feature unexplained within a traditional demand-oriented approach. Some of the efficacy of the productivity-based real wage acceleration that we identify is related to the use of this simpler formulation of the AD curve, which, even though it has an inherent analytical bias toward supporting the role of monetary policy, is permissible when the focus lies on the decisions of suppliers. Introduction The slope of the Phillips curve has become flatter over the past few decades, and until recently the relation was largely considered moribund. The watershed moment came when Gordon [1], and later Watson [2], adopted the approach of excluding long-term unemployment in reviving the negative correlation between inflation and unemployment. In this paper, we highlight another major development in the Phillips curve literature: the reformulation of the productivity variable as an eight-quarter change in the productivity trend (see Dew-Becker and Gordon [3]). This seems to have greatly contributed to the reemergence of the Phillips curve relation in 2013, with the estimation results in Gordon [1] indicating that this rather unfamiliar productivity measure exerts significant upward pressure on the U.S. inflation rate (see Table A1).1 However, in examining Fig. A1, we can see that for Japan the working of this new measure of productivity is rather uncertain. Gordon's ideal model setting, which allows the nonaccelerating inflation rate of unemployment (NAIRU) to vary over time, does not work as predicted and leads to very flat NAIRUs. Therefore, on further examination, we fix the value of the variance of the NAIRU, the simplest way being to adopt a constant NAIRU setting, which provides the first approach to estimating the Japanese Phillips curve.2 Critically, we find the estimation results of the Phillips curve with a constant NAIRU setting to be contradictory (see Table 1). In our analysis, we find the sign of the regression coefficient on the productivity variable to be positive for the earlier period, although weakly so. In contrast, the ordinary Phillips curve relation is negative for this same period. The question is why this is the case, and we believe the answer may well provide a remedy to the current problem of deflation in Japan. This paper is thus our attempt to provide empirical evidence and corresponding theoretical underpinnings to support this assertion. Therefore, in this paper, we formulate a modified Phillips curve relation and use it to reevaluate the above findings.
We find that the modified Phillips curve (aggregate supply [AS] curve), in conjunction with a quantity equation with constant velocity (aggregate demand [AD] curve), reveals that much of the slowdown in Japanese inflation is because of the lack of the postwar acceleration of "productivity-based" real wage inflation, which is not explained within a traditional demand-oriented formulation. Part of the efficacy of the productivity-based real wage acceleration we identify is related to this simpler formulation of the AD curve, which has an inherent analytical bias toward supporting the role of monetary policy, although this is permissible when the focus is on the decisions of suppliers (see, e.g., Lucas [9] for a justification). The literature relating to this paper dates back to analysis conducted during the rapid growth period in Japan. In 1986, Hamada and Kurosaka [10] shifted the emphasis from downward wage rigidity to upward wage flexibility. Comparing Japanese data with that of the U.S. and three other countries, they highlighted the upward volatility of wages in Japan, a feature consistent with the commonly held view of the Japanese labor market. This could also be a way for us to include wages in Gordon's triangular Phillips curve, creating a modified Phillips curve. In fact, Hamada and Kurosaka's findings on the Japanese labor market, namely that real wage flexibility accommodated productivity changes during the period 1971-75 and up until 1983, are consistent with the real wage-setting function we include as a missing element in the standard (triangular) Phillips curve equation. In contrast, Kuroda and Yamamoto [11] have focused on downward, not upward, wage flexibility. They also decomposed the change in the nominal rather than the real wage. The quasi-Phillips curve they plotted suggests a malfunctioning of the Japanese labor market in recent years, and is consistent with the approach taken in this paper. The standard Phillips curve equation also appears in Nishizaki et al. [12], which indicates that a decline in price expectations and import prices, the negative output gap, and a higher exchange rate all account for the long-lasting, mild deflation. Ashiya's [13] analysis also provides preliminary insights into the price dynamics, which we extend in a more general framework using Okun's coefficient (see Appendix B). We extend this Phillips curve relation to an AS relationship, which, when combined with the AD curve, provides a theoretical underpinning for the behavior of price and output in the Japanese economy, and thereby helps to explain the slowdown in inflation. We maintain the generality of Ashiya's findings on the role of price expectations in the postwar (mature) Japanese economy in our AS−AD framework. In a somewhat different analysis, Sudo et al. [14] used daily scanner price data to decompose price changes, finding that retail shops (or suppliers) play an important role in Japanese price dynamics.

1 Gordon [1] revealed that the new index of productivity dramatically improves the explanatory power of the inflation equation for the Phillips curve.

2 The constant NAIRU setting can also be justified by OECD-sponsored studies of various countries (see, e.g., the study of 21 OECD countries by Richardson et al. [4] and the OECD update after 2008 by Guichard and Rusticelli [5]); this is also in line with prior research on Japan's Phillips curve undertaken by Nishizaki [6]. For unsatisfactory outcomes from the time-varying NAIRU model, see, e.g., Apel and Jansson [7] and Laubach [8].
This focus on suppliers supports our emphasis on the supply side of the economy. Our findings are also close to Urasawa [15], in that the translation of productivity gains into adequate wage growth and demand may help resolve Japanese deflation. Urasawa estimates a structural vector autoregressive model using productivity, wage costs, and prices, similar in many ways to the modified Phillips curve we extend to an AS−AD framework. We can reasonably expect the acceleration of productivity-based real wage inflation in the postwar boom economy, but this becomes increasingly difficult in Japan's currently mature economic state. One consequence is that the role of economic policy in externally affecting price expectations has become more significant, exactly as the modified AS−AD equilibrium suggests. The remainder of the paper is structured as follows. Section 2 details the formulation of the model and Section 3 discusses the data. Section 4 presents the estimation results and Section 5 elaborates upon the theoretical underpinnings. Section 6 concludes. In Appendix A, we summarize the estimation results of the Phillips curve model with the time-varying NAIRU setting, and in Appendix B we do the same for Okun's coefficient. Estimation Models The estimation models for the inflation-unemployment dynamics (Phillips curve) and the real wage inflation-unemployment dynamics (quasi-Phillips curve) are as follows. To estimate these constant NAIRU models, it is adequate to use ordinary least squares (OLS); the NAIRU is then calculated from the estimated constant term and the unemployment coefficient in (1). The specification and estimation results of the time-varying NAIRU model are in Appendix A. Model 1 (Phillips curve equation with constant NAIRU setting): the dependent variable is the level of price inflation, and the regressors are the unemployment rate, energy prices, and productivity, together with an error term. Our specification follows the reasoning in Gordon [1]. We adopt four lags for the dependent variable and three lags for energy, which together capture a small amount of price inertia. The level of price inflation is included using four terms in each summation component. The productivity variable is the eight-quarter change in the trend rate of productivity growth (see Gordon [1], Dew-Becker and Gordon [3]) and is already smoothed, which is why we omit lags for this variable. In a general form, the model also includes additional supply-shock variables, with lag polynomials that specify the dynamics of the price behavior. For details, see Gordon [1], [16], and [17]. To obtain (1), we first specify the lag lengths, the polynomials, and the supply-shock variable. Then we assume that the NAIRU is constant. With this assumption, we can extract the NAIRU term from the unemployment-gap expression in (3) and substitute a constant term for it, transforming (3) into (1), a simpler form of the Phillips curve equation with a constant NAIRU setting. Model 2 (Quasi-Phillips curve explaining real wage-setting behavior): here the dependent variable is real wage inflation. We specify the real wage-setting behavior in the simplest form, which provides us with a step toward a modified Phillips curve. We estimate both Models 1 and 2 for Japan and the United States using data between 1972:Q1 and 2014:Q4, divided into two parts: 1972:Q1 to 1991:Q4 and 1992:Q1 to 2014:Q4.
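As a minimal illustration (our sketch, with hypothetical variable names and synthetic data rather than the series described in the Data section), Model 1 can be estimated by OLS with Newey-West HAC standard errors, following the lag structure above (four lags of inflation, three of energy) and the Bartlett-kernel bandwidth of 4 reported later in the table notes:

```python
# Sketch: estimating a constant-NAIRU Phillips curve (Model 1) by OLS with
# Newey-West HAC standard errors (Bartlett kernel, bandwidth 4).
# Variable names and the synthetic data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 172                                    # 1972:Q1-2014:Q4, quarterly
df = pd.DataFrame({
    'infl': rng.normal(1.0, 1.5, n),       # CPI inflation (% change, y/y)
    'unemp': rng.normal(3.5, 1.0, n),      # unemployment rate
    'energy': rng.normal(0.0, 3.0, n),     # energy CPI inflation
    'prod_accel': rng.normal(0.0, 0.5, n)  # 8-quarter productivity trend change
})

# Four lags of inflation, three lags of energy, as in the specification.
for k in range(1, 5):
    df[f'infl_l{k}'] = df['infl'].shift(k)
for k in range(1, 4):
    df[f'energy_l{k}'] = df['energy'].shift(k)

df = df.dropna()
regressors = ([f'infl_l{k}' for k in range(1, 5)]
              + ['unemp']
              + [f'energy_l{k}' for k in range(1, 4)]
              + ['prod_accel'])
X = sm.add_constant(df[regressors])
res = sm.OLS(df['infl'], X).fit(cov_type='HAC', cov_kwds={'maxlags': 4})
print(res.summary())
```

The constant NAIRU would then be recovered from the estimated constant and the unemployment coefficient, as described above.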
The first part reflects the Japanese economic boom prior to the bursting of the bubble economy, and the second part tracks the long recessionary period that has followed. We use 1992:Q1 as the breakpoint for this sample; see Krugman [18] for the justification. Data We focus on consumer price inflation. For Japan, we use the consumer price index (CPI) data for all consumers and all items excluding fresh food, which is the targeted index (from 2013:Q2) or guide index (up to 2013:Q1) for the inflationary trend the Bank of Japan tracks. For the U.S. we use CPI data for urban consumers only, which means that fresh food prices are included but, unfortunately, not all areas of the U.S. are surveyed, even though it is the most relevant variable for our comparison (see Figs. 1 and 2). To emphasize firm price- and wage-setting behavior as represented by the Phillips curve (see Section 5), we do not subtract the inflationary effects of the sales tax hikes from either CPI series. Consequently, we can compare the estimation results for Japan and the U.S. We support this choice on the grounds that firms rationally consider the upward pressure of sales tax hikes on wages and prices. Moreover, the economic logic behind the Phillips curve also supports the use of these CPIs. All data are quarterly, which coincides with the frequency of the release of the national accounts (particularly GDP, used in the calculation of the new productivity measure). The raw price inflation data for Japan are the percentage changes in CPI from the same month one year previously and, for the U.S., the seasonally adjusted monthly series of CPI, which we convert into percentage changes. Consequently, in this paper we estimate the relationship between the percentage change in CPI and the level of unemployment, unlike Gordon's work, which employs first differences in the logarithm of CPI. We mainly follow Gordon [1] in most aspects of our analysis, but attempt to improve the performance of our models using the percentage change rather than the logarithm of CPI. We also specify total unemployment, not unemployment excluding long-term unemployment as in Gordon [1]. The reason is that the short-run unemployment data for the U.S. are very similar in definition to the Japanese total unemployment data.3 We source the Japanese unemployment rate from the labor statistics and that for the U.S. from the current employment statistics. As the raw data are monthly, we adjust them to quarterly values to make them consistent in frequency with productivity, for which we specify quarterly data (particularly for GDP) extracted from the national accounts. This is also consistent with the frequency of the data we collect for inflation and unemployment as described above. Productivity, which is the focus of this analysis, is also defined as per Gordon [1].

3 The level and variance of unemployment in Japan is generally quite small. See, e.g., pioneering work by the U.S. Department of Labor [19] and Shiraishi [20], and the review in Hamada and Kurosaka [21]. By contrast, for U.S. unemployment, Cao and Shapiro's [22] decomposition clearly shows that the variance of short-run unemployment, which they obtain by excluding long-run unemployment from total unemployment, has been relatively small. From the viewpoint of variance, Japanese total unemployment and U.S. short-run unemployment seem very close by definition, and therefore play similar roles in Phillips curve estimation.
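A minimal sketch (our illustration, with synthetic placeholder data) of the productivity trend growth acceleration variable defined immediately below, namely the Hodrick-Prescott trend of quarterly productivity growth with smoothness parameter 6400, minus its value eight quarters earlier:

```python
# Sketch: constructing the productivity trend growth acceleration variable
# (HP-filtered trend of productivity growth, lambda = 6400, minus the same
# trend eight quarters earlier). The data here are synthetic placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(1)
idx = pd.period_range('1972Q1', '2014Q4', freq='Q')
prod_growth = pd.Series(rng.normal(2.0, 1.0, len(idx)), index=idx)

cycle, trend = hpfilter(prod_growth, lamb=6400)   # HP decomposition
prod_accel = trend - trend.shift(8)               # eight-quarter change in trend
print(prod_accel.dropna().head())
```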
We define productivity as the productivity trend growth acceleration variable, which is the Hodrick-Prescott filtered trend in productivity growth (using 6400 as the smoothness parameter) minus the same trend eight quarters earlier (see Dew-Becker and Gordon [3]). For the same reason as for the CPI variables, we specify the productivity trend as percentage changes, not logarithms (see Fig. 3). The energy variable we specify is also consistent with Gordon's [1] methodology and is the energy CPI for Japan and the U.S., both from the OECD database. The same database provides the average hourly wage index, for both Japan and the U.S., as the real wage variable. Estimation Results Tables 1 and 2 provide the estimates of the inflation-unemployment and real wage inflation-unemployment dynamics using Models 1 and 2, respectively. Both tables reflect our focus on the effects of the productivity variable. The estimates in Table 1 illustrate two remarkable results. One is the difference in productivity effects between the first and second periods for both countries, and the other is the difference in results between Japan and the U.S. The third column in Table 1 reports the estimates for Japan for the period between 1972:Q1 and 1991:Q4, with a positive regression coefficient on the productivity variable (3.014), which, although weak, contradicts the price theory represented by the Phillips curve. Furthermore, in the succeeding period between 1992:Q1 and 2014:Q4, the coefficient on the productivity variable in the fifth column becomes negative (−0.575). Notes: OLS estimates with heteroscedasticity and autocorrelation consistent (HAC) standard errors and covariance (Bartlett kernel, Newey-West fixed bandwidth = 4.0000). t-statistics in parentheses. *, **, and *** indicate that a coefficient or sum of coefficients is statistically significant at the 10, 5, or 1 percent level, respectively. As shown, the estimated coefficient for Gordon's [1] productivity variable fits the U.S. Phillips curve well at the 10 percent level, with a large magnitude. Notes: OLS estimates with HAC standard errors and covariance (Bartlett kernel, Newey-West fixed bandwidth = 4.0000). t-statistics in parentheses. *, **, and *** indicate that a coefficient or sum of coefficients is statistically significant at the 10, 5, or 1 percent level, respectively. The results represent the additional productivity effects on the real wage-setting behavior of the firm. The dependent variable is the rate of change in the average hourly earnings index (MEI) extracted from the OECD database. The presence of productivity-based real wage acceleration is suggested for Japan in the first period (1972:Q1 to 1991:Q4), which reveals that the productivity variable may have been exerting upward pressure on real wage inflation. In the same period, the regression coefficient of the original Phillips curve on the same productivity variable is weakly positive. For Model 2 (the quasi-Phillips curve model), we select the change in the index of average hourly earnings (MEI) from the OECD database as the measure of real wage inflation. We fail to reject the hypothesis that the coefficient on the productivity trend change in (1) is zero, even at the 10% level, in both periods for Japan. However, the U.S.
results (also reported in Table 1), combined with supporting evidence on the productivity effects that exert upward pressure on the real wage rate (see the third column in Table 2), suggest that the upward shift in the real wage-setting function took place in the first period. We discuss this later in terms of the specification of the real wage-setting function. In terms of the magnitude of the effects in Table 1, the productivity effects on inflation are very large for the U.S. (-23.752 in the first period and -11.862 in the second) relative to those for Japan (3.014 and -0.575). This relationship reverses in Table 2.4 Simply put, the productivity effects on real wage inflation for the U.S. are relatively small when compared with the effects on price inflation, and almost seem to disappear (-0.001) in the second period (sixth column in Table 2), although not significantly. Conversely, for Japan these effects seem to become weaker in the second period (-0.105), but are still significant. Importantly, the third column in Table 2 shows evidence of significant productivity growth for Japan in the period between 1972:Q1 and 1991:Q4. This period begins with positive and dramatic growth in the early 1970s, followed by the slow growth associated with the oil crises of 1973 and 1979, the appreciation of the yen against the dollar after the Plaza Accord of 1985, and the growth associated with the "bubble" economy. This reveals a certain level of pass-through of the productivity trend change onto real wage inflation in the postwar Japanese economy. In the next section, we model this as the real wage-setting function and incorporate it into our modified Phillips curve. The remaining results are as follows. First, as shown in the fourth and sixth columns in Table 1, the U.S. displays a weak negative relationship between inflation and unemployment. Comparing the U.S. and Japan for the first period, the magnitude of this relationship is -0.163 for the U.S. and -1.151 for Japan. The Phillips curve correlation for the U.S. is then only about one-tenth of that for Japan before the bubble economy burst. In contrast, in the second period the two countries are much closer, although still with a negative relationship: -0.109 for the U.S. and -0.161 for Japan. As shown, the Phillips curve analysis reveals a dramatically reduced reaction of prices to a decrease in unemployment in Japan, resulting in an estimate for the second period similar to that for the U.S. Second, as shown in both Tables 1 and 2, the energy effects are weak, and this is consistent with the findings in Blanchard and Galí [23], who note the declining pass-through of changes in energy or oil prices to overall inflation. The magnitude of these effects is extremely low for Japan compared to the U.S., which likely reflects international differences in resource endowment (e.g., oil) and relates to the insights in Bruno [24]. Theoretical Underpinnings for Seemingly Contradictory Results In this section, we formulate a modified Phillips curve theory and use it to explain the contradictory findings in Gordon [1]. The new theory includes supporting evidence on real wage behavior.

4 Using a time-varying NAIRU setting, Gordon [1] argued that this productivity effect had a minor (but noticeable) influence (see Table 1 in Gordon [1], p. 24), which is more consistent with our estimation results in Table A1 in Appendix A. However, we do not consider further the difference in results between the time-varying and constant NAIRU models.
We also extend this to a more general form of the AS curve, which, in conjunction with the AD curve, yields a modified AS−AD framework. General Settings Our theoretical model adopts a conventional production function with diminishing returns, which without any additional settings would generate the countercyclical movement of real wages-as employment increases (or conversely, the unemployment rate decreases), the real wage rate decreases (see Akerlof et al. [25], and comments by Mankiw [26] on this). 5 Importantly, this fact counters our empirical findings (see Table 1), although our results are limited to the specific period between 1972:Q1 and 1991:Q4, In this period, we confirmed the contradictory findings for Japan's Phillips curve, and also found a cyclical real wage, which increases as employment/output increases. We start by specifying the real wage-setting function, ( , , ) : . This incorporates the empirical economic characteristics into our theoretical analysis in its simplest triangular form. The term denotes the real wage rate and the rate of change in energy prices. We assume that real wage inflation is sensitive to the business cycle, and is affected by imperfections in both labor and product markets. The term works as a productivity-based factor with an external accelerator , which governs the magnitude of . The term 1⁄ represents downward wage rigidities. More precisely, as increases, the rate of decrease in declines. , energy, is multiplied against these two factors. Table 3. The Relationship between Productivity and Real Wage Inflation Governed by the "Accelerator" Notes: Despite these unsatisfactory findings, our focus is on the sign of the regression coefficient of the productivity variable. Furthermore, we do not discard such an insignificant and contradictory finding on the productivity coefficient. The reason is that we take this as a reflection of the overall economy over time and not as any changes for the specific estimation period between 1972:Q1 and 1991:Q4, which corresponds to the first part of our sample period. As emphasized, the term (Dew-Becker and Gordon-type productivity trend change) may coincide with the slope of the production function at the marginal level; this is the marginal product of labor, which may coincide with the real wage rate. Therefore, we specify as the base of the productivity effect on the real wages, and combine it with , a positive parameter that increases the acceleration of real wage rates. We assume that as increases, results in more upward pressure; the -governed shifts of the real wage-setting function is included in nonlinear form (see Table 3). A large indicates a positive economic state, or even rapid economic growth of the economy, and replicates the cyclical movement of real wages suggested by our results. The real wage sometimes becomes countercyclical (when = 0 or 0 < < 1 ), and sometimes moves cyclically with output growth (when > 1). Originally, Dew-Becker and Gordon [3] introduced the new definition of the productivity variable to improve the tracking of the effects of productivity growth (see Gordon [1]). However, we use the same variable to arrive at the simplest possible understanding of the Japanese Phillips curve. By replacing marginal labor productivity with , the new measure of productivity (or marginal product of labor), makes it possible for us to extend the textbook specification into a specific framework that could help us explain Japanese deflation. Related work by Akerlof et al. 
[25] adopts the ordinary definition of the same variable (labor productivity) to explain the U.S. data, but as we found using the triangular model in Table 2, the U.S. evidence does not generate a contradictory (positive) productivity coefficient. These specifications allow the term to work as a productivity-related real wage accelerator. However, this holds only if the parameter is sufficiently large, and the real wage-setting function as a whole shifts upwards by an amount sufficient to cancel out the downward pressure that arises from the model setting with diminishing returns. This shift, although in the context of diminishing returns, should exert the same upward drift on the Phillips curve, as confirmed by Akerlof et al. [25]. 6 The Derivation of the Phillips Curve One of the usual ways to derive a Phillips curve is to 6 They did so under the assumption of constant returns, and the additional term that specifies the shift in expected unit labor costs arising from downward wage rigidity. consider the actions of the representative firm and start from the firm's profit-maximizing behavior. Therefore, we first consider the simplest possible behavioral assumption, namely, the representative firm model in a single competitive market. Within this setting, we do not consider the difference between absolute and relative prices. Later, we extend this to a more conventional setting within a monopolistically competitive market. Therefore, the first equation we consider is the demand for labor by profit-maximizing competitive firms, which corresponds to the first postulate of classical theory as follows: where denotes the money wage rate, the price of the firm's output, and the labor unit. The term ′( ) is the derivative of the production function ( ), or the marginal product of labor. Then, with the use of price expectations, we transform (4) into the price-setting equation as follows: where denotes the expected price level and the real wage rate. We have replaced ′( ) in (4) with , the Dew-Becker and Gordon-type productivity to obtain (5). Next, we consider firm behavior in a monopolistically competitive market. The simplest way is to assume unit elastic demand for a firm's output, which is dependent on both the price of the firm's output and the average price in the economy. By introducing the markup term , which is defined as = 1 + using , the markup rate, we extend (6) as follows: where reflects the elasticity of demand, , and = /( − 1). The competitive firm has = 1, and the firm with some market power has a markup larger than 1 or = /( − 1) > 1 holds. With the assumption of unit elastic demand, we could treat as given and constant to highlight the suppliers' behavior in order to arrive at the simplest formulation of the Phillips curve equation. The above is the conventional specification, except that we replace the marginal product of labor with the new productivity variable , and this becomes crucial in the explanation of our seemingly contradictory results. The Modified Phillips Curve as a Possible Answer to Japanese Deflation The Modified Phillips Curve To derive the modified Philips curve, we include the real wage-setting function, specified above as = ( ) ⁄ , in (6) as follows: Then, the modified Phillips curve, or the inflation−unemployment dynamics that our findings support, uses three steps. 
First, replacing (the price of the firm's output) with (the average price in the economy) in (7), and second, taking the natural log of each side of this transformed equation, third, assuming = −1 and subtracting −1 from both sides gives: where denotes expected price inflation. Equation (8) suggests that if > 1 holds, then the productivity effects from the variable (productivity) will become positive. Recall the estimation results in Table 1. The third column in Table 1 shows the positive productivity effect. This can place upward pressure on price inflation, which corresponds to the first period between 1972:Q1 and 1991:Q4, during which there was an economic boom in Japan. This implies that Japanese inflation was once supported by the postwar acceleration of the productivity-based real wage inflation (theoretically, supported by the condition of > 1); the positive real wage effect would then pass through to price inflation, resulting in the contradictory positive coefficient of the productivity variable. Furthermore, (8) suggests the important role of price expectations. In a mature economy, it is reasonable to expect severe price competition, which results in the assumption of = 1. Under = 1, the term becomes zero, and thus places no pressure on the resulting price inflation. As we have already considered the role of , the new productivity, in placing upward pressure on price inflation, the final variable is , price expectation. As a spinoff of our theoretical examination, we arrive at the role of price expectation, which through economic policy can continue to influence even a mature economy. An Extension of the Modified Phillips Curve to the AS Relationship Now, we use (8), the modified Phillips curve, to derive the AS relation. First, we replace the term with − in (8), based on the negative relationship between , the rate of unemployment, and , the AS. This relation is implied empirically by our calculation of the Okun coefficient (see Tables A2 and A3), and theoretically by the model assumption of diminishing returns. We obtain the modified AS curve as: Then, recall that the reported coefficient of is negative (see Table 1), which is consistent with classical theory, and we obtain the positive coefficient of . That is, the upward-slope of the AS curve is implied by the stable Phillips curve relationship. If that relationship becomes ambiguous, then the AS relationship also becomes ambiguous and suggests we cannot confirm the effects of the expansion of AD. Completion of the Modified AS−AD Framework The modified AS curve demonstrates the role of productivity and the acceleration of productivity-based real wage inflation in the postwar Japanese economy. However, to completely understand the Japanese price dynamics, we need to introduce the AD curve. Given our emphasis on the supply side of the economy, the following simple form should be sufficient for this purpose: where denotes the money supply. This formula for the AD side of the economy is in the form of a quantity equation, which is consistent with Lucas [9] and subsequent studies such as e.g., Akerlof et al. [25]. To activate (10) as an AD curve, we set the normalization assumption regarding velocity ( = 1 ), or the log of velocity assumed to be zero ( = 0) as: Then, subtracting −1 from both sides of (11), the AD relationship is written as: The assumption of = 1 implies that the velocity does not affect or multiply the circulation of money. This is consistent with an economic state under an extremely low interest rate condition. 
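For concreteness, the AD block just described can be written out in one line. The following is a hedged reconstruction in which the symbol names (m, v, p, y for the logs of money, velocity, the price level and output, and π for inflation) stand in for characters lost in extraction; it only restates the quantity equation, the velocity normalization, and the subtraction of the lagged price level.

```latex
\underbrace{m + v = p + y}_{\text{quantity equation in logs}}
\;\xrightarrow{\;v = 0\;}\;
m = p + y
\;\Longrightarrow\;
m - p_{-1} = (p - p_{-1}) + y = \pi + y .
```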
In light of Abenomics, Japan's comprehensive policy approach, the most powerful "arrow" 7 is an aggressive monetary easing, which decreases the interest rate to the zero lower bound. However, as the interest rate falls even lower, the velocity also declines and approaches one. In addition, starting from February 2016, the Bank of Japan introduced a negative interest rate policy, which consequently clears the zero lower bound restriction relating to this formula. The Modified AS−AD Equilibrium To obtain the modified AS−AD equilibrium, we first remove the assumption = −1 to transform (9). The resulting AS curve becomes: Then, using (13), the modified AS curve, in conjunction with (12), the AD curve, we can obtain the solutions of the model as follows: This set of equations determines the modified AS−AD equilibrium. Equation (14) shows the equilibrium output and (15) shows the equilibrium inflation. Consider the case where > 1 holds. Recall that is an external accelerator which governs the magnitude of (the productivity-based factor in the real wage-setting function), which determines the real wage rate. Recall also the modified Phillips curve expressed as (13). From (13) (the modified AS curve), we find that if > 1 holds, the productivity coefficient will become positive. This is consistent with our seemingly contradictory findings in Table 1. Next, consider an economic state in which > 1 holds for (15). In this case, the equilibrium level of inflation will likely be positive. Finally, confirm whether output at the modified AS−AD equilibrium could expand. Equation (14) shows that where > 1 holds, productivity growth lowers the equilibrium level of the output, but we could avoid this if the money supply grew. This policy, which ensures a larger , means both equilibrium output and equilibrium inflation will be positive. This is why our findings suggest the decisive role of monetary policy to ensure the unconstrained effects on output growth. In these equations, governs the acceleration of real wage growth. If > 1 holds, then acceleration is expected. If < 1 holds, then acceleration does not take place. As we can explain our contradictory findings with > 1, then the seemingly contradictory results cited earlier in Table 1 are recognizable as fitting the modified theory, and therefore no longer contradictory. Moreover, this indicates that postwar acceleration of productivity-based real wage inflation existed in the first part of our sample period. This contrast with the second period, which is explained by < 1. No acceleration arises after the bubble economy burst, and this is revealed by the negative productivity coefficient in Table 1. We can ascribe that difference, at least according to our theory, to , which suggests that much of the slowdown in Japanese inflation is because of the lack of the postwar acceleration of productivity-based real wage inflation. Conclusions We have shown that a modified Phillips curve, which includes one of the missing elements of the standard Phillips curve (real wage behavior) in an implicit form, can explain Japanese inflation-unemployment dynamics over a 40-year period. The modified theory, in conjunction with a quantity equation with zero velocity, proved the simplest economic framework for representing the behavior of Japanese economy in response to shocks and government policies (particularly monetary policy). 
This framework, or the modified AS−AD equilibrium we identified, shifts the emphasis of the aggregate economy from the pure supply side (see, Ashiya [13]) to its interaction with the demand side, although the focus is still on the supply side of the economy and the specification of the AD curve is drastically simplified. We conclude that much of the slowdown in Japanese inflation has been because of the lack of the postwar acceleration of productivity-based real wage inflation unexplained within a traditional demand-oriented formula. We could reasonably expect the existence of this factor in a booming economy to place upward pressure on actual price inflation. However, this becomes more difficult in a mature economy as exists in Japan in recent decades. Consequently, the role of external economic policy to affect price expectations becomes more important. The main implication for policymakers is the need for money growth to secure the unconstrained effect on the output, which is consistent with the "first arrow" of Abenomics, Japan's comprehensive policy approach. Some may argue that part of the efficacy of the productivity-based real wage acceleration that we identified is related to the use of the simpler formulation of the AD curve, which has an inherent analytical bias supporting the role of monetary policy. However, it is permissible given the focus on the decisions of suppliers. The lower the rate of interest is, the lower the velocity becomes, all else being equal. This is consistent with our setting of zero velocity in the analysis of Japan's deflationary economic state as the Japanese central bank now keeps the policy rate near zero. Practically, this primary factor exerts upward drift on the Phillips curve, a feature contrasting with Akerlof's et al. [25] formulation of the Phillips curve, which specifies the shift in expected unit labor costs arising from downward wage rigidity. Moreover, our estimation results ( Table 2) provide evidence that support our finding that upward flexibility in the real wage rate, and its acceleration along with productivity growth, is the solution to Japan's deflationary problem. This counters critics arguing our primary factor of productivity-based real wage acceleration is problematic. Although the historical data varies, the general trend is upward during our first sample period, especially close to periods of rapid growth. It is an open question whether money growth or monetary policy is effective in bringing inflation up to the target level. Sims's [27] idea of the use of fiscal policy when the economy suffers from extremely low or negative inflation inspired the fashion of Abenomics (Hamada [28]). We ignore this aspect of the underlying modified AS−AD model in this analysis. A more detailed specification of the AD side of the economy, e.g., at least including government expenditures, is necessary before any concrete recommendations on how to address the deflationary problem in Japan. There is a further question of whether the real wage moves cyclically under the assumption of diminishing returns. Our approach is to set an exogenous productivity-related real wage accelerator. The volatility of the accelerator, or whether its value is greater than one, is thought to be dependent on the trend or phase (boom or not) of the economy. This is consistent with Hamada and Kurosaka's [10] findings on the link between the economic state and the trend in the change of the real wage rate. 
The puzzle that relates to Okun's coefficient can also help explain this, as its value generally exceeds one as expected, which is consistent with increasing returns. Note that the conventional setting with diminishing returns would not support the implication of Okun's law. Thus, a remaining issue is to elucidate the contradictions between commonly held economic law and conventional theoretical issues; part of this could be the calculation of the Okun coefficient and its resulting volatility. Finally, our analysis of the deflationary state in Japan has a direct bearing on the reduced form, triangular Phillips curve literature that examines the dynamics of the inflation−unemployment relationship. As in Gordon [1], we treat the productivity variable as a Dew-Becker and Gordon-type productivity variable [3]: the value of the productivity trend growth acceleration being equal to the Hodrick-Prescott filtered trend of productivity growth minus that trend eight quarters earlier. This evolution of the productivity variable has led us to posit our modified AS−AD theory based on a modified supply-side analysis. Table A1 provides the estimation results of the Phillips curve with a time-varying NAIRU using Model A1. The specification is given next to Fig. A1. Notes: Estimated by maximum likelihood (Marquardt). z-statistics in parentheses. *, **, and *** indicate the coefficient or sum of coefficients is statistically significant at the 10, 5, or 1 percent level, respectively. The coefficients of the lagged dependent variable and energy effect variable are estimated separately in the state space. The variance of the NAIRU is fixed at 0.001. It is well known that the size of this variance affects the estimation results, and this is why we only present these outputs as an additional source. See, e.g., Laubach [8], who estimates the time-varying NAIRU models in a similar context by setting those variances at, e.g., 0.026 (for the U.K.), 0.006 (for Germany), or even zero (for Canada, Australia, France, and Italy), and so on. Gordon's [1] productivity variable was found to fit both the U.S. and Japanese data very well at the 1 percent level, although the robustness of the results could not be confirmed for the Japanese estimates. To obtain this model, we follow the common specifications widely used in the Phillips curve literature. First, we define the movement of the NAIRU, which is an unobserved component, as a random walk without drift, expressed as (A2). We also define the gap between the unemployment rate and the NAIRU (the unemployment gap) as a stationary AR(1) series, expressed as (A3). We also adopt Watson's [2] decomposition of the observed series, as in (A1). See, e.g., Ebeke and Everaert [32] for a stable specification of the time-varying NAIRU model; concerning the NAIRU specification in this context, see, e.g., King et al. [33], Staiger et al. [29], and Gordon [16]. Laubach [8] shows that calculations with or without drift do not affect the performance of the NAIRU estimates, even if a country has an unemployment rate with an upward trend. We extended our time-varying NAIRU model to provide more information on the specification of the NAIRU. That is, we tried adding drift or the delta of the NAIRU as explanatory variables, or changing the specification of the unemployment gap, which reflects Okun's relation and the corresponding demand-side shocks that represent business cycle fluctuations.
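The baseline unobserved-components block (A1)–(A3) described above, i.e., the observed unemployment rate decomposed into a random-walk NAIRU and a stationary AR(1) gap, can be set up directly in statsmodels. This is only a sketch of that block under stated assumptions (the full Model A1 also carries the Phillips curve observation equation with the lagged dependent variable and the energy term, which is omitted here); the data-loading step is hypothetical and the attribute layout follows recent statsmodels releases.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical quarterly unemployment series:
# u = pd.read_csv("unemployment.csv", index_col=0, parse_dates=True)["u"]

def nairu_gap_decomposition(u: pd.Series):
    """Watson-style decomposition u_t = NAIRU_t + gap_t with a random-walk
    NAIRU (A2) and a stationary AR(1) unemployment gap (A3)."""
    model = sm.tsa.UnobservedComponents(
        u,
        level="rwalk",       # random walk without drift -> time-varying NAIRU
        autoregressive=1,    # stationary AR(1) component -> unemployment gap
    )
    res = model.fit(disp=False)
    # To mimic the fixed NAIRU variance of 0.001 mentioned in the notes, the
    # level variance could instead be constrained, e.g.
    # res = model.fit_constrained({"sigma2.level": 0.001})  # name may vary by version
    nairu = pd.Series(res.level.smoothed, index=u.index)
    gap = u - nairu
    return nairu, gap, res
```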
However, these extensions still provided unsatisfactory results in terms of the limited variance of the NAIRU movements. Table A2 summarizes the estimation results for Okun's coefficient. We employ the simplest relationship between the change in the unemployment rate and GDP growth to obtain these values. This simplification does not change the nature of the analysis, but makes it possible to confirm our general understanding of Okun's law. Table A2. The Okun Coefficient Notes: Calculated using the estimation results of Δu = α + β·g + ε (Table A3), where Δu is the change in the unemployment rate and g is GDP growth. The simplest way to arrive at the Okun coefficient is by using these estimation results to obtain the value of −(1/β). The level and volatility of the Okun coefficients could be the source of the upward shift of the wage-setting function. Table A3. The Simplest Source for the Okun Coefficient Notes: OLS estimates with HAC standard errors and covariance. The level and volatility of the Okun coefficients could be the source for the upward shift of the wage-setting function. t-statistics in parentheses. *, **, and *** indicate the coefficient or sum of coefficients is statistically significant at the 10, 5, or 1 percent level, respectively. The results are for the simplest calculation of the Okun coefficient. The dependent variable is the change in the unemployment rate.
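As a companion to Tables A2 and A3, the calculation can be scripted in a few lines. This is a sketch under the reconstruction above (Δu regressed on GDP growth with HAC standard errors, and the Okun coefficient taken as −(1/β)); the input series are hypothetical and assumed to be aligned quarterly data.

```python
import pandas as pd
import statsmodels.api as sm

def okun_coefficient(du: pd.Series, g: pd.Series) -> float:
    """Estimate du = alpha + beta*g + e with HAC (Newey-West) standard errors
    and return the implied Okun coefficient -(1/beta)."""
    X = sm.add_constant(g.rename("g"))
    res = sm.OLS(du, X, missing="drop").fit(cov_type="HAC",
                                            cov_kwds={"maxlags": 4})
    beta = res.params["g"]
    return -1.0 / beta

# du: quarterly change in the unemployment rate; g: quarterly real GDP growth.
# okun = okun_coefficient(du, g)
```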
9,601.2
2017-10-01T00:00:00.000
[ "Economics" ]
Deep Learning Based Instance Segmentation of Titanium Dioxide Particles in the Form of Agglomerates in Scanning Electron Microscopy The size characterization of particles present in the form of agglomerates in images measured by scanning electron microscopy (SEM) requires a powerful image segmentation tool in order to properly define the boundaries of each particle. In this work, we propose to use an algorithm from the deep statistical learning community, the Mask-RCNN, coupled with transfer learning to overcome the problem of generalization of the commonly used image processing methods such as watershed or active contour. Indeed, the adjustment of the parameters of these algorithms is almost systematically necessary and slows down the automation of the processing chain. The Mask-RCNN is adapted here to the case study and we present results obtained on titanium dioxide samples (non-spherical particles) with a level of performance evaluated by different metrics such as the DICE coefficient, which reaches an average value of 0.95 on the test images. Introduction In many cases, the estimation of particle size distribution of a nanoparticle population remains a major challenge for the industrial development of nanomaterials. Today, scanning electron microscopy (SEM) is widely used in laboratories and manufacturing industries and is considered in metrology as a reference technique capable of reliably determining the size, size distribution and shape of nanoparticles. It is a so-called direct technique because it is based on direct observations and the measurement result is directly traceable to the SI unit of length, the meter [1]. The basic principle of image analysis is: (i) identifying the contours of each nanoparticle using automatic or manual tools and (ii) determining the value of different measurands (equivalent surface diameter, Feret diameter, etc.) from surface or profile measurements. However, the reliability of the measurement is mainly related to the performance of the segmentation algorithm used to identify the nanoparticle edges. However, the determination of this contour is complicated by the natural phenomenon of agglomeration, which tends to form 3D particles bunches ( Figure 1). This paper proposes a methodology to automate the size characterization of titanium dioxide particles imaged by SEM. Titanium dioxide (TiO 2 ) in nanoparticulate form is produced in very large quantities for intensive use in many applications (food, paint, construction products, etc.). However, the study of the dimensional properties of titanium dioxide particles remains challenging because of their non-spherical shape and their ability for agglomeration. Due to this complexity, the characterization of this type of content is not robust and is frequently performed manually by experts in nanometrology. This task is excessively long and tedious, hence the interest in automating, even partially, the current processing chain is growing. The methodology presented in this paper relies on deep learning algorithms. Indeed, the literature in the field of deep learning has been flourishing for a few years, offering new perspectives for improvement in many areas. In particular, these recent tools have proven their effectiveness in many computer vision tasks, for which they have, in many cases, surpassed and, when appropriate, at least equaled the performance of state-of-the-art algorithms. 
It should be noted, however, that the major limitation to the dissemination of these algorithms has been for a long time the size of the database that had to be created in order to drive these deep networks. However, this limitation has been reduced with the development of transfer learning, which allows for use pre-trained networks, and then to train them specifically on our task with a reduced database. Starting from images acquired by scanning electron microscopy, the processing chain leading to the particle size characterization of titanium dioxide, i.e., the calculation of the equivalent diameter of each particle in the image, involves four major tasks. The first is the instance segmentation (individual particle region), followed by the steps of classifying the state of agglomeration of each particle, then, completing the partially visible particles [2], and finally computing the equivalent diameter of each particle. Even if each step must be the subject of a particular attention, the main task remains obviously the segmentation because the other processing steps depend directly on these initial performances. The learning based segmentation algorithm and its adaptation to the case study are detailed in the Sections 2 and 3. Section 4 focuses on the analysis of the performances of the algorithm compared to human achieved segmentation. Deep Learning Based Instance Segmentation Image segmentation is a long time challenge for the computer vision community and many different approaches were developed. These problematics are essential in the field of nanometrology where the studies of particles are based on microscopic imaging such as Scanning Electron Microscopy imaging (SEM) or Transmission Electron Microscopy imaging (TEM). We will focus in this section on the various strategies of automation of image segmentation. A first approach of image segmentation is based on classic image processing techniques, among which we can find watershed methods and its variants and adaptations [3][4][5], his-togram based methods [6], graph-cut methods [7], or active contour methods [8]. These techniques present the advantage to be efficient but need a strong parameter tuning in order to make them work properly on specific images. Indeed, before applying any of these algorithms, it is necessary to apply numerous image processing techniques such as filtering, thresholding, morphological operations, ... All of these techniques require specific tuning to achieve high level instance segmentation accuracy on a given image. Thus, the major drawback of these methods is their lack of genericity. A second approach is based on software tools such as ImageJ [9], Ilastik [10] or self developed software such as SEMseg [5]. These types of software propose a wide range of functionalities allowing the user to process the image without implementing code or having a deeper knowledge of image processing algorithms and techniques, but does not offer a sufficient level of automation of the specific task to the end-users. A third approach is based on the development of image processing workflows, mixing general image processing methods and machine learning methods such as the EM algorithm [11] or the K-means algorithm [12]. These types of workflows show strong performances at the cost of parameter tuning (lack of genericity) or include strong prior hypothesis, such as shapes to be detected for example. 
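To make the parameter-tuning burden of these classical pipelines concrete, the sketch below shows a typical watershed-based instance segmentation of a greyscale SEM image with scikit-image. Every numeric value here (the smoothing sigma, the minimum peak distance) is exactly the kind of image-specific parameter discussed above and would need re-tuning from one acquisition to the next.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, feature, segmentation

def watershed_instances(image: np.ndarray, sigma: float = 2.0,
                        min_distance: int = 15) -> np.ndarray:
    """Classical watershed pipeline: smooth, threshold, distance transform,
    seed on local maxima, then flood. Returns a labelled instance map."""
    smoothed = filters.gaussian(image, sigma=sigma)
    binary = smoothed > filters.threshold_otsu(smoothed)
    distance = ndi.distance_transform_edt(binary)
    peaks = feature.peak_local_max(distance, min_distance=min_distance,
                                   labels=binary)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return segmentation.watershed(-distance, markers, mask=binary)
```

These knobs rarely transfer across magnifications or particle chemistries, which is precisely the lack of genericity that motivates the learning-based methods described next.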
The final category of methods is based on deep-learning algorithms which we can separate on two distinct sub-categories: semantic segmentation and instance segmentation methods. Semantic segmentation methods are mainly using auto-encoders network such as UNet [13]. These algorithms are powerful, generic and highly accurate. These methods propose to recreate the input image as a segmentation map where each pixel represents a class label of the detected object. While semantic methods are adapted to numerous cases of study such as medical images [14,15], mineral characterization [16] or meteorology [17], they can not be applied to our images, where TiO 2 particles tend to agglomerate. Therefore, instance segmentation methods seem to be adapted to our case study and the specificity of the acquired TiO 2 agglomerate images. These methods offer genericity, a separate detection of every instance and robustness, but need a large amount of training data, which can be difficult to obtain. Indeed, some of the TiO 2 particle SEM images can show more than 500 different instances which need to be manually segmented. The main algorithm used for instance segmentation is an algorithm developed by the Facebook Research Team (FAIR) called "Mask-RCNN" [18]. It has been tested in various fields such as medical image analysis (brain tumor detection [19], nucleus detection in microscopy [20], detection of lung nodule [21]), satellite images analysis [22], very high spatial resolution aerial imagery [23] or astronomy [24] to name but a few examples. This algorithm showed also promising results on STM images of nanoparticles [25]. The Mask-RCNN has become a standard in the deep learning community, being both generic and efficient. It can be seen as a two-stage algorithm: the first stage generates candidate object bounding boxes and the second stage predicts the class, bounding box, and binary mask for each region of interest (ROI). The convolutional backbone performs feature extraction over the whole input image and the head part ensuring both bounding-box recognition (classification and regression) and the prediction of the binary masks. A specific network called the Region Proposal Network (RPN), as its name indicates, computes the region proposals by using a sliding window (called anchors) technique directly on the feature maps (backbone outputs) to generate the bounding boxes. The proposed bounding boxes, having different sizes, are mapped to fixed spatial resolution using standard bi-linear interpolation to produce processed feature maps. After this, a more common pipeline is setup so that, for each ROI, fully connected layers (FC) serve to predict the class and to refine the associated bounding box, and, simultaneously, a fully convolutional network (FCN) produces binary masks. Residual learning network (ResNet) [26] is the standard backbone and it is coupled with a Feature Pyramid Network (FPN) [27] for improving the representation of the objects at multiple scales and in the meantime improving particularly the accuracy. For more insights on the algorithm, please refer to the original article [18]. Adaptation of the Mask RCNN for the Detection of TiO 2 Particles Measured by SEM The key step in any development using statistical learning is the creation of a functionspecific database from which the network will learn how to make predictions on new data, never processed by the network, hence the systematic breakdown of entries into a learning, testing and validation database. 
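Before detailing that database, it may help to see what the two-stage pipeline described above actually returns per image. The snippet below uses the off-the-shelf Mask R-CNN (ResNet-50 + FPN backbone) bundled with recent torchvision purely to illustrate the output format; the paper's own implementation is Keras/TensorFlow based, and the random tensor stands in for a preprocessed SEM image.

```python
import torch
import torchvision

# Off-the-shelf Mask R-CNN with a ResNet-50 + FPN backbone (COCO weights).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)   # placeholder for a normalized greyscale image
with torch.no_grad():
    outputs = model([image])      # list with one dict per input image

det = outputs[0]
# det["boxes"]  : (N, 4) refined bounding boxes from the RPN + box head
# det["labels"] : (N,)   predicted class per instance
# det["scores"] : (N,)   detection confidence per instance
# det["masks"]  : (N, 1, H, W) per-instance soft binary masks from the FCN head
```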
If the Mask-RCNN algorithm is very powerful for computer vision tasks, it is nonetheless resource-intensive, in other words, its training requires a very large image database (several million samples). In our case, it is simply unthinkable to manually segment so many images. Section 3.1 describes the functionspecific database created for the segmentation task and Sections 3.2 and 3.4 respectively present two approaches to use the Mask-RCNN with this reduced database, namely data augmentation and transfer learning. Section 3.3 focuses on hyper-parameters and network architecture adjustment to achieve high level accuracy in the predictions of the binary masks. Finally, Section 3.5 details the training strategy and the software and hardware specifications used during the training phase. Creating the Function-Specific Database The database built to perform the segmentation is currently composed of 77 images manually segmented by four operators trained by nanometrology experts representing 5947 particles of TiO 2 . These same experts then validated the masks thus produced before their incorporation into the database. The input of the algorithm is a tensor of size W × L × C × N: W and L correspond to the width and length in number of pixels of each SEM image (W = 2048, L = 1536), C corresponds to the number of channels of the image (C = 1, SEM measurement returns a greyscale image) and finally N represents the number of available samples (N = 77). The annotations for each image in the reference database are stored in a JSON file which allows for representing a class instance (here a particle in the image) by an identifier and a binary mask (corresponding to the reference segmentation produced by each operator). Figure 2 shows an SEM image of agglomerated titanium dioxide particles and its annotated agglomerates (manual segmentation). Data Augmentation Data augmentation is very common in statistical learning, especially when a small database is available. This step should improve the robustness of the algorithm and artificially enrich the initial database by applying mathematical transformations on the input images such as rotation, translation, cutting, ... The data enhancement proposed here is application specific. The main objective is to create "fake" data images as close as possible to real images without requiring any post processing to obtain the reference segmentation. We also want to avoid introducing any underlying logic in the generated images. Starting from the reference segmented images, each agglomerate of each segmented images is extracted to build a library of agglomerates ( Figure 3). For example, in Figure 2, three agglomerates are extracted. The data enhancement then consists of simulating new images by first randomly applying flipping and rotation to these clusters, and, second, randomly positioning these clusters of particles on different empty SEM backgrounds (with or without matrices, the matrix may be made up of other particles such as silicon dioxide particles SiO 2 ). It is important to note that, during the phase of placement, no superposition among agglomerates should occur. While superposing agglomerates can create a totally new configuration of TiO 2 particles, it also produces unrealistic borders. Indeed, agglomerate borders are usually spread over couple pixels due to the angle of incidence of the electron beam of the SEM (Figure 4). 
Thus, after inserting the first agglomerate in the frame, eight different sub-frames are "created" where the following agglomerate will be placed randomly ( Figure 5). The procedure continues until there is no more space available or when the maximum number of agglomerates to insert has been reached. Finally, a 5 × 5 median filter over all the agglomerates borders is applied in order to smooth the transition between the background frame and the inserted agglomerates. The proposed data augmentation procedure is coupled with more common data augmentation strategy such as randomly applying one or multiple transformations among horizontal flip, vertical flip, Gaussian blurring, contrast normalization, additive Gaussian noise, and pixel value multiplication. Figure 6 shows an example of a simulated image. Hyper-Parameters and Network Architecture Adjustments While the work of creating the database is tedious, the choice of network hyperparameters remains a matter for experts. Among the most common hyper-parameters in computer vision, one can list the learning rate, the optimization solver (Stochastic Gradient Descent (SGD), ADAM [28], Momentum, ...), the intrinsic parameters of each solver (regularization parameter, number of epochs, ...), the stopping criterion (when should I stop learning), .... For these rather generic parameters, it is advisable to rely on experts in the field and to read the good practice guides, review papers such as [29,30] and numerical comparisons [31,32] in order to decide on a set of relevant hyper-parameters. However, the task of titanium dioxide particle segmentation is very different from the tasks for which such algorithms are used. It will therefore be necessary to adapt this one accordingly. The SEM images under study involve a large number of particles in each image, sometimes several hundred or even thousands. However, the algorithm was initially developed to detect people, cars, etc... in other words, the number of class instances per image is drastically different from that of the original task. One of the first adaptations of the algorithm was therefore made to the Region Proposal Network (RPN). The objective of the RPN is to scan (by means of anchors) over all the feature maps extracted by the backbone network and scores the presence of object and estimates refining deltas for a better fitting of the anchor to the detected object. Anchors allow the network to detect multiple object of different sizes and scales. Figure 7 displays at one location the set of anchors specified by the hyper-parameters specifying the anchor sizes, the anchor aspect ratio. It is then necessary to adapt the sizes and aspect ratios of anchors to the objects we want to detect. We can also specify the number of trained anchors which must be of the same magnitude as the number of objects to detect and the anchor stride which specifies the number of pixel between two anchor positions. Thus, we adapt these hyper-parameters to the case study: • The sizes of anchors were modified according to the minimum and maximum size of TiO 2 particles: [8,16,32,64, 96] (in pixels); • The number of trained anchors was modified according to the maximum number of particle in one image: 1024; • The stride between two consecutive anchors was set to 1 due to the agglomeration phenomenon. The second part of adjustments was driven by the wish to improve the segmentation accuracy produced by the network. Indeed, this is the most critical aspect for our specific task. 
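In code, the anchor-related adjustments just listed might look as follows, assuming the widely used Matterport Mask R-CNN implementation (whose Keras/TensorFlow versions match those reported later). The attribute names are that implementation's; any value not stated in the text is left at its inherited default.

```python
from mrcnn.config import Config  # Matterport Mask R-CNN (assumed implementation)

class TiO2Config(Config):
    """Anchor-related adjustments for dense, small TiO2 particles."""
    NAME = "tio2"
    NUM_CLASSES = 1 + 1                      # background + TiO2 particle
    RPN_ANCHOR_SCALES = (8, 16, 32, 64, 96)  # anchor sizes (pixels) matched to particles
    RPN_ANCHOR_STRIDE = 1                    # dense anchors because of agglomeration
    RPN_TRAIN_ANCHORS_PER_IMAGE = 1024       # same order as the particle count per image
```

The second group of adjustments, aimed at the accuracy of the predicted masks, is described next.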
In order to do so, we modified the FPN Mask Graph, the network responsible to produce the final predicted mask. We simply augmented the "resolution" of the predicted mask by adding an additional transposed convolutional layer. This modification allowed to improve slightly the segmentation performances (improvement of 0.5 over the dice coefficient). Finally, we adjust several hyper-parameters of the algorithm to fit the specificities of our segmentation task, such as the number of maximum ground truth instance (corre-sponding to the maximum instance to detect in one image), the number of trained ROIs (corresponding to the number of ROIs to generate and to feed the head networks), and the use of mini-mask (SEM images have high resolution, it is therefore mandatory to work with mini-mask), the size of mini-mask roughly corresponding to the maximum size of a single particle (in our case, 96 pixels) and, finally, the mean pixel value for image normalization (calculated over all the training dataset). Other hyper-parameters linked with training strategy are detailed in Section 3.5. Transfer Learning Transfer learning is a deep learning technique allowing one to avoid training its network from scratch. Indeed, transfer learning consists of initializing the network weights from weights of another network (having the same architecture) but trained for another task. Then, it is only needed to fine-tune these weights according to the specific task. This technique is possible due to the fact that, during training, neural networks "learn" how to extract low-level features on shallow layers, while task specific features are extracted thanks to more deeper layers. The most common approach in transfer learning is to start the training by focusing only on the network heads (transferred weights in the firsts layers are kept fixed at this stage), and, then, after a chosen number of epochs, to train the whole network for the specific task (the whole weights are updated), in this case the detection of titanium dioxide particles. The weights transferred are from the algorithm trained on the MS COCO database [33]. This database contains 91 distinct object categories and nearly 2,500,000 instances annotated in 328,000 images. At this stage, it is difficult to decide how many samples are required in the learning process to achieve a given level of performance. Simply put, the size of the database is increased until a satisfactory level of performance is achieved. In our case, satisfactory means a segmentation close to what the human operator would do. Indeed, if the automatic segmentation is almost equivalent to what the human operator would do, a simple and very short correction pass will allow us to obtain an accurate segmentation of the particle in a reduced time. Our strategy of transfer learning in our application follows the common approach previously explained. Details about the training strategy are available in Section 3.5. Network Training This section details the training strategy of the Mask R-CNN and the software and hardware specifications used during the training process. Our train set was constituted of 699 images, from which 77 were "real" images and 622 were images produced via the data augmentation strategy. As explained previously, the network heads were trained during 38 epochs with a learning rate of 0.001, four epochs with a learning rate of 0.0001. Then, we trained the all network during 28 epochs with a learning rate of 0.001 and finally seven epochs with a learning rate of 0.0001. 
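With the Matterport-style API assumed above, that staged schedule would be expressed roughly as below. The objects dataset_train, dataset_val, MODEL_DIR, and COCO_WEIGHTS_PATH are assumed to be prepared elsewhere, and in this implementation the epochs argument is the cumulative epoch index to reach rather than an increment.

```python
import mrcnn.model as modellib  # Matterport Mask R-CNN (assumed implementation)

model = modellib.MaskRCNN(mode="training", config=TiO2Config(), model_dir=MODEL_DIR)
# Transfer learning: start from MS COCO weights, re-initializing the task-specific heads.
model.load_weights(COCO_WEIGHTS_PATH, by_name=True,
                   exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
                            "mrcnn_bbox", "mrcnn_mask"])

# Heads only: 38 epochs at 1e-3, then 4 more at 1e-4.
model.train(dataset_train, dataset_val, learning_rate=1e-3, epochs=38, layers="heads")
model.train(dataset_train, dataset_val, learning_rate=1e-4, epochs=42, layers="heads")
# Whole network: 28 epochs at 1e-3, then 7 more at 1e-4.
model.train(dataset_train, dataset_val, learning_rate=1e-3, epochs=70, layers="all")
model.train(dataset_train, dataset_val, learning_rate=1e-4, epochs=77, layers="all")
```

In this implementation the optimizer settings reported below (momentum, weight decay, gradient-norm clipping) live in the same Config object as the anchor parameters.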
Each epoch is made up of 698 steps, where each step uses two images per GPU during the head training phase and one image per GPU during the entire network training phase. We used a stochastic gradient descent optimizer with a momentum of 0.9 and with a gradient norm clipping of 5.0. We used L 2 regularization with a weight decay [34] of 0.0001. For more details about data augmentation strategy and transfer learning, please refer to the respective section. Our network was trained on a GPU NVIDIA GeForce RTX 2080 with 8 GB of memory (with the 26.21.14.4575 version of the driver). We used the 2.1.6 version of Keras with the 1.8.0 version of Tensorflow GPU library. We used the 7.6.5 version of Cudnn and the 9.0 version of Cudatoolkit. Test Set The test set is made of 19 images representing 3741 particles of TiO 2 . These 19 images are representing the different types of configuration we can encounter in our field of study. TiO 2 particles can be present in a form of aggregate, scattered or in the presence of matrix of other types of particles (for example, with a matrix of SiO 2 particles). Figure 8 displays three test images representing the different type of particle configurations. The different images also show a great diversity of particle layout (Figure 9) : • Isolated: the particle is completely imaged and located outside of an agglomerate, • Complete: the particle is completely imaged and located in or near an agglomerate, • Touch complete: the particle is completely imaged but interlocked with another particle (between the complete state and the masked state), • Masked: the particle is partly hidden by an other particle, • Unusable: the particle is masked by an other particle with a very small visible area (less than 40 percent of its area is imaged and, therefore, does not constitute an interest for our purpose of size distribution measurement) The following results will be displayed accordingly to these configurations. Result Analysis In order to evaluate the detection performance of the network, the mAP metric from the COCOApi was used. Then, the Sørensen-Dice coefficient [35] between the detected particle and the corresponding reference segmented particle is used in order to evaluate the segmentation performance of the network. Finally, a focus will be made on the impact of these metrics on the final measurands of the particles such as the projected diameter, the perimeter, the area, or the Feret diameter. As specified previously, our test set consists of 3741 particles of TiO 2 , showing a great diversity of configurations (isolated particles, complete particles, masked particles, ...), shapes and sizes. The equivalent projected diameter of our reference particles is between 2 and 128 pixels (some particles are almost entirely hidden by other particles inside agglomerates, they are classified as "unusable"). Table 1 explicits the different parameters used during the test phase. The test evaluation was performed using the same hardware as for the training: • GPU NVIDIA GeForce RTX 2080 with 8 GB of memory • CPU Intel Core i9 3.60 GHz • RAM 32Go The segmentation task over the entire test set (19 images) was performed in 110 s, equivalent to 5.79 s per image or 0.035 s per detected particle. To give some points of comparison, a segmentation performed manually takes about 15 to 30 s per particle. If extended to the entire test set, it takes at least 15 × 3741 = 56,115 s (∼15.6 h). Visually, we can note several interesting points. 
First, almost all particles were detected over the images, except for one particle in Figure 11 located at the top of the image. Secondly, we can see that only three false positives were inferred: two in Figure 12 located in the SiO 2 matrix and one in Figure 11 in the darker area on the right side of the image. As previously noted, Figure 12 shows a mixture of SiO 2 and TiO 2 particles. At these locations, we can note a degradation of the segmentation borders. This slight lower performance on these areas can be explained by the composition of the training database where only 23 images show a matrix of other types of particles (recalling that the training database was constituted of 699 images). We can also observe that the Mask R-CNN produced segmentation masks of high precision inside the agglomerates. Borders between different particles are clear and well defined, no overlaps between the different segmentation masks. Finally, it is more difficult to appreciate the performance of the network over segmentation mask borders on the agglomerate periphery. Indeed, as Figure 4 is showing, the borders of agglomerates are diffused over a few pixels. This is mainly due to electron sample interaction phenomena. Therefore, the segmentation performance on these areas can not be evaluated with the naked eye, and it is necessary to evaluate metrics by comparing the measured segmentation with a reference (produced by operators under expert supervision of directly by the experts). Visual Analysis We can also underline the fact that the segmentation mask shows saw-tooth borders. This phenomenon is due to the rescaling step applied to the inferred segmentation masks in order to retrieve the original size. Table 2 summarizes the number of detected particles in the test set versus the number of particles in the reference segmentation (created by manual segmentation and validated by experts). Overall, the Mask R-CNN algorithm was able to detect around 84 percent of particles. This value reaches 97 percent when we only keep useful particles (i.e., not classified as "unusable"). The detection performance is piloted by the hyper-parameter controlling the detection threshold and thus can be modified. The threshold value is set in order to detect almost all useful particle instances without generating false positives (especially in images presenting a matrix). Detection Results To complete the evaluation of the detection performance of our network, we calculate the mean Average Precision [36] (mAP). mAP is a popular metric allowing one to evaluate the detection performances of an algorithm by computing the precision/recall curve over different IoU thresholds. Table 3 presents the results of this evaluation. Segmentation Results Previously, we detailed the detection performance of the algorithm thanks to the detection scores and the mAP metric. Now, we want to detail the segmentation performance of the algorithm by measuring the Dice coefficient over the detected particles. These results are shown below and are expressed according to the type of the detected particle. Table 4 details the dice coefficient statistics over detected particles according to their type (complete, touch complete, masked, or unusable). Once again, the global performance of the network is satisfactory with a global dice coefficient of 0.936. The coefficient reaches 0.95 over useful particles. The dice coefficient drops to 0.90 when we are looking at "unusable" particles. 
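For reference, the segmentation metric and the downstream measurands mentioned above can be computed with a few lines of numpy/scikit-image. This is only a sketch: the attribute names follow recent scikit-image releases, a pixel size of 1 is assumed, and the minimum Feret diameter is not built into regionprops and would need a separate computation.

```python
import numpy as np
from skimage.measure import label, regionprops

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Sorensen-Dice coefficient between a predicted and a reference binary mask."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def particle_measurands(instance_mask: np.ndarray, pixel_size: float = 1.0) -> list:
    """Per-particle area, equivalent projected diameter, perimeter and maximum
    Feret diameter from a binary (or already labelled) instance mask."""
    labelled = label(instance_mask) if instance_mask.max() <= 1 else instance_mask
    rows = []
    for p in regionprops(labelled):
        rows.append({
            "area": p.area * pixel_size ** 2,
            "equivalent_diameter": p.equivalent_diameter * pixel_size,
            "perimeter": p.perimeter * pixel_size,
            "feret_diameter_max": p.feret_diameter_max * pixel_size,
        })
    return rows
```

The 0.90 dice value observed for "unusable" particles deserves a closer look.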
This can be explained by looking at the definition of particles classified as "unusable". We recall that they are masked particles, whose visible area is very small (less than 40 percent of the particle area), which makes them more difficult to segment. Finally, one can wonder how the dice coefficient is translated to the final measurands which are the area, the equivalent surface diameter, the Feret min diameter [37], the Feret max diameter, and the perimeter. Histogram Figure 13 shows the residual distribution of the equivalent surface diameter of detected particles classified as useful in percentage and histogram Figure 14 shows the residual distribution for the area of the useful detected particles in percentage. The distribution shows a mean of 0.56 and a standard deviation of 2.48. Moreover, more than 96 percent of measurements show an error of less than 5 percent and 51 percent of measurements show an error of less than 1 percent (compared to the manual reference produced by operators). The distribution shows a mean of 1.06 and a standard deviation of 4.93. Two observations can be made over these two distributions: • Bias: the residual distributions are globally centered on 0, the bias could come from the network itself, or from a bias in the reference annotations. • Variance: the variance of the residual distribution is higher over the area measurand than for the Feret min diameter; in fact, a small error over the radius has a squared influence over the calculated area, We should highlight that the hardware is currently the main performance limitation. Stronger architectural modifications of the Mask R-CNN network will allow a sensitive performance improvement but need more GPU memory to process it. Conclusions Thus, through this work, we successfully showed that the instance segmentation deep learning algorithm Mask R-CNN shows robustness and high performance when applied to SEM images of TiO 2 particles with the presence of big agglomerates. These performances were achieved by performing hyper-parameters tuning and through a small architecture modifications. We also proposed a new method of data augmentation adapted to our specific task and will take another level of relevance when applied to SEM images containing different types of particles. Finally, for SEM image processing, we highlight that deep learning algorithms challenge human performance in terms of precision but with a much higher genericity, repeatability, speed of processing and robustness. This work aims to fully automate the particle size characterization chain in scanning electron microscopy. Coupled with the work of [2] to complete the masked particles within the agglomerates and the work undertaken on the classification of the agglomeration state of the individual particles in the SEM image (depending on the state of the particles, the processing differs, e.g., the masked particles go through the completion phase before the calculation of the equivalent diameter as shown in the diagram below), a fully automatic characterization chain is achieved easily. However, if this automatic characterization process materializes, other questions are raised at the same time, including the confidence and uncertainty associated with these learning-based methods. Indeed, the use of these tools raises questions about the control that can be established over the predictions and thus about our ability to understand how uncertainty propagates within these types of networks. 
It is therefore necessary to establish a methodological framework for quantifying the sensitivity, robustness, and more broadly the uncertainty associated with these types of algorithms. Work is currently under way on this topic in order to be able to produce a level of confidence associated with the estimation of the particle size distribution on SEM samples. The current comparison is therefore limited to the comparison between the expertise and the predictions of the Mask-RCNN, and the dissemination of these methods, assuming that, if the results are equivalent, their use under control can be favoured, given the time savings they produce. Finally, in this article, we did not tackle the post processing techniques in order to improve segmentation results. Several approaches exist in order to post process the optimal border location [38]. It will be developed in a future work. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to industrial confidentiality.
7,095.2
2021-04-01T00:00:00.000
[ "Materials Science", "Computer Science", "Engineering" ]
Agent-Based Mobile Event Notification System In recent years, the noticeable move towards using mobile devices (mobile phones and PDAs) and wireless technologies has made information available in the context of "anytime, anywhere using any mobile device" experience. Delivering information to mobile devices needs some sort of communication means such as Push, Pull, or mixed (Push and Pull) technologies to deliver any chunk of information (events, ads, advisory tips, learning materials, etc.). Events are the most important pieces of information that should be delivered timely wherever the user is. Agent-based technology offers autonomous, flexible, adaptable, and reliable way of delivering events to any device, anywhere, and on time. Publish/subscribe communication model is the basic infrastructure for event-based communication. In this paper, we define the need to mobilize the event notification process in educational environment and the possible categories of event notifications that students can receive from their educational institution. This paper also proposes a framework for agent-based mobile event notification system. The proposed framework is derived from the concept of push–based publish/subscribe communication model but taking advantage from software agents to serve in the mobile environment. Finally, the paper provides a detailed analysis for the proposed system. 2. User-related events: are related to but not predefined by the user; these events are often very important and identified by the institution or the community the user belongs to. In the case of university, the user-related events could be registration date, exams schedule, course notifications, etc. Besides timely Push service, events should be delivered according to users' interests. Event-based communication is mostly based on the notion of publish/subscribe model through which publishers publish events; these events are pushed to the subscribers according to their interests. Such advantages need an intelligent and reliable approach; hence, agent-based technology is a good choice. Agent-based technology offers autonomous, flexible, adaptable, and reliable way of delivering events to any device, anywhere, and on time. Software agents are intelligent software programs that perform certain tasks on the user's behalf in autonomous, reactive, proactive and adaptive behavior [4]. Thus agents know what to do, how to do it, and when to do it; this promotes high level of autonomy, reactiveness, and proactiveness [16]. The Foundation for Intelligent Physical Agents (FIPA) provides standards for message transport protocols, Agent Communication Language (ACL), content languages, and interaction protocols for the sake of interoperability [13]. Taking benefits from the four technologies mentioned above (mobile computing, Push technology, software agents, and publish/subscribe model) this research proposes a framework for an agent-based mobile event notification system. Section 2 discusses a literature review on three advanced technologies used in the proposed framework. Section 3 describes the requirements for establishing event notification system in a mobile environment. Section 4 defines the problem to be solved in this paper. Section 5 describes the proposed framework architecture and detailed analysis of the system. Section 6 provides conclusions on the framework. 
Literature Review As a result of the invention of more advanced mobile devices and the rapid evolution of wireless network infrastructure [1], [15], the use of mobile devices has moved beyond voice calls, video calls, text messaging (SMS), and multimedia messaging (MMS). Nowadays, mobile devices are used in many important activities across many fields of our lives, such as m-learning, m-banking, and m-advertising [1], [7]. Such mobile activities are served according to the user's preferences, location, and device limitations. Hence, mobile computing promotes flexibility, mobility, and adaptability through small, light, and movable devices. For establishing event-based communication, the most popular push model is the publish/subscribe communication model. The motive for using such a paradigm arises from the need for asynchronous, loosely coupled, and many-to-many communication in the context of mobile and/or large-scale distributed systems. Such a model enables subscribers to express their interests in event notifications; these events are produced by publishers and delivered by the event service to subscribers only if the events match their interests. In general, the publish/subscribe model consists of publishers, subscribers, and a dispatch or event service, as shown in Fig. 1. Subscribers issue subscriptions to express their interests in events by calling the subscribe() operation on the event service without knowing the sources of these events. Subscribers can terminate their subscriptions by calling the unsubscribe() operation. Publishers produce events by calling the publish() operation without knowing the identity of the subscribers who will be notified of that event. The event service provides storage and management of subscriptions, storage and filtering of events, and efficient delivery of event notifications to interested subscribers by calling a set of operations such as updatesubscription(), store_event(), perform_matching(), and route_event(), while event delivery could be through e-mail, SMS, or WAP messages. An event is asynchronously propagated to all subscribers who are interested in that event. Hence, the event service serves as a neutral mediator between publishers and subscribers. Publishers are producers of events and subscribers are consumers of events. The event is delivered through the route_event() operation, which remotely invokes the notify() operation at the subscriber end. Fig. 1: Components of Publish/Subscribe Model Hence, the event service is responsible for four main operations: 1. Managing subscriptions: assuming that the updatesubscription() operation is responsible for managing subscriptions, by calling this operation the event service can add, update, and delete subscriptions. Those subscriptions are stored for later filtering purposes. A single subscription contains the subscriber's profile and the selected events of interest. 2. Managing incoming events: besides managing subscriptions, the event service handles the incoming events from the publishers. These events are received in the form of messages; each event message contains an event header and body. Events are stored using the store_event() operation and used for filtering purposes. 3. Filtering events: the event service calls the perform_matching() operation that is responsible for filtering events. Filtering involves matching incoming events to interested subscribers using various filtering techniques such as fuzzy logic, rule-based matching, semantics, etc.
4. Delivering events: after filtering, the event service invokes the route_event() operation to deliver events to interested subscribers in a timely fashion through an appropriate communication channel. The mantra for event-based publish/subscribe systems is the "decoupling" between publishers and subscribers. Decoupling is the key characteristic that distinguishes publish/subscribe communication from alternative communication paradigms such as message passing, RPC/RMI, notifications, shared spaces, and message queuing; such paradigms have proved unable to support fully decoupled communication between publishers and subscribers [11]. Decoupling can be decomposed into four dimensions: 1. Space decoupling: sometimes called "anonymity", where publishers do not need to know the address and identity of subscribers, and subscribers do not need to know the identity of publishers either. 2. Time decoupling: publishers and subscribers do not need to exist at the same time; publishers can publish events while subscribers are disconnected, and subscribers get notified of an event while the publisher of that event is disconnected. 3. Synchronization decoupling: subscribers receive event notifications while not being connected to the system or while doing other concurrent activities. The production and consumption of events is decoupled from the flow of control of the publishers and subscribers. Hence, publishers publish events without waiting for results and subscribers receive events without explicitly waiting for these events, thus promoting flexibility. 4. Data decoupling: subscribers only receive data that they are interested in, and the event service may modify data if needed. This avoids delivering uninteresting events to subscribers and thus saves the resources that would otherwise process these uninteresting events. Software agents have been integrated with the publish/subscribe model to come up with the "Rendezvous-Notify" framework that serves the mobility requirements of the mobile environment [13]. This framework is able to maintain disconnected operations and manage subscriptions efficiently. The Rendezvous-Notify framework involves using event service agents that are responsible for maintaining subscriptions, buffering events, managing event channels, and routing events. Mobile agents have been used in applications of distributed event systems based on the publish/subscribe communication protocol [11]. This approach involves using mobile agents as mediators between publishers and subscribers of events. In such a scenario, subscribers are required to register in the system and define the events they are interested in, and publishers create mobile agents with the event to be published and dispatch them to the event server, in which the mobile agents find out the interested subscribers and push the event to them. Thus flexibility is promoted via asynchronous communication. The publish/subscribe model has contributed to delivering information in the mobile context. Hence, a publish/subscribe middleware was proposed to address the requirements of mobile computing applications [3]; this middleware provides asynchronous communication as publishers need not have direct contact with the subscribers, so a wireless connection failure is no longer a problem. Also, many-to-many decoupled interaction is provided, as many publishers publish events and these events are sent to many subscribers without the publishers being connected to the system. A minimal code sketch of the basic event-service operations described above is given below.
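The following sketch illustrates, in Python, the event-service operations named above (subscribe, unsubscribe, publish, storing, matching, and routing). It is a toy in-memory illustration of the general model, not the API of any of the cited systems; the class and method names simply mirror the operations described in the text.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Event:
    category: str          # e.g. "social", "career", "academic", "library"
    title: str
    body: str = ""

class EventService:
    """Toy in-memory event service: stores subscriptions, stores and filters
    incoming events, and routes notifications to interested subscribers."""

    def __init__(self) -> None:
        self._subs: Dict[str, List[str]] = defaultdict(list)      # category -> subscriber ids
        self._callbacks: Dict[str, Callable[[Event], None]] = {}  # subscriber id -> notify()
        self._events: List[Event] = []                             # stored events

    def subscribe(self, sub_id: str, category: str,
                  notify: Callable[[Event], None]) -> None:
        self._subs[category].append(sub_id)
        self._callbacks[sub_id] = notify

    def unsubscribe(self, sub_id: str, category: str) -> None:
        self._subs[category] = [s for s in self._subs[category] if s != sub_id]

    def publish(self, event: Event) -> None:
        self._events.append(event)                         # store the event
        for sub_id in self._subs.get(event.category, []):  # match event to interests
            self._callbacks[sub_id](event)                 # route to subscriber's notify()

# Usage: the publisher never learns the subscriber's identity (space decoupling).
svc = EventService()
svc.subscribe("student-42", "career", lambda e: print(f"notify student-42: {e.title}"))
svc.publish(Event(category="career", title="New internship openings"))
```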
Anonymity is supported by such a model, as publishers do not have to know the identity of the subscribers and vice versa. Implicit determination of the event notification receivers is provided, rather than having the publishers choose them. Consequently, the system is capable of dealing with a large number of mobile users. Mobile publish/subscribe applications have been classified into two categories: (1) Static applications that reside in the user's mobile device; as the user moves, the application can access the system network from different access points. (2) Mobile-agent-based applications that are able to execute autonomously on any host device to access the system. The first category has been implemented to support a mobility service using a client proxy [2]. This mobility proxy is an interface medium between the client and the publish/subscribe system while in disconnected mode. Event Notification in Mobile Environment As our research deals with the mobile environment, the possible delivery methods for event notifications are SMS or WAP messages. Additionally, the mobile pub/sub system should offer a set of benefits such as: (1) Timeliness, achieved by pushing data to interested subscribers once it is produced by the publisher. (2) Asynchronous communication that enables delivering notifications while the subscriber is not connected to the system, thus guaranteeing reliability. (3) Anonymous communication where publishers do not need to know the identity of subscribers, thus ensuring flexibility. (4) Support for logical mobility (the user can receive notifications even after changing her/his mobile device) and physical mobility (the user can be notified anywhere). (5) Expressiveness, that is, the ability of the event service to precisely define the interests of subscribers. (6) Implicit matching, where the event service determines the target mobile subscribers who will receive notifications based on their subscriptions, without needing publishers to choose recipients. (7) The ability to manage a large number of potential mobile subscribers, allowing for manipulation of their subscriptions (update, insert, and delete). (8) The ability to manage a large number of publishers. (9) Support for simultaneous delivery of notifications to thousands of mobile subscribers. (10) Robustness, which guarantees delivery of notifications to all target mobile subscribers even in case of network failure while subscribers are moving (a characteristic of mobile networks) by resending notifications to subscribers who could not be reached previously [3]. The common standard technologies for establishing mobile event notification systems are the Common Object Request Broker Architecture (CORBA), Java Message Service (JMS), and Wireless Message Transport Protocol (WMTP) [9]. Mobile Event Notification in Educational Environment The educational environment, especially a university, is filled with various activities and events that are offered to students. These events are usually announced orally or posted on paper on the pin board. The problem is that students have different class schedules, so they exist in different time frames. Within these different time frames, some events might be announced, started, and finished without the student being notified. Also, students may forget to check the pin board due to their busy day schedule. A further problematic case: if a lecturer, for an urgent matter, needs to cancel a lecture on the same day, then the oral and pin-board methods will be useless.
Consequently, there is no option to notify students early and they will only find out very late. Events in the university context can be classified into:  Social events: include notifications about upcoming trips, sport competitions, conferences, and symposiums.  Career events: include notifications about upcoming job opportunities, internships, and scholarships.  Academic events: comprise registration events (notifications about the registration date, the list of available courses for the new semester, and the timetable of the registered courses) and course notifications (notifications about the announcement of a new lecture, cancellation of a lecture, and the exam schedule).  Warnings: include notifications to students who exceeded the allowed absence rate.  Library events: include announcements about newly available resources, acknowledging a student that the requested resources were sent to her/his e-mail, and notifying students to renew their membership. The university should be proactive in choosing a more flexible method to convey the previously listed events to students anywhere and in a timely manner. Hence, a mobile event notification system is the best choice. The remaining sections of this paper provide a detailed explanation of our proposed framework for an agent-based mobile event notification system, a scenario analysis, and a discussion of how it meets the requirements of the mobile environment. Agent-Based Mobile Event Notification System (ABMENS): this plays the role of the event service, serving as a middleware between the university database server layer and both the event providers and the students. The ABMENS receives events from the event provider through the EA, stores events and matches them with interested students, and delivers event notifications to the target students through their PAs. The ABMENS also allows for manipulating and storing students' subscriptions. The ABMENS performs all these functions through a set of four autonomous software agents: (1) Detector Agent (DA), which receives event data from the EA, gives a copy of the event data to the LA, and sends the event data to the DB A to be stored in the database. (2) Logic Agent (LA), which receives event data from the DA, retrieves the list of interested students from the DB A, and sends [event data + list of students] to the MA. (3) Manager Agent (MA), which receives [event data + list of students] from the LA and delivers event notifications to the target students' PAs; the MA also receives subscriptions from the PAs and sends them to the DB A to be stored in the database. (4) Trustee Agent (TA), which monitors the activities of all agents in the ABMENS and ensures synchronization between those agents. In wireless networks, disconnection and failure to deliver notifications are possible. In such cases, the MA is responsible for resending the notifications to the PAs that it could not reach before and ensuring that all target students' PAs have received the notifications. Notice that some events are produced by the event providers (e.g. social events and career events) and some others are propagated by the ABMENS (e.g. warnings and available courses for the new semester). The latter are the responsibility of the DA; the DA detects any new events stored in the database by other subsystems (e.g. the grading system and registration system) and retrieves them through the DB A, then sends them to the LA. However, those propagated events are outside the scope of this paper.
Therefore, there are two types of subscriptions: (1) Direct subscription: in which students subscribe themselves to events like social and career events. (2) Indirect subscription: in which students receive a notification without issuing a subscription, as in the course-notification case. The typical scenario for the proposed system is decomposed into: 1. Subscription scenario: the involved agents are the PA, MA, and DB A. The scenario starts when the student uses the PA to select the event categories of interest; then the PA sends the selected events to the MA, which sends them to the DB A, which stores them in the database. The same scenario applies to unsubscription or updating a subscription. 2. Event notification scenario: starts when the staff member uses the EA to publish an event by entering the event data (event category, event title, event date, event place, and description). The EA sends the event data to the DA, which sends a copy of the event data to the LA; the DA also sends the event data to the DB A to store it in the database. After receiving the event copy from the DA, the LA sends a SELECT query to the DB A to retrieve the list of IDs of students who are interested in that event. The LA receives the query result (the list of IDs of subscribed students) from the DB A, then attaches the event data to that list and sends them to the MA, which disseminates the event notifications to the PAs of the target students. The PA displays the notification to the student in a user-friendly format. Assuming that students are already subscribed, the whole flow of activities through the system is depicted in Figure 2, starting from the staff member who enters the event, through each agent's activities on the event, and ending with the delivery of event notifications to the target students. Note that the PA is identified by the student's ID as a global identifier. When the staff member enters an event, the Event Agent collects the event details, which consist of: Event Category (social, career, academic, or library events), Event Title, Event Date/Time, Event Place (optional), and Event Description. The Logic Agent (LA) receives the event copy from the DA and, based on the event category, matches the event to the corresponding students and prepares the appropriate query to send to the Database Agent. Thus the following logic is executed: o IF the event category is social or career, THEN prepare a query to retrieve the IDs of students who are subscribed to that category of event, and send it to the Database Agent. o IF the event category is course notification, THEN prepare a query to retrieve the IDs of students who are enrolled in that course, and send it to the Database Agent. After receiving the query result from the Database Agent, the LA attaches the event to the list of IDs of the students who will receive the notifications and sends them to the Manager Agent. The Manager Agent looks for the Personal Agents (PAs) whose IDs match the IDs of the recipient students using the Directory Facilitator (DF), then sends the event notification to these PAs. A short code sketch of this category-based matching logic is given below. Generally, a typical publish/subscribe model consists of two basic models (subscription model and publication model) and two basic mechanisms (matching and routing).
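The sketch below restates the Logic Agent's IF/THEN matching and the Manager Agent's dispatch in Python. The table and column names (subscriptions, enrolments, student_id, course_code) are hypothetical placeholders, since the actual database schema is not given in the paper, and delivery is simulated with print() rather than the agent platform.

```python
import sqlite3
from typing import List, Optional

def setup_demo_db() -> sqlite3.Connection:
    """Create a tiny in-memory database with hypothetical tables for the demo."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE subscriptions (student_id TEXT, category TEXT)")
    conn.execute("CREATE TABLE enrolments (student_id TEXT, course_code TEXT)")
    conn.executemany("INSERT INTO subscriptions VALUES (?, ?)",
                     [("s1", "career"), ("s2", "social"), ("s3", "career")])
    conn.executemany("INSERT INTO enrolments VALUES (?, ?)",
                     [("s1", "CS101"), ("s4", "CS101")])
    return conn

def select_recipients(conn: sqlite3.Connection, category: str,
                      course_code: Optional[str] = None) -> List[str]:
    """Logic Agent: choose the query according to the event category
    and return the IDs of the students to be notified."""
    if category in ("social", "career"):
        # Direct subscription: students explicitly subscribed to this category.
        rows = conn.execute(
            "SELECT student_id FROM subscriptions WHERE category = ?", (category,))
    elif category == "course":
        # Indirect subscription: enrolled students are notified automatically.
        rows = conn.execute(
            "SELECT student_id FROM enrolments WHERE course_code = ?", (course_code,))
    else:
        return []
    return [r[0] for r in rows.fetchall()]

def dispatch(event: dict, recipients: List[str]) -> None:
    """Manager Agent: deliver the notification to each recipient's Personal Agent.
    Here delivery is simulated; the real system would route it through the agent
    platform (e.g. via the Directory Facilitator)."""
    for student_id in recipients:
        print(f"PA[{student_id}] <- {event['title']} ({event['category']})")

conn = setup_demo_db()
event = {"category": "career", "title": "New internship openings"}
dispatch(event, select_recipients(conn, event["category"]))
```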
Publication model defines the data model for publishable event data; hence, events could be structured in various forms (simple unstructured strings, record type, class type, or XML document). Subscription model defines the selection constraints on the published events. Subscription model can be implemented in many forms (relational model, rule definition language, XPath, object-oriented languages, or fuzzy logic rules). Matching could be performed by either XML queries or SQL queries. Routing could be done by flooding, selective routing or gossiping. Accordingly, the proposed framework applies class type structure to event publication model, relational approach to subscription model, SQL queries to matching events to corresponding subscribers, an agent discovery for routing events to the matching subscribers' agents. Agent discovery mechanism is provided by the agent platform along with agent registration service. Conclusion The paper proposed a framework for agent-based mobile event notification system to deliver events offered by university to students on their mobile devices. The paper provided an analysis of the system scenarios and functionalities. The system should promote flexibility, intelligence, and reliability. The system helps university to reach students and keep them updated with the incoming and even the urgent sudden events. The system is currently under development based on the analysis produced in this paper. The chosen platform for design and implementation is the JADE framework because it enables developing light-weight agents using its extended library JADE-LEAP [6].
4,631.2
2010-09-29T00:00:00.000
[ "Computer Science" ]
A New Valuable Synthesis of Polyfunctionalized Furans Starting from β-Nitroenones and Active Methylene Compounds Highly functionalized furans are the key scaffolds of many pharmaceuticals and bioactive natural products. Herein, we disclose a new, fruitful synthesis of polyfunctionalized furans starting from β-nitroenones and α-functionalized ketones. The protocol involves two steps promoted by solid-supported species, and it provides the title targets in satisfactory to very good overall yields and with excellent diastereomeric ratios. Introduction The furan ring is a useful building block of many biologically active targets; it is the core of many natural compounds and important polymers, and several furan-containing scaffolds serve as privileged structures in medicinal chemistry [1][2][3][4][5][6]. In this context, its importance has spurred the scientific community to investigate ever more efficient methodologies for preparing polyfunctionalized furan-based scaffolds, which, in turn, are suitable for further synthetic manipulations. In particular, complex furan derivatives can be obtained by the derivatization of a preexisting furan structure [7][8][9], or by ex novo ring construction, planning the introduction of specific functionalities at the appropriate positions [10][11][12][13]. Herein, following our studies on the preparation of heteroaromatic systems starting from aliphatic nitro compounds [14][15][16][17], we found β-nitroenones 1 and α-functionalized ketones 2 to be valuable and practical building blocks of 3-alkylidene furans 3. In particular, this study complements our previous research concerning the reaction of α-functionalized ketones 2 with β-nitroacrylates 4 [18] to produce, by a different reaction mechanism, the tetrasubstituted furans 5 (Figure 1). The new protocol involves two steps (Scheme 1): (i) a base-promoted addition of 2 to 1 to give the adduct I, which eliminates a molecule of nitrous acid [19] to provide the intermediate 6, and (ii) the acid-catalyzed cyclization of 6 (passing through the adducts II and III) into the title targets 3. Scheme 1. Probable reaction mechanism. Results and Discussion In our attempt to maximize the efficacy of the process, we studied the two steps separately. At the beginning, we focused our attention on the domino addition-elimination process (Step I), using, as sample substrates, the β-nitroenone 1a and diketone 2a in stoichiometric ratio (Scheme 2). Initially, we tested different supported bases, conducting the reactions in acetonitrile. The best yield of 6a was obtained after 2 h (complete conversion), using 1 eq. of PS-carbonate at room temperature (Figure 2). Then, once we had identified the best base for promoting the one-pot addition-elimination process, we moved our attention to the selection of the reaction medium. In this sense, a variety of solvents were screened and, as depicted in Figure 3, acetonitrile was the most effective; only ethyl acetate provided 6a in acceptable yield. After the optimization of Step I (1 eq. of PS-carbonate, MeCN, room temperature, 2 h), we performed an analogous study to optimize the reaction conditions for converting 6a into the tetrasubstituted furan 3a (Step II, Scheme 3). In this regard, and based on our experience with the use of Amberlyst 15 for promoting cyclization reactions [11,15,20], we explored this heterogeneous acidic species in different solvents and at different reaction temperatures (Table 1).
As reported in Table 1, the use of 0.6 g/mmol of Amberlyst 15 in EtOAc, at 80 °C, produces, after two hours, 3a in excellent yield and diastereomeric ratio (Entry i, 91%, E:Z = 96:4). After increasing the temperature to 100 °C, the reaction finished in one hour; however, the yield and the diastereomeric ratio decreased to 74% and 90:10, respectively. On the other hand, at a lower temperature (60 °C), we observed a longer reaction time (four hours) and a dramatic drop of the yield to 45%, albeit the diastereomeric ratio remained unchanged at 96:4. Subsequently, in order to minimize waste production and energy consumption, we coupled the two steps, avoiding the purification of the intermediate 6a. To this aim, after the accomplishment of Step I, the PS-carbonate was removed by filtration, the resin was washed with fresh EtOAc, and the solvent was then evaporated under reduced pressure to give the crude adduct 6a, which was directly submitted to Step II, providing 3a in 74% overall yield (Scheme 4). It is important to note that this result is fully comparable with that obtained over the two distinct steps (Step I 85%, Step II 91%, which correspond to a total yield of 77%). Finally, in order to assess the generality of our protocol, we submitted a variety of β-nitroenones 1 and functionalized ketones 2 to the optimized reaction conditions (Scheme 5). Thanks to the mild conditions, it was possible to install a variety of functionalities on the furan ring (ketone, ester, nitrile, and sulfone), obtaining the products 3a-n in satisfactory to very good overall yields (37-88%) and in excellent diastereomeric ratios (E:Z > 93:7), with the exception of compound 3f (E:Z = 65:35). Furans 3i and 3l-n were isolated as single E diastereoisomers. Scheme 5. Substrate scope demonstration. Conclusions In conclusion, by exploiting the high reactivity of β-nitroenones, we developed a new, general, and efficient two-step protocol for synthesizing polyfunctionalized furans in good overall yields and excellent diastereoselectivity. In particular, thanks to the mild reaction conditions, a plethora of functional groups can be tolerated, thus giving the possibility to install several functionalities on the furan ring, such as ketone, ester, nitrile, and sulfone. Moreover, owing to the use of solid-supported species in both steps, it was possible to avoid the typical wasteful aqueous work-up, reducing the operation to a simple filtration, with evident advantages from the sustainability viewpoint. General Section NMR spectra were recorded on an Oxford NMR S400 (Varian Mercury Plus 400, Oxford, United Kingdom), equipped with a Sun Blade 150 workstation, VNMRJ 1.1d software, and the Solaris 9 operating system. 1 H NMR analyses were recorded at 400 MHz and 13 C NMR analyses were recorded at 100 MHz. IR spectra were recorded with a Spectrum Two FT-IR spectrometer (Waltham, MA, United States) equipped with a ZnSe window, Dynascan interferometer, LiTaO 3 detector, and Spectrum 10 software. Microanalyses were performed with a CHNS-O analyzer Model EA 1108 from Fisons Instruments. GC-MS analyses were obtained on an Agilent GC (6850N)/MS (5973N) (Stevens Creek Blvd, Santa Clara, CA, United States), EI technique (70 eV), GC/MSD software, and an HP-5MS column (30 m, i.d. 0.25 mm, film thickness 0.25 µm). Microwave irradiations were performed by means of a Biotage ® Initiator + from Biotage, Uppsala, Sweden. Compound 3j is known, and its spectroscopic data are in agreement with those reported in the literature [21].
Chemistry Section General procedure for the preparation of compounds 3a-n: PS-carbonate (0.286 g, 1 mmol) was added to a stirred solution of the appropriate β-nitroenone 1 (1 mmol) and ketone 2 (1 mmol) in acetonitrile (2 mL, 0.5 M), and the resulting solution was stirred at room temperature for the appropriate time (see Scheme 4). Then the resin was filtered off, washing with fresh ethyl acetate (10 mL), and the crude intermediate 6, obtained after removal of the solvent under reduced pressure, was dissolved in ethyl acetate (12 mL), treated with Amberlyst 15 (0.6 g), and irradiated in the Biotage ® Initiator + at 80 °C for 2 h. Finally, the Amberlyst 15 was removed by filtration (washing with fresh ethyl acetate, 10 mL), the solvent was evaporated under vacuum, and the crude product 3 was purified by flash column chromatography (95:5 hexane/Et 2 O). 13 C NMR (100 MHz, selected signals, δ, ppm): 13.7, 19.0, 119.9, 120.4, 122.7, 126.6, 127.8, 128.7, 128.8, 129.9, 131.1, 131.4, 133.2.
1,724.8
2019-12-01T00:00:00.000
[ "Biology", "Chemistry" ]
A theoretical framework for general design of two-materials composed diffractive fresnel lens Near-100% diffractive efficiency is one of the most sought-after optical performances of diffractive optical elements (DOEs) in broadband imaging applications. Among flat DOEs, none seems to interest researchers as much as the Two-Materials Composed Diffractive Fresnel Lens (TM-DFL), one of the most promising of them. An approach to near-100% diffractive efficiency for TM-DFL, previously developed to determine the design rules, mainly takes advantage of numerical computation by methods of mapping and fitting. Although a curve of near-100% diffractive efficiency can be generated in the Abbe and partial dispersion diagrams, this approach cannot analytically elaborate the relationship between the two optical materials that compose the TM-DFL. Here, we present a theoretical framework, based on the fundamentals of Cauchy's equation, the Abbe number, partial dispersion, and the diffraction theory of the Fresnel lens, for obtaining a general design formalism, so as to perform the perfect material matching between two different optical materials and achieve near-100% diffractive efficiency for TM-DFL in broadband imaging applications. Here r i is the i-th FZ radius, f 0 is the design focal length, λ 0 is the design wavelength, the i-th zone spacing is the distance between the i-th FZ and the (i + 1)-th FZ, h is the FZ height, D is the FZ diameter, and r max is the outermost FZ radius. Due to dispersion, the parallel incident white light with different wavelengths, ranging from 400 to 700 nm, is generally focused by the DFL at different focal points on the optical axis. The optical power varies linearly with the wavelength of the incident light, as shown in Fig. 2, where f R , f G , and f B represent the focal points of the red, green, and blue light, respectively. Furthermore, according to the above-mentioned theory of DFL 12 , the diffraction focus and efficiency for a light source with a single wavelength can be determined by the following equations, where λ 0 is the design wavelength, f 0 is the design focal length, λ is the wavelength, f is the focal length, m is the diffracted order, η is the diffractive efficiency, α is the detuning factor, n(λ) is the refractive index of the DFL as a function of wavelength λ, and n(λ 0 ) is the refractive index at λ = λ 0 . The calculation results reveal that most of the diffraction energy transmitted from the white incident light is focused in the diffracted order m = 1 and only a small portion of the diffraction energy contributes to the diffracted orders m = 2, 3, while the rest of the energy in the higher diffracted orders is not considered because of its small contribution to the diffractive efficiency. Furthermore, in the visible spectrum, according to Eqs. (6) and (7), 100% diffractive efficiency can only be achieved under the conditions of the diffracted order m = 1, λ = λ 0 = 0.587 µm, and α = 1. Equation (6) describes how the diffractive efficiency drops off when the incident wavelength λ deviates away from the design wavelength λ 0 . More practically, the overall diffractive efficiency is generally evaluated with the mean value over the spectral distribution, as in the following equations, where η m is the averaged diffractive efficiency in the diffracted order m and η T is the total averaged diffractive efficiency.
For the parallel incident white light source, substituting the wavelengths λ 1 = 0.4 μm and λ 2 = 0.7 μm into Eqs. (8) and (9), η 1 = ~85.2%, η 2 = ~8.2%, and η 3 = ~1.2% are obtained in the diffracted orders m = 1, m = 2, and m = 3, respectively. Consequently, the total diffractive efficiency from m = 1 to m = 3 comes to η T = 94.8%. In other words, there is still a certain portion of the incident light transmitted to higher diffracted orders (m > 3). For the diffracted order m = 1 and h = 1.17 μm, about 15% of the incident light energy is lost. As a result, such light becomes stray light and eventually deteriorates the imaging quality of the DFL. A solution to make up for the energy loss in the first diffracted order was first presented by Kenneth J. Weible in 1999 20 . In that work, a blazed grating composed of two optical materials, glass (Schott BaF52) and PC (polycarbonate), was proposed to improve the diffractive efficiency of the diffracted order m = 1 (i.e., α = 1) by making the refractive index difference Δn(λ) = n 1 (λ) − n 2 (λ) proportional to the wavelength λ. When the refractive index difference satisfies Δn(λ)/λ = constant at all wavelengths λ in the visible spectrum, all the incident light energy in the higher diffracted orders is transferred into the first diffracted order, so as to achieve the objective of 100% diffractive efficiency. Furthermore, B. H. Kleemann presented the design concepts for the blazed grating composed of two optical materials in 2008 21 and named such a structure common-depth EA-DOEs. In contrast to the term "common depth", a singlet DFL composed of two optical materials is called a Two-Materials Composed Diffractive Fresnel Lens (TM-DFL) in this study. According to Weible's research 20 , two transparent optical materials, glass 22 and PC 23 , as shown in Fig. 4a and b, are used to explain the difference in diffractive efficiency between a Single-Material Composed Diffractive Fresnel Lens (SM-DFL), i.e., the conventional DFL, and a TM-DFL. For the SM-DFL, the average diffractive efficiency η 1 = 84.9% is calculated by Eqs. (6), (7), and (8) for the diffracted order m = 1 and the incident wavelength λ = 400-700 nm, as shown in Fig. 4c. For the TM-DFL, the diffractive efficiency η 1 = 94.3%, as shown in Fig. 4d, is obtained by the same calculations with a modified detuning factor α in Eq. (10), which contains the refractive indices n 2 (λ) and n 2 (λ 0 ) of the second material. Although the TM-DFL can improve the diffractive efficiency of the conventional SM-DFL, the requirement Δn(λ) ∝ λ is not perfectly satisfied. As a result, it is hard to achieve the theoretical 100% diffractive efficiency. In fact, Andrew Wood 24 indicated that in the case of an SM-DFL composed of materials existing in nature, the detuning factor α in Eq. (7) can hardly satisfy the requirement α = 1 when λ deviates away from the design wavelength λ 0 . Moreover, as previously described, the regular materials, i.e., glass and PC, found by Kenneth J. Weible 20 were able to increase the average diffractive efficiency from 85 to 94%, but it still required more effort to realize the theoretical 100% diffractive efficiency. Latest works on the optical design of TM-DFL: a numerical framework for the design of broadband DOEs.
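To make the efficiency figures above concrete, the following sketch evaluates the scalar-theory diffraction efficiency assumed in the discussion of Eqs. (6)-(10), i.e. η_m(λ) = sinc²(m − α(λ)) with α(λ) = (λ 0 /λ)·Δn(λ)/Δn(λ 0 ), and averages it over 400-700 nm. The two-term Cauchy dispersion curves used here are rough placeholders, not the actual BaF52/polycarbonate data, so the printed averages only approximate the ~85% and ~94% values quoted in the text.

```python
import numpy as np

lam = np.linspace(0.400, 0.700, 301)   # wavelength in micrometres
lam0 = 0.5876                          # design wavelength (d line), micrometres

def cauchy(lam, A, B):
    """Two-term Cauchy dispersion n(lam) = A + B / lam**2 (placeholder coefficients)."""
    return A + B / lam**2

# Placeholder dispersion curves, *not* the Schott BaF52 / polycarbonate data.
n1 = cauchy(lam, 1.675, 0.0045)        # "glass-like" material (lower dispersion)
n2 = cauchy(lam, 1.545, 0.0120)        # "polymer-like" material (higher dispersion)
n1_0, n2_0 = cauchy(lam0, 1.675, 0.0045), cauchy(lam0, 1.545, 0.0120)

def efficiency(delta_n, delta_n0, m=1):
    """Scalar-theory efficiency eta_m = sinc^2(m - alpha), alpha = (lam0/lam)*dn/dn0."""
    alpha = (lam0 / lam) * delta_n / delta_n0
    return np.sinc(m - alpha) ** 2     # numpy's sinc already includes the factor pi

eta_sm = efficiency(n1 - 1.0, n1_0 - 1.0)   # single-material DFL (against air)
eta_tm = efficiency(n1 - n2, n1_0 - n2_0)   # two-material DFL

print(f"SM-DFL average eta_1 over 400-700 nm: {eta_sm.mean():.3f}")
print(f"TM-DFL average eta_1 over 400-700 nm: {eta_tm.mean():.3f}")
```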
To overcome the above-mentioned problems, Daniel Werdehausen in 2019 25 proposed the use of dispersion-engineered nanocomposites for artificially generating a refractive index difference Δn(λ) that satisfies the requirement Δn(λ) ∝ λ. According to this definition 25 , the so-called nanocomposites are produced by adding a proper volume fraction of nanoparticles with a diameter smaller than 5 nm, such as diamond, ZrO 2 , TiO 2 , ITO (indium tin oxide), and AZO (aluminum-doped zinc oxide) (presented as green stars), to existing polymeric materials, such as PMMA (poly(methyl methacrylate)), COP (cyclic olefin copolymer), PC (polycarbonate), and PS (polystyrene) (presented as blue stars), to adjust the material parameters n d , v d , and P g,F . Though such new n d , v d , and P g,F (presented as orange points) do not exist in nature, they can be tailored within a certain range (presented as a pink area) in both the Abbe diagram and the partial dispersion diagram, as respectively shown in Fig. 4a and b 25 . For instance, by adding TiO 2 nanoparticles with different volume fractions to the polymeric material PC, the distribution of n d -v d can extend to cover a region of 17 < v d < 28 and 1.6 < n d < 1.85, while the distribution of P gF -v d covers a region of 17 < v d < 28 and 0.58 < P gF < 0.65. Furthermore, the same research team in 2020 26 proposed a design method of mapping and fitting based on numerical computation for matching the material refractive indices of TM-DFL to achieve a diffractive efficiency higher than 99.9%. According to the change of the DOEs' phase profiles across the different dispersion regimes 26 , the material parameters of material 1 (Mat.1) are first selected and set at n d,1 = 1.8, v d,1 = 60, and P g,F,1 = 0.55. By the method of mapping, the material parameters n d,2 , v d,2 , and P g,F,2 of material 2 (Mat.2) are discretely varied to calculate the distribution of the average diffractive efficiency η in the Abbe diagram. Similarly, the distribution of the average diffractive efficiency η in the partial dispersion diagram is calculated and drawn. For later discussion, the diffractive efficiencies η(n d,2 , v d,2 )| Mat.1 and η(P g,F,2 , v d,2 )| Mat.1 are defined as functions of (n d,2 , v d,2 ) and (P g,F,2 , v d,2 ), respectively. The subscript Mat.1 in both functions η indicates that the material parameters n d,1 , v d,1 , and P g,F,1 of Mat.1 are fixed as constants. The major feature of the mapping method is to discretely modulate n d,2 , v d,2 , and P g,F,2 of Mat.2 over a large region in order to calculate and set up both diffractive efficiency maps η(n d,2 , v d,2 )| Mat.1 and η(P g,F,2 , v d,2 )| Mat.1 , so as to find the black dotted curves where the calculated n d,2 , v d,2 , and P g,F,2 of Mat.2 match the selected n d,1 , v d,1 , and P g,F,1 of Mat.1 for achieving near-100% diffractive efficiency. Hereafter, the black dotted curve in η(n d,2 , v d,2 )| Mat.1 is named the Abbe characteristic curve of near-100% diffractive efficiency, or Abbe characteristic curve in brief, while the one in η(P g,F,2 , v d,2 )| Mat.1 is named the partial dispersion characteristic curve of near-100% diffractive efficiency, or partial dispersion characteristic curve in brief. Finally, the most suitable material parameters n d,2 , v d,2 , and P g,F,2 can be determined by comparing the producible nanocomposites, completing the optimal parameter match between the two nanocomposite materials.
Furthermore, applying the mathematical fitting to the Abbe characteristic curve, a natural exponential function is obtained, as below. where a = 1.839, b = 0.9919, and c = 0.8782. Note that these values are determined only for n d,1 = 1.8, v d,1 = 60, and P g,F,1 = 0.55. To acquire the general relationship between a, b, c, and the material parameters of MAT.1, ten of the Abbe characteristic curves with different values of n d,1 , ranging from 1.5 to 2.0, are used to fit to obtain the n d,1 dependence of a, b, and c as below. However, the fitting to the partial dispersion characteristic curve, is not discussed 26 . Major issues left by latest works. In summary, to achieve the optical parameters match for two nanocomposite materials in TM-DFL, the above-mentioned design method of mapping and fitting by the numerical computation 26 confronts the following issues. (1) The methods of mapping can obtain a single Abbe characteristic curve and partial dispersion characteristic curve through a big volume of numerical computations at a time while losing efficiency and accuracy. (2) The fitting method can describe the Abbe characteristic curve as Eq. (11) which is a function of n d,1 in a special form of a natural exponential function with 1/3 power. In addition to the fitting accuracy or error, it is not a general equation for the analytical evaluation of the system. The so-called "analytical evaluation" refers to analyze the variance in the entire system caused by the parameter variation without using massive numerical computations. In other words, it is a qualitative and quantitative way to get insight into the general physical behaviors of a system by theoretical formulas. (3) Three coefficients a, b, and c of the Abbe characteristic curve merely contains n d,1 of Mat.1 without any further relation with v d,1 and P g,F,1 of Mat.1. (4) There is no fitting applied to the partial dispersion characteristic curve for the analytical evaluation on the relationship among parameters, i.e. (n d,1 , v d,1 , and P g,F,1 ) and (n d,2 , v d,2 , and P g,F,2 ) between two materials. New method: a theoretical framework for general design formalism. In contrast, we present a new method of analytical evaluation based on theoretical formulas in this study. More exactly, two characteristic curve formulas of the near 100% of diffractive efficiency are derived from the theories of Cauchy's equation, Abbe number, and partial dispersion as well as the diffractive theory of Fresnel lens. Besides, to achieve the purposes of the perfect parameters match between two optical materials in TM-DFL and the objectives of the general analyses on the optical behavior of TM-DFL, it completely solves the above-mentioned disadvantages of the numerical computation-based methods of mapping and fitting. For the optical theory of DFL, the previous theoretical Eqs. (1)-(9) completely describe the relationship among the diffractive efficiency, the wavelength of the incident light, and material refractive index, where Eqs. (1)-(4) provide the design of the geometric shape of Fresnel lens, Eqs. (5)-(7) provide the focal length and diffractive efficiency after the incident light interactive with DFL, and Eqs. (8)-(9) provide the overall evaluation of diffractive efficiency. 
For a DFL, a 2π phase shift is the necessary condition for reaching 100% diffractive efficiency, which is acquired when hΔn(λ 0 ) = λ 0 , where h is the FZ height, Δn(λ 0 ) = n(λ 0 ) − 1 is the refractive index difference, n(λ 0 ) is the material refractive index of the DFL at λ = λ 0 , λ is the wavelength of the incident light, and λ 0 is the design wavelength. Further, the "1" in Δn is the refractive index of air; that is, the incident light directly contacts the air after passing through the DFL. Since the refractive index of air is almost independent of λ, there is no way to satisfy the condition Δn(λ) ∝ λ. Consequently, the diffractive efficiency drops off significantly when λ deviates away from λ 0 . Therefore, the TM-DFL is used to effectively overcome the efficiency issue caused by the refractive index of air. For this purpose, Eq. (3) is modified as below to generate a 2π phase shift when a TM-DFL is used to replace the DFL, where n 1 (λ 0 ) and n 2 (λ 0 ) are the refractive indices of the two materials in the TM-DFL at λ = λ 0 . Moreover, the detuning factor α in Eq. (7) also needs a modification, as in Eq. (10). Practically, the two optical materials used for the TM-DFL have to satisfy the following requirements: (1) optically transparent in the visible spectrum, (2) practical in mass production, and (3) Δn(λ) ∝ λ. As the transparent materials existing in nature can hardly satisfy all the above conditions, especially (3), an optical material with an artificially tailorable refractive index, such as a nanocomposite, is necessary for the realization of the TM-DFL. The core of this research is to build up a theoretical foundation for the design of TM-DFL with near-100% diffractive efficiency by connecting the theories of Cauchy's equation, the Abbe number, and partial dispersion, together with the diffraction theory of the Fresnel lens, to derive the equations of the Abbe characteristic curve and the partial dispersion characteristic curve, as below. Solution for coefficients of Cauchy's equation. In general, the refractive index n(λ) of transparent optical materials in the visible spectrum can be calculated by Cauchy's Eq. (16), where n(λ) is the refractive index depending on the light wavelength, λ is the wavelength of light in vacuum, and A, B, C are coefficients. Moreover, the dispersion of transparent optical materials can be defined with the Abbe number and partial dispersion, as below, where n d , n F , n C , and n g are the refractive indices of materials at the wavelengths of the Fraunhofer d, F, C, and g spectral lines (referring to the wavelengths λ d = 587.56 nm, λ F = 486.13 nm, λ C = 656.28 nm, and λ g = 435.83 nm). First, n d , n F , n C , n g and λ d , λ F , λ C , λ g are substituted into Eq. (16) to obtain the following equations, and the coefficients A, B, and C are solved as below. The coefficients A, B, C of Cauchy's equation in Eqs. (23)-(25) clearly present the refractive index as a function of n d , v d , P g,F , and the wavelengths of the Fraunhofer d, F, C, and g spectral lines. Derivation of Abbe characteristic curve. As mentioned above, Eq. (6) is a general equation of diffractive efficiency, while Eq. (10) defines the detuning factor α of the TM-DFL, which is a function of the refractive index difference Δn(λ). For all wavelengths in the visible spectrum, an FZ height satisfying Δn(λ)h = λ is the key to directing the diffraction energy of all wavelengths toward 100% diffractive efficiency at the diffracted order m = 1.
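Following the step just described, the Cauchy coefficients A, B, C can be obtained from (n d , v d , P g,F ) by solving a small linear system built from Eq. (16) at the d line together with the definitions of the Abbe number and partial dispersion. The sketch below is a straightforward numerical restatement of that step and does not reproduce the closed-form Eqs. (23)-(25).

```python
import numpy as np

# Fraunhofer wavelengths in micrometres (d, F, C, g lines).
L_D, L_F, L_C, L_G = 0.58756, 0.48613, 0.65628, 0.43583

def cauchy_coefficients(n_d, v_d, P_gF):
    """Solve A, B, C of n(lam) = A + B/lam^2 + C/lam^4 from n_d, v_d and P_g,F.

    Three linear conditions are used:
      n(L_D) = n_d
      n(L_F) - n(L_C) = (n_d - 1) / v_d          (Abbe number definition)
      n(L_G) - n(L_F) = P_gF * (n_d - 1) / v_d   (partial dispersion definition)
    """
    dn_FC = (n_d - 1.0) / v_d
    M = np.array([
        [1.0, L_D**-2,           L_D**-4],
        [0.0, L_F**-2 - L_C**-2, L_F**-4 - L_C**-4],
        [0.0, L_G**-2 - L_F**-2, L_G**-4 - L_F**-4],
    ])
    rhs = np.array([n_d, dn_FC, P_gF * dn_FC])
    return np.linalg.solve(M, rhs)     # returns (A, B, C)

def n_of_lambda(lam, coeffs):
    A, B, C = coeffs
    return A + B / lam**2 + C / lam**4

# Example: the Mat.1 parameters used later in the text (n_d = 1.8, v_d = 60, P_gF = 0.55).
coeffs1 = cauchy_coefficients(1.8, 60.0, 0.55)
print(coeffs1, n_of_lambda(L_D, coeffs1))   # second value should reproduce n_d = 1.8
```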
By substituting α = 1 and λ 0 = λ d into Eqs. (10) and (15), the Abbe characteristic curve formulas are obtained, where v d,1 is the Abbe number of Mat.1. Accordingly, both formulas (33) and (35) depict the same Abbe characteristic curves for the TM-DFL with the same calculated results. However, formula (35) provides a clearer view of how n d,2 is affected by n d,1 and v d,1 of Mat.1. More accurately, formula (35) can be considered a general formula of n d,2 as a function of v d,2 , n d,1 , v d,1 , λ F , λ C , and λ d . A general form of the function n d,2 is defined in Eq. (36). Unlike the conventional methods based on numerical computation, formulas (33) and (35) can accurately and immediately calculate and draw the Abbe characteristic curve in the Abbe diagram without the need for numerous numerical computations. The general behavior of n d,2 in Eq. (36) will be elaborated in the later discussion. Derivation of partial dispersion characteristic curve. According to the definition in Eq. (18), let the partial dispersion P g,F,2 of Mat.2 be defined as below, where n g,2 , n F,2 , and n C,2 are the refractive indices of Mat.2 at the wavelengths of the Fraunhofer g, F, and C spectral lines. Substituting Eq. (31) into Eq. (37) gives formula (38), where λ gF = λ g − λ F and n d,12 = n d,1 − n d,2 . Similarly, substituting Eq. (34) into Eq. (38) gives formula (39). Accordingly, both formulas (38) and (39) depict the same partial dispersion characteristic curves for the TM-DFL with the same calculated results. However, formula (39) provides a clearer view of how P g,F,2 is affected by n d,1 , v d,1 , and P g,F,1 of Mat.1. More accurately, formula (39) can be considered a general formula of P g,F,2 , since it is a function of n d,1 , v d,1 , P g,F,1 , n d,2 , v d,2 , λ F , λ g , λ d . A general form of the function P g,F,2 is defined as below: P g,F,2 ≡ P g,F,2 (n d,1 , v d,1 , P g,F,1 , n d,2 , v d,2 , λ d , λ g , λ F ) (40). Unlike the conventional methods based on numerical computation, formulas (38) and (39) can accurately and immediately calculate and draw the partial dispersion characteristic curve in the partial dispersion diagram without the need for numerous numerical computations. The general behavior of P g,F,2 in formula (39) will be elaborated in the later discussion. Usually, as mentioned previously, the analytical evaluation is a way to look into the general physical behavior of a system, such as n d,2 and P g,F,2 in formulas (36) and (40), by applying partial differentiation to the system with respect to each parameter, such as the optical material parameters of the TM-DFL, so as to understand the system responses Δn d,2 and ΔP g,F,2 , given as below. Clearly, it is not necessary to apply the same partial differentiation of n d,2 and P g,F,2 with respect to the light wavelengths λ d , λ F , λ C , and λ g , because there is no reason to change the definition of the Fraunhofer lines. In summary, in contrast to the conventional method 26 , formulas (33), (35), (38), and (39) presented in this study can obtain the Abbe and partial dispersion characteristic curves of the TM-DFL without numerous computations. Further, an analytical evaluation for getting more insight into the general physical behavior of the TM-DFL is elaborated below. Results Hereafter, based on an example of TM-DFL 26 , we first present a quantitative result in comparison with the one obtained by the conventional method. According to the example, the optical parameters of Mat.1 are first selected and set to n d,1 = 1.8, v d,1 = 60, and P g,F,1 = 0.55.
Then, by the numerical-computation-based mapping method, the Abbe and partial dispersion diagrams are produced to generate the Abbe and partial dispersion characteristic curves. Finally, the maximum achieved diffractive efficiency can be found at η = ~99.1%, n d,2 = 1.7, v d,2 = 18.4 for the Abbe characteristic curve, and η = ~99.9%, v d,2 = 15.2, P g,F,2 = 0.3 for the partial dispersion characteristic curve. Meanwhile, the Abbe characteristic curve is fitted by Eq. (11) to obtain the coefficients a = 1.839, b = 0.9919, c = 0.8782. In contrast, in our studies, the Abbe and partial dispersion characteristic curves are obtained by our formulas (35) and (39), respectively, as shown in Fig. 5a and b. There are two Abbe characteristic curves on the same Abbe diagram shown in Fig. 5a, where the green solid line is our result, guaranteed to give near-100% diffractive efficiency at each point on the curve, while the yellow dotted line is the fitting result of the above-mentioned research 26 . Apparently, the smaller v d,2 is, the larger the difference, which shows the qualitative difference between the two methods. Also, a quantitative difference of the mean value is calculated to be 0.0064. In the industrial measurement of the refractive index, this value of 0.0064 is large enough to be easily measured (note: the measurement precision of Abbe refractometers on the market is 0.0002). In other words, the mapping and fitting methods cause non-negligible errors. Regarding the partial dispersion characteristic curve depicted on the partial dispersion diagram in Fig. 5b, there is no way to make an analytical comparison, since no fitting data were provided by the above-mentioned research 26 . Discussions In summary, a theoretical formula-based analytical method is proposed in our studies to overcome the disadvantages of the numerical-computation-based mapping and fitting method. More definitely, the theory of Cauchy's equation, the Abbe number, partial dispersion, and the diffraction theory of the Fresnel lens are blended to optically connect two different nanocomposite materials in the TM-DFL for achieving near-100% diffractive efficiency at all wavelengths in the visible spectrum at the first diffracted order. In addition to perfectly matching the optical parameters between two materials without numerous computations, it also satisfies the objective of a general analysis of the TM-DFL in both quantitative and qualitative evaluations. The major features of the optical behavior of the TM-DFL in our study are elaborated below. Feature 1: the general behavior of n d,2 (v d,2 ): dependent on n d,1 and v d,1 only, but independent of P g,F,1 . As shown in Fig. 6a, the Abbe characteristic curves of Mat.2 in the Abbe diagram are calculated by formula (35) for n d,2 (v d,2 ) at n d,1 = 2.0, 1.9, 1.8, 1.7, 1.6, 1.5, and v d,1 = 50, 40, 30. When v d,2 is fixed, it shows one feature: the larger n d,1 , the larger n d,2 . When n d,1 is fixed, n d,2 (v d,2 ) splits into a subset of lines at v d,1 = 30, 40, 50, showing another feature: the smaller v d,1 , the larger n d,2 . Apparently, the Abbe characteristic curves of Mat.2 have nothing to do with P g,F,1 , because P g,F,1 is not included in formula (35). Feature 2: the general behavior of P g,F,2 (v d,2 ): dependent on P g,F,1 and v d,1 only, but independent of n d,1 . The partial dispersion characteristic curve of Mat.2 in the partial dispersion diagram is calculated by formula (39).
Despite n d,1 being an explicit parameter in formula (39), the final calculation is irrelevant to n d,1 . To analytically prove the independence of n d,1 , we first need to prove that ΔP g,F,2 /Δn d,1 = 0 for all Δn d,1 and ΔP g,F,1 = Δv d,1 = Δv d,2 = 0. By taking the partial derivatives of P g,F,2 in formula (39) with respect to n d,1 and n d,2 , the results shown in Fig. 6b and c, respectively, are obtained. Accordingly, P g,F,2 (v d,2 ) depends on P g,F,1 and v d,1 only and does not depend on n d,1 : P g,F,2 remains constant when n d,1 is varied over the wide range from 2 to 1.5. Let us move back to the partial dispersion characteristic curve P g,F,2 in Fig. 6b. From the previous discussion, we know that P g,F,2 depends on P g,F,1 and v d,1 only, but does not depend on n d,1 . Here, we prove one more feature, the linearity of P g,F,2 . Referring to Table 1, three columns, denoted P g,F,2 (i) as a shorthand for P g,F,2 (v d,2 (i)), show the data of P g,F,2 at each row i, calculated by formula (39) with the related material parameters n d,1 , v d,1 , P g,F,1 , n d,2 , v d,2 included in formula (39). Another three columns, denoted S(i), show the local slope directly calculated by Eq. (48) with the related material parameters n d,1 , v d,1 , P g,F,2 , n d,2 , v d,2 included in Eq. (48). Further, three more columns, denoted ΔP g,F,2 (i)/Δv d,2 (i), show the local slope obtained by directly dividing ΔP g,F,2 (i) = P g,F,2 (i) − P g,F,2 (i + 1) by Δv d,2 (i) = v d,2 (i) − v d,2 (i + 1). As a result, the quantitative calculations in Table 1 illustrate the linearity of P g,F,2 , since the equality ΔP g,F,2 (i)/Δv d,2 (i) = S(i) = constant is satisfied for all i = 1-19. Using the Cauchy coefficients obtained above, n 1 (λ) of Mat.1 and n 2 (λ) of Mat.2 can be calculated from the predetermined n d,1 , P g,F,1 , v d,1 and the calculated n d,2 (v d,2 ), P g,F,2 (v d,2 ), respectively. Then, the detuning factor α(λ) is calculated by substituting n 1 (λ) and n 2 (λ) into Eq. (10). Finally, according to Eqs. (6) and (8), the diffractive efficiency η m is calculated to reach 99.95% (corresponding to the term "near 100%" used in our research) at the diffracted order m = 1 and wavelengths from λ 1 = 400 nm to λ 2 = 700 nm. Regarding the remaining 0.05% of diffractive efficiency, it is reasonable to infer that this error is caused by the omission of the higher-order approximation terms in the Cauchy Eq. (16). The exact 100% diffractive efficiency can be obtained in the following way. Following the same treatment mentioned above, after n 1 (λ) of Mat.1 is obtained, the FZ height h is first calculated by substituting n d,1 and n d,2 into Eq. (30); then n 2 (λ) is obtained according to Eq. (31). Finally, repeating the same calculation gives α(λ) = 1 and η(λ) = 100% at all wavelengths in the visible spectrum. Feature 5: the optical behavior of convergence and divergence. In general, for the conventional DFL, the diffractive efficiency is determined by the FZ height h, the refractive index difference Δn, and the incident light wavelength λ, while the optical focusing power is determined by Δn and the curvature 1/R of the surface relief of the DFL. Let us take up the example used in Fig. 5a and b to further probe the focusing power of the TM-DFL.
As shown in both figures, the Abbe characteristic curve n d,2 (v d,2 ) and the partial dispersion characteristic curve P g,F,2 (v d,2 ) are calculated according to formulas (35) and (39), respectively. Also, following the previous treatment given in Feature 4 above, the FZ height h is plotted with respect to v d,2 , as shown in Fig. 7a, by employing Eq. (30) to match h with Δn(λ) at λ = λ d , i.e., h(v d,2 ) = λ d /(n d,1 − n d,2 (v d,2 )), where n d,1 ≡ n 1 (λ d ) and n d,2 ≡ n d,2 (v d,2 ), so as to guarantee near-100% diffractive efficiency at the first diffracted order for all wavelengths of the incident light in the visible spectrum. Interestingly, there exists a singularity of h at v d,2 = 60, where v d,2 = v d,1 and n d,2 = n d,1 . Consequently, the optical behavior of the TM-DFL is categorized into three regions as below. (1) Transparent region: as shown in Fig. 7a, the TM-DFL becomes optically transparent when v d,2 = v d,1 = 60 and n d,2 = n d,1 = 1.8. In other words, both the FZ height h and the focal length f 0 approach infinity when the TM-DFL is composed of two identical optical materials. (2) Focusing region: as shown in Fig. 7b and c, the TM-DFL has focusing power when v d,2 < v d,1 , h > 0, n d,2 < n d,1 , and P g,F,2 < P g,F,1 . (3) Divergent region: as shown in Fig. 7b and c, the TM-DFL has divergent power when v d,2 > v d,1 , h < 0, n d,2 > n d,1 , and P g,F,2 > P g,F,1 . Conclusions In our studies, we develop a theoretical framework to obtain a general formalism for the design of TM-DFL in broadband imaging applications. Unlike the existing approach of the numerical-computation-based methods of mapping and fitting, the optical theories related to Cauchy's equation, the Abbe number, and partial dispersion, as well as the diffraction theory of the Fresnel lens, have been perfectly blended into a new foundation for working out a TM-DFL with a precise material matching that can theoretically achieve near-100% diffractive efficiency. The derivation of the equations for calculating n d,2 (v d,2 ) and P g,F,2 (v d,2 ) is elaborated. Also, the physical behaviors of n d,2 (v d,2 ) and P g,F,2 (v d,2 ) are illustrated and proved, including (1) the independence of n d,2 (v d,2 ) from P g,F,1 , (2) the independence of P g,F,2 (v d,2 ) from n d,1 , (3) the linearity and constant slope of P g,F,2 (v d,2 ), (4) the theoretical error of about 0.05% in the calculation of diffractive efficiency, and (5) the optical behavior of convergence and divergence. We believe that our new approach will be an effective and precise way to achieve near-100% diffractive efficiency in the design of TM-DFL.
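The FZ-height relation used in Feature 5, h(v d,2 ) = λ d /(n d,1 − n d,2 (v d,2 )), is easy to explore numerically. The short sketch below scans n d,2 directly (rather than obtaining it from formula (35), which is not reproduced here) and reports the transparent, focusing, and divergent regimes around the singularity at n d,2 = n d,1 .

```python
import numpy as np

lam_d = 0.58756          # d-line wavelength, micrometres
n_d1 = 1.8               # Mat.1 index at the d line (example value from the text)

for n_d2 in np.linspace(1.5, 2.0, 11):
    dn = n_d1 - n_d2
    if abs(dn) < 1e-6:
        print(f"n_d2 = {n_d2:.2f}: dn = 0, h -> infinity (transparent region)")
        continue
    h = lam_d / dn       # FZ height giving a 2*pi phase step at the d line
    regime = "focusing (h > 0)" if h > 0 else "divergent (h < 0)"
    print(f"n_d2 = {n_d2:.2f}: h = {h:+8.2f} um, {regime}")
```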
7,644
2021-07-29T00:00:00.000
[ "Physics", "Engineering" ]
Dynamics Simulation and Experimental Investigation of Q-Switching in a Self-Mode-Locked Semiconductor Disk Laser Q-switching in a mode-locked laser not only makes the amplitude of every single output pulse unequal, but also limits the time width and peak power of output pulses. This paper investigates the Q-switching in a self-mode-locked semiconductor disk laser numerically and experimentally. By using the delay differential equations for passively mode-locking, conditions of Q-switching in a self-mode-locked semiconductor disk laser are numerically analyzed for the first time. Meanwhile, based on the experimental results, the causes of Q-switching tendency including the change of nonlinear refractive index and the change of soft aperture, are also discussed. Some possible measures to suppress Q-switching instability, i.e., to obtain stable continuous-wave mode-locking in a self-mode-locked semiconductor disk laser are proposed. Because of its intrinsic time characteristics of semiconductor carriers, SDLs are particularly suited for the generation of ultrashort pulses with pulse duration from picosecond to femtosecond. Since the first mode-locked SDL was reported [8], performances of such devices have been greatly improved [9], [10]. The maximum average output power has been increased to 6.4 W [11], the minimum pulse width has been decreased to 60 fs [12], and the repetition rate directly arisen from the oscillator has also been pushed to 101.2 GHz [13]. At the same time, many groups have done excellent researches on the mechanism of mode-locking in a SDL, including the effects of dispersion [14], the shaping of soliton-like pulse [15], [16], the nonequilibrium carrier dynamics and so on [17]. In addition to the above mentioned SDLs which were passively mode-locked using a semiconductor saturable absorption mirror (SESAM), researchers have found another method for mode-locking in SDLs, known as self-mode-locking (SML) [18], [19]. Some experiments of SML need to insert a slit or another Kerr medium in the laser resonator [20], [21], while other reports of SML do not need to place any additional elements [22], [23]. Currently, it is widely accepted that the SML phenomenon in a SDL originates from the Kerr effect of the semiconductor gain medium [24]. The Kerr effect can be expressed as n = n 0 +n 2 I, where n 0 is the refractive index without light, I is the light intensity, and n 2 is the nonlinear refractive index. It has been realized that the spatial distribution of light intensity (e.g., Gaussian distribution) will form a so-called Kerr-lens. Along with a soft or hard aperture, an equivalent saturable absorber can be produced. For SDLs, the inter-band and intra-band relaxation time of carriers are in the order of nanosecond and picosecond [25]. So, in the numerical analysis later in this paper, we would consider the Kerr effect along with an aperture in self-modelocked SDL as a SESAM. In view of that the carrier relaxation time in the order of nanoseconds can effectively suppress the Q-switching tendency, there is no in-depth study on the Q-switching phenomenon in mode-locked SDLs yet. However, in a passively mode-locked SDL, if the gain of laser is not sufficient, or, the modulation depth of the saturable absorber is relatively large, it will still cause undersaturation of the absorber and result in the Q-switching. This work is licensed under a Creative Commons Attribution 4.0 License. 
For more information, see https://creativecommons.org/licenses/by/4.0/ Particularly, for a self-mode-locked SDL which is initiated by the combined action of a Kerr-lens and a soft aperture, on one hand, the change of pump intensity will lead to the change of nonlinear refractive index of gain medium. (i.e., the change of focal length of the Kerr-lens). On the other, the size of soft aperture formed by the overlap of laser and pump spot is also very sensitive to the adjustment of laser cavity. Both of them may stimulate the Q-switching tendency in a self-mode-locked SDL. In this work, the Q-switching instability in a self-modelocked SDL is numerically simulated using the delay differential equations for passively mode-locking. Then, according to the experimental results, the Q-switching phenomenon in a self-mode-locked SDL is further discussed and analyzed. Some possible measures for suppressing Q-switching instability, i.e., for obtaining stable continuous-wave (CW) mode-locking, are also proposed. II. THEORETICAL MODEL AND NUMERICAL SIMULATIONS Dynamics of mode-locking can be well explained by Haus master equations [26]. This analytical approach is widely used because of its insight into the underlying mechanism of modelocking. However, because it's approximation of small change of the laser pulse shape within one round-trip, dynamics of some mode-locked lasers (such as fiber laser and semiconductor laser) can no longer be described satisfactorily by the master equation. Here we use another model proposed by A. G. Vladimirov et al for passively mode-locked semiconductor lasers [27], [28]. It includes a set of ordinary delay differential equations. This model avoids the approximations of small gain and loss per cavity round-trip and weak saturation, and is closer to actual condition of semiconductor laser devices. The delay differential equations (DDEs) are expressed as: where A is the electric field amplitude of laser, γ is the bandwidth of spectral filter, κ describes the total nonresonant linear intensity losses per cavity round-trip, α g and α q are the linewidth enhancement factors for gain medium and saturable absorber, respectively. g and q indicate the time dependent saturable gain and absorption, g 0 and q 0 denote the initial value of g and q, γ g and γ q are the relaxation rates in the gain and absorber, and s is the ratio of saturation energy of gain and absorber. It should be noted that the delay parameter τ equal to the cavity round-trip time, and all times in the above equations are normalized by the cavity round-trip time. In order to compare with the experimental results later, in the numerical simulation below, we choose a linear laser cavity. The resonator is composed of a distributed Bragg reflector (DBR) at the bottom of gain chip and an external flat-concave end mirror with a radius of curvature of 150 mm. The high-reflection coated end mirror has a reflectivity of R 1 = 99.9% at laser wavelength 980 nm, and the reflectivity of DBR for laser wavelength is about R 2 = 99%. In the above delay differential equations, κ is defined as the intensity transmission of output mirror, i.e., the fraction of power remaining in the cavity after each round-trip. So, we have κ= √ R 1 R 2 = 0.9945. The active region of gain chip in our experiment is consisted of 15 InGaAs/GaAs multiple quantum wells that designed for a target emitting wavelength of 980 nm. 
Like reference [14], the value of gain bandwidth can be selected to be 10 nm and the corresponding bandwidth of spectral filter to be γ = 200. The cavity length used in experiment is about 140 mm, resulting in a cavity round-trip time of τ r ∼1 ns. Considering that the typical value of gain recovery time can be chosen as 10 ns, and it should be normalized by τ r as mentioned before, the relaxation rate of gain will be γ g = 0.1. As for the value of g 0 , it is determined through referring to the similar situation in literatures [27], [28] and [29]. Selection of the parameters of saturable absorber is a little more complicated, because there is no a real saturable absorber in a self-mode-locked SDL, but an equivalent saturable absorber composed of the Kerr-lens and the soft aperture. By using the transmission formula of light field and the split-step Fourier method, we calculate the difference of intra-cavity round-trip loss of laser with and without Kerr-lens (see Fig. 5). Then the loss difference is regarded as the modulation depth of the equivalent saturable absorber, and parameter q 0 in the delay differential equations is determined. It can be said that SML in a SDL depends on the equivalent saturable absorber, which relies on the Kerr-lens, i.e., the nonlinear refractive index. Since the nonlinear refractive index n 2 in semiconductor gain medium is caused by carrier density, and the typical value of recovery time of carrier is about 10 ps. Therefore, the recovery time of the equivalent saturable absorber is chosen to be 10 ps. After normalized by the cavity round-trip time of 1 ns, the value of relaxation rate in absorber is γ q = 100. Parameter s is defined as s = E sat,g /E sat,q , and the saturation energy of gain is selected to be E sat,g = 50 nJ (corresponding to 100 μm spot radius) [14]. Considering that the equivalent saturable absorber is consisted of the Kerr-lens and a soft aperture, the parameter E sat,q in this paper is estimated using the critical power of Kerr effect, which can be expressed as [30], [31] For λ = 980 nm, n 0 = 3.5, n 2 = 1×10 −16 m 2 /W [32], [33], and pulse width of 5 ps, the above P cr corresponds to a saturation energy of about 2 nJ, so the estimated value of the saturation parameter is s = 25. All parameters used in the simulations below are summarized and listed in Table I. Linewidth enhancement factors for gain medium and saturable absorber are not included in this work. The parameter g 0 will increase with increased pump intensity. q 0 will also change with the various pump intensity, because pump intensity can change the nonlinear refractive index of gain medium, thus change the focal length of Kerr-lens and have an influence on the value of q 0 . In addition, q 0 will be obviously affected by the radium of soft aperture. Except g 0 and q 0 , other parameters remain a fixed value in the numerical simulation. We numerically solve equations (1)-(3) using the famous Runge-Kutta method. Fig. 1 shows the time-dependent field amplitude A, gain g and absorption q when g 0 and q 0 are chosen to be 2.0 and 0.5, respectively. Because the time has been normalized by the cavity round-trip time τ r in the equations, the unit of time axis in figure is τ r . We can see in Fig. 1(a)(containing 100 intracavity cycles) that the pulse becomes steady and reaches a stable CW mode-locking after about 60 round-trips. Fig. 1(b) is an enlargement of Fig. 1(a). Fig. 1(c) clearly shows the process of pulse formation. 
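Most of the normalized parameters quoted in the preceding paragraphs follow from the stated physical values by short arithmetic, sketched below. The critical-power expression used (the common Gaussian-beam self-focusing form with prefactor 3.77/8π) is an assumption, since the exact formula from references [30] and [31] is not reproduced in this excerpt; with it, the numbers come out close to the quoted ~2 nJ absorber saturation energy and s = 25.

```python
# Back-of-the-envelope derivation of the normalized DDE parameters quoted above.
# All input numbers come from the text; the critical-power prefactor (3.77/8*pi,
# the common Gaussian-beam form) is an assumption of this sketch.
import math

c = 3.0e8                      # speed of light, m/s

# Cavity and loss parameters
R1, R2 = 0.999, 0.99           # end-mirror and DBR reflectivities
kappa = math.sqrt(R1 * R2)     # fraction surviving one round trip -> ~0.9945
L_cav = 0.140                  # cavity length, m
tau_rt = 2 * L_cav / c         # round-trip time -> ~0.93 ns (quoted as ~1 ns)

# Relaxation rates normalized by the round-trip time (taken as 1 ns in the text)
tau_r = 1e-9
gamma_g = tau_r / 10e-9        # gain recovery 10 ns   -> 0.1
gamma_q = tau_r / 10e-12       # absorber recovery 10 ps -> 100

# Saturation-energy ratio s = E_sat,g / E_sat,q
lam, n0, n2 = 980e-9, 3.5, 1e-16                 # wavelength, linear and nonlinear index
P_cr = 3.77 * lam**2 / (8 * math.pi * n0 * n2)   # critical power for self-focusing, W
E_sat_q = P_cr * 5e-12         # 5 ps pulse width -> ~2 nJ
E_sat_g = 50e-9                # gain saturation energy, 50 nJ
s = E_sat_g / E_sat_q          # -> ~25

print(f"kappa   = {kappa:.4f}")
print(f"tau_rt  = {tau_rt * 1e9:.2f} ns")
print(f"gamma_g = {gamma_g:.2f}, gamma_q = {gamma_q:.0f}")
print(f"P_cr    = {P_cr:.0f} W, E_sat_q = {E_sat_q * 1e9:.1f} nJ, s = {s:.0f}")
```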
At a certain time, the absorption of the absorber begins to decline due to saturation. When the absorption drops below the gain, the net gain window opens and the pulse shape begins. Then the gain also gradually tends to saturation and begins to decrease. Once the gain is equal to the absorption, the amplitude of the pulse reaches the maximum value. This is followed by the respective recovery of absorber and gain medium, as well as the similar formation process of the next pulse. According to the research results reported by C. Honninger et al, the stability condition against Q-switching in a passively mode-locked laser can be express as [34] where E p is pulse energy and ΔR is modulation depth. Obviously, for given E sat,g and E sat,q , excessive ΔR or relatively small E p will trigger unwanted Q-switching. For SML described by equations (1)-(3), the influence of g 0 on Q-switching instability is similar to that of E sat,g , and the effect of q 0 is same as that of ΔR. Fig. 2 shows the calculated pulse evolution in a self-mode-locked SDL with q 0 = 1.5 and various g 0 from 1.8 to 2.6. The insets in the upper left corner are the calculation results on a larger time scale, which can show whether the pulse reaches a stable state more accurately. As can be seen from Fig. 2(a), the pulse reaches a stable Q-switched mode-locking after about 500 round-trips, and the period of Q-switching envelope is about 55 τ r . In Fig. 2(b), the time required to reach stable Q-switched mode-locking is shorter (about 150 round-trips), and the period of Q-switching envelope is shorter too (about 40 τ r ). It takes about 100 and 50 round-trips to come to steady Q-switching in Fig. 2(c) and (d), respectively. In general, from (a) to (d), with the gradually increased g 0 , the time required for establishing stable Q-switched mode-locking becomes shorter and shorter, and the depth of Q-switching envelope turns smaller and smaller. In other word, the Q-switching gradually disappears with increased g 0 . In Fig. 2(e), we can see that about 80 intracavity round-trips later, the pulses have developed into an ideal CW mode-locked pulse train. Fig. 2(f) shows the time-dependent A, g and q when the g 0 and q 0 are 1.8 and 1.5, respectively. As shown in Fig. 2(f), Q-switching is originated from the undersaturation of the absorber in the development of mode-locking. When the absorber is partially saturated, the gain begins to saturate immediately after it. Then, the net gain window is formed and the pulse is established. However, the amplitude of the net gain window is small at this time, and the corresponding pulse intensity is not big. As pump continuing, the peak value of established pulse will be higher and higher. Only when the absorber is fully saturated, the amplitude of net gain window reaches the maximum value and the pulse peak gets the highest magnitude. The time required for absorber from partial saturation to full saturation determines the time width of Q-switching, which obviously decreases with increased g 0 . It is also clear that the undersaturation of absorber is resulted from the relatively small g 0 , or relatively weak pumping intensity. This has been fully illustrated by the fact that the Q-switching in Fig. 2(a)-(e) gradually disappears with increased g 0 . In the DDE model, g and q describe the saturable gain and loss in the cavity, respectively, and κ describes the total nonresonant linear intensity loss in a round-trip. Thus, the threshold condition for lasing is given by g = q-ln(κ). 
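The delay differential equations themselves were lost in this excerpt; the sketch below integrates the standard form of the Vladimirov-Turaev DDE model cited above (with the linewidth enhancement factors set to zero, as the text states) using a simple fixed-step Euler scheme and a history buffer for the delayed quantities. It is meant only to show how time traces like those in Figs. 1 and 2 can be generated; the written form of the equations, the step size, and the noise seed are assumptions of this sketch rather than details taken from the paper.

```python
# Qualitative integration of the passively mode-locked DDE model (standard
# Vladimirov-Turaev form, alpha factors set to zero as stated in the text):
#   dA/dt = gamma * ( -A(t) + sqrt(kappa) * exp[(g(t-tau) - q(t-tau))/2] * A(t-tau) )
#   dg/dt = gamma_g*(g0 - g) - exp(-q)*(exp(g) - 1)*|A|^2
#   dq/dt = gamma_q*(q0 - q) - s*(1 - exp(-q))*|A|^2
# Time is normalized to the round-trip time (tau = 1); Euler stepping with a
# history buffer handles the delayed terms.
import numpy as np

gamma, kappa = 200.0, 0.9945
gamma_g, gamma_q, s = 0.1, 100.0, 25.0
g0, q0 = 2.0, 0.5                      # operating point of Fig. 1 in the text
tau, dt = 1.0, 5e-4
n_delay = int(round(tau / dt))
n_total = 100 * n_delay + n_delay      # 100 round trips plus pre-history

rng = np.random.default_rng(0)
A = np.zeros(n_total)                  # real field amplitude (alpha factors = 0)
g = np.full(n_total, g0)
q = np.full(n_total, q0)
A[:n_delay] = 1e-3 * rng.standard_normal(n_delay)   # weak noise as initial history

for i in range(n_delay, n_total):
    j = i - n_delay                    # index of the delayed quantities (t - tau)
    drive = np.sqrt(kappa) * np.exp(0.5 * (g[j] - q[j])) * A[j]
    P = A[i - 1] ** 2                  # instantaneous intracavity power |A|^2
    A[i] = A[i - 1] + dt * gamma * (drive - A[i - 1])
    g[i] = g[i - 1] + dt * (gamma_g * (g0 - g[i - 1])
                            - np.exp(-q[i - 1]) * (np.exp(g[i - 1]) - 1.0) * P)
    q[i] = q[i - 1] + dt * (gamma_q * (q0 - q[i - 1])
                            - s * (1.0 - np.exp(-q[i - 1])) * P)

power = A[n_delay:] ** 2
print("peak intracavity power over the final 10 round trips:",
      float(power[-10 * n_delay:].max()))
```

Sweeping g0 and q0 in this sketch is the natural way to reproduce the kind of Q-switched versus CW mode-locked behavior mapped in Figs. 2 and 3.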
We plot the calculated Qswitching, the CW mode-locking and the lasing thresholds with various (q 0 , g 0 ) and fixed (κ, γ, γ g , γ q , s) in Fig. 3. It can be concluded from Fig. 3 that for a self-mode-locked SDL using Kerr effect, Q-switching will exist in a certain range between the CW operation and CW mode-locking. For a given g 0 , larger the q 0 is, wider the Q-switching region will be. III. EXPERIMENTAL RESULTS AND DISCUSSIONS The gain chip used in experiment is epitaxially grown in reverse sequences as: the AlGaAs etch stop layer with high Al Fig. 3. Computed Q-switching threshold, CW mode-locking threshold and the lasing threshold with various (q 0 , g 0 ) and fixed (κ, γ, γ g , γ q , s). composition, the GaAs protect layer, the AlGaAs window layer with high barrier, the active region, the DBR and the antioxidant GaAs cap layer. There are 15 InGaAs/GaAsP quantum wells in the active region. The content of In in InGaAs well layer is designed to meet the target laser wavelength of 980 nm, and the content of P in GaAsP barrier layer must be adequate to compensate the strain but not too much to absorb the pumping energy. The DBR is composed of 30 pairs alternate AlGaAs layers with high Al and low Al composition, and the designed center wavelength and high-reflectivity bandwidth of it are 980 nm and 100 nm respectively. According to the test data given by manufacturer, the reflectivity of DBR at 980 nm wavelength is about R 2 = 99%. When the grown wafer is split to small chips with 4 mm×4 mm dimension, the epitaxial end face is metalized with titaniumplatinum-aurum sequentially. Then the chip is bonded to a copper heatsink, and the substrate is removed using chemical etch. A high-reflectivity coated (for 980 nm wavelength) plane-concave mirror with 150 mm curvature radius is employed as the end mirror to form a linear cavity, and the reflectivity of the end mirror at 980 nm wavelength is R 1 = 99.9%. The pump source is a 808 nm fiber-coupled semiconductor diode laser with 30 W output power, and the core diameter of its pigtail fiber is 200 μm. Fig. 4 shows the experimental diagram. SML can be achieved by introducing a saturable absorber, which is formed by the combination of the Kerr medium plus a soft aperture. In experiment, two key conditions are required to start the SML. Firstly, the intracavity light with high enough circulating power is needed to ensure that there are desired noises with high enough peak power to generate the primary Kerr effect. Secondly, an aperture with appropriate size is necessary to provide a certain difference between the cavity loss of modelocking and continuous operation, so to initiate and stabilize mode-locking. Among them, the enhancement of intracavity circulating optical power can be easily realized by increasing the pump intensity. In practice, more care should be taken to choose an aperture with proper size. In order to estimate the appropriate size of soft aperture, we firstly calculated the nonlinear refractive index n 2 of gain medium under different pump spot sizes (keeping the pump power a constant). Data of nonlinear refractive index n 2 under various pump power density provided by reference [32] are used in the calculation, and the simulated results are shown by yellow dotted line in Fig. 5. Then, the focal length of Kerr-lens is carried out based on the formula where ω is the radium of light spot, I p is the peak light intensity, and L is the length of Kerr medium. 
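The focal-length formula itself is missing from this excerpt; the sketch below uses a common thin-lens approximation for a Gaussian beam, f ≈ ω²/(4 n₂ I p L), so it should be read as an illustration of the dependence on ω, I p , and L rather than as the paper's exact expression. The spot radius, peak power, and Kerr-medium length are representative numbers, not values taken from Fig. 5.

```python
# Order-of-magnitude estimate of the Kerr-lens focal length using the common
# Gaussian-beam thin-lens approximation f ~ w**2 / (4 * n2 * I_p * L).
# The paper's own formula is not reproduced here, and the spot size, peak power
# and medium length below are representative values only.
import math

def kerr_focal_length(w: float, peak_power: float, n2: float, L: float) -> float:
    """Thin Kerr-lens focal length for a Gaussian beam of 1/e^2 radius w."""
    I_p = 2.0 * peak_power / (math.pi * w**2)   # on-axis peak intensity, W/m^2
    return w**2 / (4.0 * abs(n2) * I_p * L)

f = kerr_focal_length(
    w=120e-6,          # ~240 um laser spot diameter on the gain chip
    peak_power=50e3,   # 50 kW intracavity peak power, as assumed in the text
    n2=1e-16,          # m^2/W, magnitude quoted from ref. [32]
    L=2e-6,            # assumed active-region thickness of a few micrometres
)
print(f"estimated Kerr-lens focal length: {f:.1f} m")
```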
The calculated focal length f of Kerr-lens can be seen from the blue solid line in Fig. 5. Finally, by using the split-step Fourier method, the amplitude of the light field is computed when it completes a round-trip in the cavity, and the difference of the round-trip loss experienced by the laser beam with and without Kerr-lens is carried out, as shown by the green solid line in Fig. 5. In the calculation, the length of Kerr medium is equal to the thickness of active region in gain chip, the pump power is set to be 10 W, and the value of pulse peak power is 50 kW. As mentioned before, the transmittance of output coupler is 99.9%. Supposing a pulse width of 5 ps and a pulse repetition rate of 1 GHz, the above 50 kW peak power corresponds to an intracavity circulating power of 250 W and an output power of 250 mW, which is roughly consistent with the reality of our experiment. Obviously, a smaller pump spot is favorable for a focused beam because it will introduce more loss to an ordinary beam and provide larger difference between the intracavity loss with and without Kerr-lens. However, a smaller pump spot also means a higher pump power density. When the pump power density is increased, n 2 will change from negative to positive, and its absolute value will firstly decrease and then increase [32]. This will result in a corresponding change of the focal length of Kerrlens. So, as shown in Fig. 5 that the difference of round-trip loss with and without Kerr-lens has a peak value when the pump spot diameter is about 220 μm, whose pump density corresponds to a n 2 about −1.0 × 10 −16 m 2 /W. Further reducing of the pump spot will lead to a gradual decrease of the loss difference. In our experiment, the cavity length is set to be about 140 mm, corresponding to a 240 μm diameter laser spot on gain chip. To achieve better transverse mode and stability of mode-locking, the laser cavity can be adjusted slightly around 140 mm. We use a 1:1.3 imagine lens pair to collimate and focus the pump beam on gain chip, and the pump spot size can be finely tuned by slightly changing the imagine distance. The soft aperture is formed by the overlap of pump and laser spot on gain chip. A Thorlabs DET08C InGaAs Biased Detector (with 5 GHz bandwidth, 70 ps rise time, 110 ps fall time, and 800-1700 nmw wavelength range) is used to receive the mode-locked pulse train, and the signal is then sent to a Tektronix MSO68B 6 Series Mixed Signal Oscilloscope (with 10 GHz bandwidth and 50 Gs/s sampling rate) for observation. The electric signal is also sent to a Rigol DSA875 spectrum analyzer (with 7.5 GHz bandwidth and 100 Hz-1 MHz resolution bandwidth) to record the repetition rate of pulse train. The time width of the mode-locked pulse is measured using a Femtochrome FR-103XL autocorrelator (with >4 Hz repetition rate, >175 ps scan range and <5 fs resolution ratio), and the laser spectrum is obtained from an Ocean Optics MAYA2000PRO-NIR spectrometer (with 780-1180 nm wavelength range and 0.18 nm resolution ratio). It has been found that the pump threshold of laser is about 0.522 W. When the pump power is increased to 1.036 W, we have the Q-switched mode-locking in the laser, and the output pulse train is show in Fig. 6(a). The period of Q-switching envelope decreases from 27 ns in Fig. 6(a) to 9 ns in Fig. 6(b) when the pump power is increased from 1.036 to 2.000 W. Stable CW mode-locking can be produced when the pump power reaches 2.126 W, as shown in Fig. 6(c). 
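The transition from Q-switched to CW mode-locking with increasing pump power observed here is the behavior expected from the stability criterion of C. Hönninger et al. cited earlier, usually quoted as E p ² > E sat,g · E sat,A · ΔR. The sketch below evaluates the critical intracavity pulse energy for the saturation energies used in the simulations; the modulation-depth values are assumed for illustration, since the actual ΔR of the equivalent saturable absorber is set by the Kerr-lens and soft-aperture combination.

```python
# Critical intracavity pulse energy from the Q-switching stability criterion
# commonly attributed to Hoenninger et al. [34]: E_p**2 > E_sat_g * E_sat_a * dR.
# E_sat_g and E_sat_a follow the simulation parameters above; the modulation
# depths dR are assumed values for illustration only.
E_sat_g = 50e-9      # gain saturation energy, J
E_sat_a = 2e-9       # equivalent-absorber saturation energy, J

for dR in (0.001, 0.005, 0.02):
    E_crit = (E_sat_g * E_sat_a * dR) ** 0.5
    print(f"dR = {dR:.3f}: pulse energy must exceed ~{E_crit * 1e9:.2f} nJ "
          "for stable CW mode-locking")
```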
It can be concluded that with higher pump power, the period of Q-switching envelope becomes smaller, until it finally transitions to the ideal CW mode-locking, and this is consistent with the previous numerical analysis. A close view of the pulse train in Fig. 6(b) is redrawn in Fig. 7(a), and its corresponding radio frequency (RF) spectrum is shown in Fig. 7(b). The periods of Q-switching and modelocking in Fig. 7(a) are approximately T Q = 9 ns and T ML = 1 ns, and their corresponding RF spectrum are f Q = 0.11 GHz and f ML = 1.03 GHz, respectively. f Q is mainly affected by pumping intensity, while f ML is only determined by cavity length. In Fig. 7(b), the first signal f Q = 0.11 GHz is obviously arisen from the period of envelope of Q-switching, which is about 9 ns. The third signal f ML = 1.03 GHz certainly corresponds to the period of mode-locked pulse train and is in coordination with 145 mm cavity length. For the signals on the left and right sides of f ML , the frequency intervals between them and f ML are strictly equal to f Q . So, it can be concluded that they are the difference-frequency and sum-frequency signals of f ML and f Q . Fig. 7(b) also shows the second harmonic of f ML , and the difference-frequency and sum-frequency signals of 2 f ML and f Q . It should be noted that reference [35] mentioned a RF spectrum similar as in Fig. 7(b). However, the authors believed that those subpeaks beside f ML and 2 f ML should be attributed to the frequency spacing of transverse modes. Slowly increasing the pump power and slightly adjusting the cavity length and pump spot size can weaken the Q-switching gradually and make it disappear at last. That is, the laser will transit to a stable CW mode-locking state as described before. Fig. 8 shows the RF spectrum and autocorrelation measurement of the CW mode-locked pulse train with 20°C temperature and 5 W pump power. In Fig. 8(a), the frequency spectrum of Qswitched signal f Q and its difference-and sum-frequency signals with mode-locked signal f ML are completely disappeared. The amplitude of f ML is also significantly increased with a signalto-noise ratio exceeding 40 dB, indicating an ideal CW modelocking. The fundamental frequency of 1.1 GHz in Fig. 8(a) corresponds a cavity length about 140 mm, and higher harmonics up to fourth harmonics are also shown. As can be seen from the autocorrelation trace of the output pulses in Fig. 8(b), the duration of mode-locked pulse is about 5.0 ps by using Gaussian fit. Along with the 1.3 nm spectral width of laser shown in the right side, it can be estimated that the time-bandwidth product of CW mode-locked pulse is approximately 1.5, over three times of 0.441 (the value of Fourier-transform-limited Gaussian pulse). This means an obvious frequency chirp is included in laser pulse. Since there are no other elements that could cause the frequency chirp in the laser cavity, the above frequency chirp clearly comes from the semiconductor gain medium. It should be noted that in a semiconductor gain medium, Kerr effect exists in both spatial and temporal domains. Kerr effect caused by the spatial distribution of light intensity (e.g., Gaussian distribution) will form a so-called Kerr-lens, and can start the self-mode-locking. On the other hand, the Kerr effect caused by a time-varying light intensity I(t) of pulse will result in a time-varying nonlinear refractive index n(t) = n 0 +n 2 I(t). 
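The time-bandwidth product of about 1.5 quoted above can be checked quickly from the 1.3 nm spectral width and the 5.0 ps Gaussian figure; the sketch below does the arithmetic both with 5.0 ps taken directly as the pulse width and with the 1/√2 deconvolution factor for a Gaussian autocorrelation, which is an assumption about how the duration was extracted. The deconvolved case comes out near 1.4, close to the quoted value.

```python
# Time-bandwidth product check for the CW mode-locked pulse described above.
# Delta-lambda = 1.3 nm near 980 nm and the 5.0 ps Gaussian figure are taken
# from the text; the 1/sqrt(2) deconvolution factor is an assumption.
c = 3.0e8
lam, dlam = 980e-9, 1.3e-9
dnu = c * dlam / lam**2                 # spectral width in Hz, ~4.1e11

for label, tp in (("5.0 ps taken as the pulse width", 5.0e-12),
                  ("5.0 ps taken as the autocorrelation width", 5.0e-12 / 2**0.5)):
    print(f"{label}: TBP = {dnu * tp:.2f}  (transform limit 0.441)")
```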
This time-varying nonlinear refractive index will change the phase of pulse and produce new frequency components, thus generate frequency chirp. With condition of 20°C temperature and about 220 μm pump spot diameter, output powers of the self-mode-locked laser under various pump powers are shown in Fig. 9. It can be seen from Fig. 9 that the lasing threshold, the Q-switching threshold, and the CW mode-locking threshold are 0.522 W, 1.036 W and 2.126 W, respectively. The output power increases almost linearly with the risen pump power, until it begins to decrease when the pump power is beyond 6.755 W. The linear fit shows a slope efficiency (SE) of about 12.2% of the laser, while the maximum output power is 0.716 W, and the maximum optical-to-optical efficiency is about 11.4%. Fig. 10 shows the stabilities of output power under CW (435 mW) and Q-switched (140 mW) mode-locking. The values of output power are recorded every 10 minutes, and the measurement time is 4 hours. It can be seen from Fig. 9 that for both CW and Q-switch mode-locking, the values of standard deviation are relatively small, and the stabilities of output power are also relatively satisfactory. IV. CONCLUSION In summary, we have investigated the Q-switching in a selfmode-locked SDL numerically and experimentally. By solving the DDEs for passively mode-locking, it has been found that the undersaturation of saturable absorber is the direct cause of Q-switching. More specifically, relatively large modulation depth of saturable absorber or relatively small pump intensity are the main reasons for Q-switching in a self-mode-locked SDL. The period and the depth of Q-switching envelope will decrease with increased pump intensity until the laser finally transitions to CW mode-locking. In the theoretical calculation, on one hand, a smaller pump spot is favorable for a focused beam because it will introduce more loss to an ordinary beam and provide larger difference between the intracavity loss with and without Kerr-lens. On the other, a smaller pump spot also means a higher pump power density and will result in a corresponding change of the focal length of Kerr-lens. So, the difference of round-trip loss with and without Kerr-lens has a peak value. A proper pump spot with diameter neither too large nor too small should be selected carefully to start the mode-locking. We have experimentally verified the above conclusions, and the achieved self-mode-locked laser has a maximum output power of 0.716 W and a slope efficiency of about 12.2%
6,393.8
2022-10-01T00:00:00.000
[ "Physics" ]
Peptides Derived from (RRWQWRMKKLG)2-K-Ahx Induce Selective Cellular Death in Breast Cancer Cell Lines through Apoptotic Pathway The effect on the cytotoxicity against breast cancer cell lines of the substitution of 26Met residue in the sequence of the Bovine Lactoferricin-derived dimeric peptide LfcinB (20-30)2: (20RRWQWRMKKLG30)2-K-Ahx with amino acids of different polarity was evaluated. The process of the synthesis of the LfcinB (20-30)2 analog peptides was similar to the original peptide. The cytotoxic assays showed that some analog peptides exhibited a significant cytotoxic effect against breast cancer cell lines HTB-132 and MCF-7, suggesting that the substitution of the Met with amino acids of a hydrophobic nature drastically enhances its cytotoxicity against HTB-132 and MCF-7 cells, reaching IC50 values up to 6 µM. In addition, these peptides have a selective effect, since they exhibit a lower cytotoxic effect on the non-tumorigenic cell line MCF-12. Interestingly, the cytotoxic effect is fast (90 min) and is maintained for up to 48 h. Additionally, through flow cytometry, it was found that the obtained dimeric peptides generate cell death through the apoptosis pathway and do not compromise the integrity of the cytoplasmic membrane, and there are intrinsic apoptotic events involved. These results show that the obtained peptides are extremely promising molecules for the future development of drugs for use against breast cancer. Introduction Cancer is considered to be the biggest public health problem worldwide [1], being the second most common cause of death in the world. In 2018, 8.2 million people died from this disease, and 14 million new cases are reported annually [2], with more than 60% occurring in Asia, Africa, and South America. In 2012, the most frequently diagnosed cancers in women were breast, colon, rectum, lung, cervix, and stomach, while for men they were lung, prostate, colon, rectum, stomach, and liver [3]. Breast cancer is the most common cancer type diagnosed worldwide [4]. There are 2.1 million cases annually, and 627,000 deaths of women due to this disease were reported in 2018 [5]. There are therapeutic options such as chemotherapy [6], radiotherapy [7], hormone therapy [8], and surgery [9] that have managed to mitigate this disease; however, these are invasive procedures with serious adverse effects that significantly affect the quality of life of the patients [10]. It is imperative to find new, more selective and less invasive therapeutic agents. In this context, several peptides derived from Bovine Lactoferricin (LfcinB) have been shown to have a selective cytotoxic effect against various cancer cell lines [11][12][13][14]. Some studies have shown that LfcinB-derived peptides such as dimers and tetramers containing the minimal motif RRWQWR exhibit a cytotoxic effect on oral squamous and breast cancer cell lines with IC 50 values between 5 and 15 µM and do not exhibit a cytotoxic effect against non-tumorigenic cell lines such as PCS-201-012 and Het-1A [15,16]. The tetrameric peptide LfcinB (20)(21)(22)(23)(24)(25) 4 : (( 20 RRWQWR 25 ) 2 -K-Ahx-C) 2 has exhibited a selective cytotoxic effect against MDA-MB 468, MDA-MB 231, and MCF-7 breast cancer cells and also induces cell death in MCF-7 cells through the apoptosis pathway [16,17]. Additionally, the dimeric peptide LfcinB (20-30) 2 : ( 20 RRWQWRMKKLG 30 ) 2 -K-Ahx has exhibited a significant selective cytotoxic effect against MDA-MB 468 and MDA-MB 231 breast cancer cells. 
The cytotoxic effect against both cell lines was near 100% (cellular viability~0%) when the peptide concentration was 100 µg/mL (30 µM), suggesting that this peptide could be considered to be promising for the development of therapeutic agents for treating breast cancer. The cytotoxic effect of analog peptides against breast cancer cell lines HTB-132 and MCF-7 and the non-tumorigenic cell line MCF-12 was evaluated. It was possible to identify analog peptides with higher selective cytotoxic effect against the breast cancer cell lines evaluated, these being peptides considered to be promising molecules for developing drugs for use against this type of cancer. From the starting peptide 26 [M], five new analog peptides were synthesized, in which the 26 Met amino acid of the sequence was replaced by amino acids of a different nature: (i) Lys (basic residue), (ii) Asp (acid side chain), (iii) Ala and Leu (aliphatic side chains), and (iv) Phe (aromatic side chain). All the peptides were synthesized following the manual synthesis protocol established in our laboratory. The synthesis of analog peptides was similar to that of the original dimeric peptide 26 [M], indicating that the substitution of amino acids does not affect the synthesis of the peptides. The peptides were obtained with high purity and had the expected mass, which was determined via MALDI-TOF MS ( Table 1). The cytotoxic effect of the peptide 26 [M] against breast cancer cell lines HTB-132 and MCF-7 was evaluated ( Figure 1). The HTB-132 cell line corresponds to triple-negative breast cancer that is one of the most aggressive types [18], and the MCF-7 cell line is a hormone-responsive cancer from a luminal type A breast cancer, which is the most diagnosed worldwide [19]. As is seen in Figure 1A, the peptide 26 [M] exhibited a concentration-dependent cytotoxic effect against the HTB-132 cell line in the 1-100 µg/mL range of concentration, reaching a cytotoxic effect close to 55% at the maximum evaluated concentration. While the peptide 26 [M] does not exhibit a significant cytotoxic effect against the MCF-7 cell line ( Figure 1D), this behavior is consistent with other studies, where the MCF-7 line has been found to be less susceptible to treatments with LfcinB-derived peptides and to drugs such as doxorubicin [20]. has been found to be less susceptible to treatments with LfcinB-derived peptides and to drugs such as doxorubicin [20]. a Purity was calculated from the purified dimer chromatographic profile, Figure S3. The cytotoxic effect of analog dimeric peptides against the HTB-132 cell line was evaluated ( Figure 1A-C). In a manner similar to peptide 26 [M], its analogs exhibited a concentration-dependent cytotoxic effect against HTB-132 cells. All the analog peptides exhibited a significant cytotoxic effect against this cell line; however, the 26 [D] and 26 [K] peptides in which 26 Met was replaced by charged amino acids (Asp and Lys, respectively) exhibited a lower cytotoxic effect against HTB-132 cells than the original dimeric peptide ( Figure 1A). On the other hand, the 26 [L] and 26 [F] peptides exhibited a greater cytotoxic effect against cancer cells than the original peptide 26 [M], indicating that when the Met was replaced with an amino acid containing a side chain of a hydrophobic-aliphatic or aromatic nature, the cytotoxic effect increased ( Figure 1B). Figure 1C shows the cytotoxic effect of the peptides at 100 μg/mL. The analog peptides 26 [L] and 26 [F] exhibited the highest cytotoxic effect against HTB-132 cells. 
The maximum cytotoxic effect was observed when the peptide concentration was 200 μg/mL. Peptides 26 [L] and 26 [F] decreased cell viability to 17% and 11%, respectively. According to our results, peptides 26 [L], 26 [A], and 26 [F] were chosen for evaluating their cytotoxic effect against the MCF-7 cell line ( Figure 1D-F). As can be seen, these three peptides had a greater cytotoxic effect against MCF-7 cells than the peptide 26 [M], which is similar to the cytotoxic effect of these peptides against HTB-132 cells. These peptides exhibited the same pattern of cytotoxic effect against both cell lines: the cytotoxic effect of peptides increased in the following order: 26 26 [F] exhibited the highest cytotoxic effect against both cell lines. MCF-7 cells were more susceptible to analog peptides than HTB-132 cells, while the original peptide did not exhibit a cytotoxic effect against MCF-7 cells ( Figure 1D). The IC50 values of peptide 26 [F] for HTB-132 and MCF-7 cells were 13 μM and 6 μM, respectively, while for peptide 26 [L] the IC50 values were 15 μM and 20 μM, respectively. These results allow us to consider the dimeric peptides 26 [F] and 26 [L] to be promising for the development of drugs for use against breast cancer, which is in agreement with previous The cytotoxic effect of the peptides 26 [L] and 26 [F] against MCF-7 cells were evaluated at 2, 24, and 48 h of treatment ( Figure 2A). Peptides 26 [L] and 26 [F] exhibited a significant cytotoxic effect after 2 h of treatment when the peptide concentration was higher than 100 μg/mL. The cytotoxic effect of the peptides was concentration-dependent in all cases, and there were no significant differences in the cytotoxic effect among the evaluated treatment times. Furthermore, the cytotoxic effect of both peptides against MCF-7 cells was sustained for up to 48 h. Regardless of the treatment time, the maximum cytotoxic effect was observed when the peptide concentration was 200 μg/mL, which for peptide 26 [L] was close to 85% (cell viability 15%), while for peptide 26 [F] it was approximately 90% (cell viability 10%) (Figure 2A). When cells were treated with peptide concentrations below 100 μg/mL the cytotoxic effect reached its maximum value at 24 h, decreasing when cells were treated for 48 h. This behavior is possibly due to the fact that peptide concentration in the culture medium decreased allowing cell proliferation. On the other hand, for cells treated with a peptide concentration equal to or higher than 100 μg/mL the cytotoxic effect was maintained up to 48 h. The cytotoxic effect of analog dimeric peptides against the HTB-132 cell line was evaluated ( Figure 1A-C). In a manner similar to peptide 26 [M], its analogs exhibited a concentration-dependent cytotoxic effect against HTB-132 cells. All the analog peptides exhibited a significant cytotoxic effect against this cell line; however, the 26 [D] and 26 [K] peptides in which 26 Met was replaced by charged amino acids (Asp and Lys, respectively) exhibited a lower cytotoxic effect against HTB-132 cells than the original dimeric peptide ( Figure 1A). On the other hand, the 26 [L] and 26 [F] peptides exhibited a greater cytotoxic effect against cancer cells than the original peptide 26 [M], indicating that when the Met was replaced with an amino acid containing a side chain of a hydrophobic-aliphatic or aromatic nature, the cytotoxic effect increased ( Figure 1B). Figure 1C shows the cytotoxic effect of the peptides at 100 µg/mL. 
The analog peptides 26 [L] and 26 [F] exhibited the highest cytotoxic effect against HTB-132 cells. The maximum cytotoxic effect was observed when the peptide concentration was 200 µg/mL. Peptides 26 [L] and 26 [F] decreased cell viability to 17% and 11%, respectively. According to our results, peptides 26 [L], 26 [A], and 26 [F] were chosen for evaluating their cytotoxic effect against the MCF-7 cell line ( Figure 1D-F). As can be seen, these three peptides had a greater cytotoxic effect against MCF-7 cells than the peptide 26 [M], which is similar to the cytotoxic effect of these peptides against HTB-132 cells. These peptides exhibited the same pattern of cytotoxic effect against both cell lines: the cytotoxic effect of peptides increased in the following order: 26 26 [L] and 26 [A] exhibited a similar cytotoxic effect, while the peptide 26 [F] exhibited the highest cytotoxic effect against both cell lines. MCF-7 cells were more susceptible to analog peptides than HTB-132 cells, while the original peptide did not exhibit a cytotoxic effect against MCF-7 cells ( Figure 1D). The IC 50 values of peptide 26 [F] for HTB-132 and MCF-7 cells were 13 µM and 6 µM, respectively, while for peptide 26 [L] the IC 50 values were 15 µM and 20 µM, respectively. These results allow us to consider the dimeric peptides 26 [F] and 26 [L] to be promising for the development of drugs for use against breast cancer, which is in agreement with previous reports that suggest that the cytotoxic effect of a molecule can be considered relevant when it has IC 50 values below 25 µM [21]. The cytotoxic effect of the peptides 26 [L] and 26 [F] against MCF-7 cells were evaluated at 2, 24, and 48 h of treatment ( Figure 2A). Peptides 26 [L] and 26 [F] exhibited a significant cytotoxic effect after 2 h of treatment when the peptide concentration was higher than 100 µg/mL. The cytotoxic effect of the peptides was concentration-dependent in all cases, and there were no significant differences in the cytotoxic effect among the evaluated treatment times. Furthermore, the cytotoxic effect of both peptides against MCF-7 cells was sustained for up to 48 h. Regardless of the treatment time, the maximum cytotoxic effect was observed when the peptide concentration was 200 µg/mL, which for peptide 26 [L] was close to 85% (cell viability 15%), while for peptide 26 [F] it was approximately 90% (cell viability 10%) (Figure 2A). When cells were treated with peptide concentrations below 100 µg/mL the cytotoxic effect reached its maximum value at 24 h, decreasing when cells were treated for 48 h. This behavior is possibly due to the fact that peptide concentration in the culture medium decreased allowing cell proliferation. On the other hand, for cells treated with a peptide concentration equal to or higher than 100 µg/mL the cytotoxic effect was maintained up to 48 h. To determine if the cytotoxic effect of the peptide is selective for breast cancer cells, the human immortalized non-tumor epithelial cell line MCF-12 was treated with the peptides 26 [F] and 26 [L]. This cell line has been used as a control of normal cells in previous studies of breast cancer [22,23]. The peptides 26 [L] and 26 [F] exhibited a lower cytotoxic effect against MCF-12 cells than against breast cancer cell lines HTB-132 and MCF-7 ( Figure 2B). 
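Because the text switches between µg/mL and µM, a small helper for the unit conversion and for reading an IC50 off a dose-response curve is sketched below. The molar mass used (about 3.3 kDa, roughly consistent with the quoted 13 µM ≈ 43 µg/mL for 26 [F]) and the viability points are illustrative placeholders, not measured data from this study.

```python
# Helpers for the unit conversions used above (ug/mL <-> uM) and for estimating
# an IC50 by linear interpolation on a dose-response curve. The molar mass and
# the viability points below are illustrative placeholders, not measured data.
import numpy as np

MW = 3300.0   # g/mol, assumed molar mass of the dimeric peptide (~3.3 kDa)

def ug_ml_to_uM(c_ug_ml: float, mw: float = MW) -> float:
    """ug/mL -> uM (ug/mL equals mg/L; divide by g/mol, convert to umol/L)."""
    return c_ug_ml / mw * 1000.0

def ic50(conc_ug_ml, viability_pct):
    """Concentration at 50% viability by linear interpolation (conc ascending)."""
    c = np.asarray(conc_ug_ml, dtype=float)
    v = np.asarray(viability_pct, dtype=float)
    # interpolate concentration as a function of increasing viability
    return float(np.interp(50.0, v[::-1], c[::-1]))

# Illustrative dose-response: viability (%) at the concentrations used in the assay
conc = [3.1, 6.25, 12.5, 25, 50, 100, 200]        # ug/mL
viab = [98, 95, 85, 70, 45, 25, 12]               # hypothetical values

c50 = ic50(conc, viab)
print(f"IC50 ~ {c50:.1f} ug/mL ~ {ug_ml_to_uM(c50):.1f} uM")
```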
The peptide 26 [L] exhibited a selective cytotoxic effect in a peptide concentration range between 6 and 100 µg/mL, The IC 50 values for peptide 26 [L] in HTB-132 (10 µM/32 µg/mL) and MCF-7 (20 µM/26 µg/mL) was less than the IC 50 value in the non-tumorigenic cell line MCF-12 (>100 µg/mL). The peptide 26 [F] exhibited the greatest selectivity for MCF-7 cells when the peptide concentration was between 6 and 50 µg/mL. The IC 50 values for peptide 26 [F] in HTB-132 (13 µM/43 µg/mL) and MCF-7 (6 µM/19 µg/mL) was less than the IC 50 value in the non-tumorigenic cell line MCF-12 (21 µM/ 70 µg/mL). When the peptide concentration was 6 µM, the cell viability for MCF-7 cells was 50%, while at the same peptide concentration, the cell viability for MCF-12 cells was near 100%. These results are consistent with previous studies, where the selectivity of LfcinB analogs against other non-tumorigenic cell lines has been demonstrated, which implies that these peptides are promising molecules for subsequent ex vivo or in vivo studies and the future development of drugs for use against breast cancer [15][16][17]24]. To determine if the cytotoxic effect of the peptide is selective for breast cancer cells, the human immortalized non-tumor epithelial cell line MCF-12 was treated with the peptides 26 [F] and 26 [L]. This cell line has been used as a control of normal cells in previous studies of breast cancer [22,23]. The peptides 26 [L] and 26 [F] exhibited a lower cytotoxic effect against MCF-12 cells than against breast cancer cell lines HTB-132 and MCF-7 ( Figure 2B). The peptide 26 [L] exhibited a selective cytotoxic effect in a peptide concentration range between 6 and 100 μg/mL, The IC50 values for peptide 26 [L] in HTB-132 (10 μM/32 μg/mL) and MCF-7 (20 μM/26 μg/mL) was less than the IC50 value in the nontumorigenic cell line MCF-12 (>100 μg/mL). The peptide 26 [F] exhibited the greatest selectivity for MCF-7 cells when the peptide concentration was between 6 and 50 μg/mL. The IC50 values for peptide 26 [F] in HTB-132 (13 μM/43 μg/mL) and MCF-7 (6 μM/19 μg/mL) was less than the IC50 value in the non-tumorigenic cell line MCF-12 (21 μM/ 70 μg/mL). When the peptide concentration was 6 μM, the cell viability for MCF-7 cells was 50%, while at the same peptide concentration, the cell viability for MCF-12 cells was near 100%. These results are consistent with previous studies, where the selectivity of LfcinB analogs against other non-tumorigenic cell lines has been demonstrated, which implies that these peptides are promising molecules for subsequent ex vivo or in vivo studies and the future development of drugs for use against breast cancer [15][16][17]24]. Cell morphology was monitored before and after treatment with 26 [L] or 26 [F] at different times, using an inverted microscope coupled to an AxioCam ICc1 camera ( Figure 3). The untreated cells had normal morphologic characteristics as polygonal, flattened, and elongated cells with long and defined axons (data not shown) [25]. After 5 min of treatment with the peptide, most cells retained their morphology; however, some began to appear rounded and shrunken. After 90 min, the cells treated with the peptide ( 26 [L] or 26 [F]) took on a rounded shape, and shrinkage was observed, going from an average size of 100.27 μm to 39.38 μm. 
However, the cells maintained a defined cell morphology, and membrane integrity apparently was not compromised, suggesting that the Cell morphology was monitored before and after treatment with 26 [L] or 26 [F] at different times, using an inverted microscope coupled to an AxioCam ICc1 camera (Figure 3). The untreated cells had normal morphologic characteristics as polygonal, flattened, and elongated cells with long and defined axons (data not shown) [25]. After 5 min of treatment with the peptide, most cells retained their morphology; however, some began to appear rounded and shrunken. After 90 min, the cells treated with the peptide ( 26 [L] or 26 [F]) took on a rounded shape, and shrinkage was observed, going from an average size of 100.27 µm to 39.38 µm. However, the cells maintained a defined cell morphology, and membrane integrity apparently was not compromised, suggesting that the cytotoxic effect of the peptides could involve apoptotic processes [26]. Also, this behavior was observed at 4 h of treatment. At 24 h of treatment, the appearance of small vacuoles was observed, presumably associated with late apoptosis events. Finally, it was observed that the effect of the peptide was sustained up to 48 h since at that time no recovery of cell morphology was observed. These results indicate that the cytotoxic effect of the peptides on MCF-7 cells evaluated in MTT assays could be associated with these observed morphologic changes. With the aim of establishing if the cytotoxicity of peptide 26 [F] is associated with loss of cytoplasmic membrane integrity, MCF-7 cells were treated with the peptide in the presence of propidium iodide (PI) and SYTO9 ( Figure 4A). When the cells were treated with the peptide at 15 µM (IC 50 ), it was determined that only 10% of the cell population had their cytoplasmic membrane affected, which is in accordance with the morphological changes that showed that the membrane continues to be preserved after treatment, indicating that the type of cell death may be being mediated by apoptotic pathways. cytotoxic effect of the peptides could involve apoptotic processes [26]. Also, this behavior was observed at 4 h of treatment. At 24 h of treatment, the appearance of small vacuoles was observed, presumably associated with late apoptosis events. Finally, it was observed that the effect of the peptide was sustained up to 48 h since at that time no recovery of cell morphology was observed. These results indicate that the cytotoxic effect of the peptides on MCF-7 cells evaluated in MTT assays could be associated with these observed morphologic changes. With the aim of establishing if the cytotoxicity of peptide 26 [F] is associated with loss of cytoplasmic membrane integrity, MCF-7 cells were treated with the peptide in the presence of propidium iodide (PI) and SYTO9 ( Figure 4A). When the cells were treated with the peptide at 15 μM (IC50), it was determined that only 10% of the cell population had their cytoplasmic membrane affected, which is in accordance with the morphological changes that showed that the membrane continues to be preserved after treatment, indicating that the type of cell death may be being mediated by apoptotic pathways. To determine the type of cell death associated with the cytotoxic effect of the 26 [F] peptide on MCF-7 cells, they were incubated with the peptide (15 μM) for 24 h in the presence of annexin V and PI fluorophores ( Figure 4B). 
As can be seen, 54% of the cell population was dyed with both fluorophores (cells in an apoptotic process) and 21% was dyed with only PI (cells in a necrotic process). These results indicate that the cytotoxic effect of the peptide against MCF-7 cells generates a higher cell population involved in late apoptotic events. These results are in agreement with those obtained previously here: changes in the cellular morphology such as shrinking, the appearance of dendritic bodies, and low affectation of the integrity of the cytoplasmic membrane, which could be related to death via apoptosis. The mitochondrial membrane depolarization in MCF-7 cells treated with peptide 26 [F] (15 μM) in the presence of cationic fluorophore JC-1 was evaluated ( Figure 4C). The results showed that the peptide induces depolarization of the mitochondrial membrane in 19% of the population, a value even higher than that exerted by the positive control of apoptosis (ActD 15 μM). These results suggest that peptide 26 [F] induces cell death in MCF-7 cells via intrinsic apoptosis, which is in agreement with the results previously obtained in the prior flow cytometry assays using PI/annexin V fluorophores. This result is similar to another one found in previous studies with a tetrameric molecule derived from LfcinB [17]. In addition, they are in accordance with results obtained by other authors that suggest that BLF, LfcinB, and synthetic peptides derived from LfcinB exhibited a selective, fast, and concentration-dependent cytotoxic effect against diverse human cancer cell lines, including breast cancer cell lines, apoptosis being the action mechanism proposed [11,12,[27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42]. Discussion In previous reports, we showed evidence that the polyvalence of LfcinB sequences significantly enhances the cytotoxic effect in oral and breast cancer cell lines: dimeric and tetrameric peptides containing the minimal motif exhibited a selective, rapid, and concentration-dependent cytotoxic effect against breast cancer cell lines. In addition, it was possible to establish the viability of the synthesis of these polyvalent peptides via SPPS, which allows modifications to be made in the sequence in order to identify new peptides with a greater cytotoxic effect against breast cancer cell lines [16]. Within this context, the change in the 26th position ( 26 Met) of the dimeric peptide LfcinB (20-25)2 was evaluated. To determine the type of cell death associated with the cytotoxic effect of the 26 [F] peptide on MCF-7 cells, they were incubated with the peptide (15 µM) for 24 h in the presence of annexin V and PI fluorophores ( Figure 4B). As can be seen, 54% of the cell population was dyed with both fluorophores (cells in an apoptotic process) and 21% was dyed with only PI (cells in a necrotic process). These results indicate that the cytotoxic effect of the peptide against MCF-7 cells generates a higher cell population involved in late apoptotic events. These results are in agreement with those obtained previously here: changes in the cellular morphology such as shrinking, the appearance of dendritic bodies, and low affectation of the integrity of the cytoplasmic membrane, which could be related to death via apoptosis. The mitochondrial membrane depolarization in MCF-7 cells treated with peptide 26 [F] (15 µM) in the presence of cationic fluorophore JC-1 was evaluated ( Figure 4C). 
The results showed that the peptide induces depolarization of the mitochondrial membrane in 19% of the population, a value even higher than that exerted by the positive control of apoptosis (ActD 15 µM). These results suggest that peptide 26 [F] induces cell death in MCF-7 cells via intrinsic apoptosis, which is in agreement with the results previously obtained in the prior flow cytometry assays using PI/annexin V fluorophores. This result is similar to another one found in previous studies with a tetrameric molecule derived from LfcinB [17]. In addition, they are in accordance with results obtained by other authors that suggest that BLF, LfcinB, and synthetic peptides derived from LfcinB exhibited a selective, fast, and concentration-dependent cytotoxic effect against diverse human cancer cell lines, including breast cancer cell lines, apoptosis being the action mechanism proposed [11,12,[27][28][29][30][31][32][33][34][35][36][37][38][39][40][41][42]. Discussion In previous reports, we showed evidence that the polyvalence of LfcinB sequences significantly enhances the cytotoxic effect in oral and breast cancer cell lines: dimeric and tetrameric peptides containing the minimal motif exhibited a selective, rapid, and concentration-dependent cytotoxic effect against breast cancer cell lines. In addition, it was possible to establish the viability of the synthesis of these polyvalent peptides via SPPS, which allows modifications to be made in the sequence in order to identify new peptides with a greater cytotoxic effect against breast cancer cell lines [16]. Within this context, the change in the 26th position ( 26 Met) of the dimeric peptide LfcinB (20-25) 2 was evaluated. It should be noted that the change of 26 Met to 26 Lys increased the positive net charge of the peptide to +14. However, the cytotoxic effect did not increase, suggesting that the positive charge of the peptide is not the only requirement for exerting the cytotoxic effect against the HTB-132 cells. On the other hand, the fact that the incorporation of hydrophobic residues instead of Met increased the cytotoxic effect against breast cancer cell line HTB-132 and MCF-7 is in agreement with previous reports that suggest that a hydrophobic amino acid such as Trp is a relevant residue in peptide activity because it is involved in membrane disruption and/or cell internalization [43,44]. Interestingly, the change from Met to Leu or Phe significantly increased the cytotoxic effect against this breast cancer line. Both these residues have a lower polarity than Met and are flanked by positively-charged side chains of 25 Arg and 27 Lys, which could increase peptide amphipathicity. This is in accordance with reports suggesting that the amphipathicity of LfcinB-derived sequences is relevant for antibacterial and anticancer activity [16,38]. Interestingly, our results indicate that position 26 of the dimeric 20 RRWQWRMKKLG 30 sequence is relevant for the cytotoxic effect. Our results are exceptionally relevant because these dimeric peptides exhibited a selective cytotoxic effect against breast cancer cell lines better than BLF, which also exhibited inhibitory effects on the growth of four breast cancer cell lines, T-47D, MDA-MB-231, Hs578T, and MCF-7, but not for the normal breast cell line MCF-10-2A [34,39,41,45]. 
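The +14 net charge quoted above for the Lys analog can be rationalized with a simple count: each monomer chain contributes its Arg and Lys side chains plus a free N-terminal amine, while the branching core Lys and the C-terminal amide are taken as uncharged. The sketch below implements that counting; the treatment of the core residue and the termini is an assumption of this illustration.

```python
# Rough net-charge estimate (at physiological pH) for the dimeric peptides,
# counting +1 per Arg/Lys side chain and +1 per free N-terminus of each chain.
# Treating the branching core Lys and the C-terminal amide as uncharged is an
# assumption of this illustration.
def monomer_charge(seq: str) -> int:
    side_chains = sum(1 for aa in seq if aa in "RK")
    return side_chains + 1          # +1 for the free alpha-amino terminus

def dimer_charge(seq: str) -> int:
    return 2 * monomer_charge(seq)  # two identical chains on the Lys-Ahx core

for name, seq in [("26[M] (original)", "RRWQWRMKKLG"),
                  ("26[K] (Lys analog)", "RRWQWRKKKLG"),
                  ("26[F] (Phe analog)", "RRWQWRFKKLG")]:
    print(f"{name}: net charge ~ +{dimer_charge(seq)}")
```

With this counting, the original sequence gives about +12 and the Lys analog about +14, matching the figure quoted in the discussion.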
LfcinB showed considerable potential as an anti-cancer agent because the peptide is able to trigger apoptosis in a wide range of breast cell lines (MDA-MB-435, MDA-MB-231, T-47D, and MCF-7) without damaging normal human cells such as lymphocytes, erythrocytes, endothelial cells, fibroblasts, and breast epithelial cells [37,40,46]. Over the last decade, great efforts have been made to identify short LfcinB-derived peptides with increased antibacterial and/or anti-cancer activity [46]. Dimeric peptides 26 [F] and 26 [L] containing the minimal motif (RRWQWR) exhibited a significant cytotoxic effect against the tested breast cancer cell lines, which agrees with previous studies that showed that short synthetic peptides containing this motif also exhibited a cytotoxic effect against breast cancer cell lines [47]. The peptide RRWQWR exhibited a low cytotoxic effect against breast cancer cell lines MDA-MB-231 and MDA-MB-468 [16,37,38]. However, when this motif was modified or included in longer sequences, the cytotoxic effect against human cancer cell lines increased. In this context, we wish to emphasize the polyvalent peptides that contained the RWQWRWQWR, RRWQWRMKKL, and RRWQWR sequences, which showed significant cytotoxic activity against breast and oral cancer cell lines [15][16][17]24]. In addition, we have demonstrated the viability of the synthesis of dimeric peptides 26 [F] and 26 [L], which is easier, faster, and lower-cost than BLF or LfcinB, this being an advantage for drug development. On the other hand, both dimeric peptides contain the unnatural amino acid Ahx, which can confer greater stability to proteolytic degradation and increase the effective peptide concentration in the body. LFB and LfcinB are candidates for alternative cancer treatment with the same advantages but without the side effects that characterize standard therapies [45]. A major limitation of the therapeutic efficacy for cancer is usually the systemic bio-distribution, which often leads to reduced bioavailability of the drug delivered to the cancer cells, and consequently to a reduction in the therapeutic index [45]. In this investigation we identified two dimeric peptides, (RRWQWRFKKLG) 2 -K-Ahx and (RRWQWRLKKLG) 2 -K-Ahx, that exhibited a selective cytotoxic effect against MCF-7 cells, which may be associated with apoptotic processes. Their synthetic viability, their dimeric structure, and the unnatural amino acid are advantages that allow us to consider them to be promising molecules for further studies in the development of therapeutic alternatives for breast cancer. Position 26th of the LfcinB (20-30) plays a critical role in the cytotoxic effect of the dimeric peptide LfcinB (20-30) 2 against breast cancer cell lines. The insertion of hydrophobic amino acids in this position dramatically improves the anticancer activity against breast cancer cell lines HTB-132 and MCF-7, resulting in faster action times (less than 90 min) sustained up to 48 h. In addition, the peptides were selective compared to the non-tumorigenic line MCF-12. The most promising peptide was that in which the M of position 26 was replaced by F. In this peptide, an IC 50 of 6-13 µM was found, as well as selectivity against the MCF-12A cell lines. It was found via flow cytometry that 26 [F] does not compromise the integrity of the cytoplasmic membrane; it exerts its effect through apoptosis, and intrinsic apoptosis events are involved in the type of cell death. 
Solid Phase Synthesis SPPS The peptides were synthesized using solid-phase peptide synthesis (SPPS-Fmoc/tBu) [48,49]. Briefly, a Rink amide resin (0.46 meq/g) was used as a solid support. On this support, elongation of the peptide sequences was performed by successive steps of (i) deprotection of the alpha-amino group, removing the Fmoc group under basic conditions with 2.5% (v/v) 4-methylpiperidine, (ii) activation of the Fmoc-amino acid with DCC and 6-Cl-HOBt in DMF and DCM (3:1) with 2 drops of Triton-X added, and (iii) coupling of the amino acid to the growing chain on the solid support by a microwave-assisted reaction (reaction time 10 s (5×), 200 W). The peptides were subsequently cleaved from the resin by adding TFA/H 2 O/TIS/EDT (92.5/2.5/2.5/2.5 % v/v) and stirring for 8 h, and then the peptides were precipitated using diethyl ether at −20 °C, dried at room temperature, and characterized via RP-HPLC and MALDI-TOF MS (detailed information available in Supplementary Materials, Figure S2). RP-HPLC Characterization For the analysis of the peptides (1 mg/mL) by RP-HPLC, solvent A (0.05% TFA in water) and solvent B (0.05% TFA in ACN) were used as the mobile phase. As a stationary phase, a Chromolith ® C-18 monolithic column (50 × 4.6 mm) was used. An elution gradient from 5% to 50% solvent B over 8 min was used at a flow rate of 2.0 mL/min, with an injection volume of 10 µL and detection at 210 nm. An Agilent Series 1260 chromatograph was used. The chromatographic profile of the purified dimers is presented in Supplementary Materials, Figure S3. Peptide Purification by Solid Phase Extraction (SPE) For the purification of the peptides, 5 g RP-SPE columns (particle size: 40-60 µm) were used [50]. The peptide was dissolved in solvent A, and the sample was loaded and then eluted with solutions containing different percentages of solvent B. The fractions containing the pure peptide were collected and lyophilized. The final products were stored at −4 °C. MALDI-TOF Mass Spectrometry Analysis The molecular weight of the peptides was determined via MALDI-TOF mass spectrometry (microFlex, Bruker). One microliter of the purified peptide solution (0.5 mg/mL) was mixed with 18 µL of matrix (α-cyano-4-hydroxycinnamic acid, 5 mg/mL), and then 1 µL of the mixture was spotted onto the sample plate. The laser power ranged between 2700 and 3000 V, and 200 laser shots were accumulated. The MALDI-TOF MS spectra of the purified dimers are presented in Supplementary Materials, Figure S3. Cell Culture For all cell lines, the medium used was Dulbecco's Modified Eagle's Medium (DMEM)/Nutrient Mixture F-12 Ham. For the HTB-132, MCF-7, and MCF-12A lines, the medium was supplemented with 10% fetal bovine serum (FBS), 1.5 g/L NaHCO 3 (with NaOH added to adjust the pH to 7.4), amphotericin (200 µg/mL), and 1% penicillin and streptomycin. For the primary culture cells of bovine mammary epithelium and fibroblasts, in addition to the above, hydrocortisone (250 µg/mL) was added. All media were filtered through a 0.22 µm membrane. Viability Test by MTT Briefly, the cells were seeded with complete medium in 96-well plates at a density of 10,000 cells in 100 µL per well, and adhesion to the plates was allowed for 24 h. Subsequently, the complete medium was removed and an incomplete medium was added for synchronization for another 24 h. The cells were then incubated at 37 °C for 2, 24, or 48 h with 100 µL of peptide at the concentrations to be evaluated (200, 100, 50, 25, 12.5, 6.25 and 3.1 µg/mL).
Next, the peptide solution was removed from the plate, and 100 µL of incomplete medium with 10% 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) was added and incubated for 4 h. The medium was replaced with 100 µL of isopropanol (IPA), and after 30 min of incubation at 37 °C, the absorbance was measured at 575 nm. As a negative control, incomplete culture medium with 10% MTT was used, and as a positive control, untreated cells were used [51]. Evaluation of the Integrity of the Cytoplasmic Membrane Using SYTO9/PI The cells were seeded with complete medium in 24-well plates at a concentration of 4 × 10 4 cells/well in a volume of 400 µL/well, and adhesion to the plate was allowed for 24 h [52]. Then the complete medium was replaced by incomplete medium for synchronization of the cells for an additional 24 h. Subsequently, the culture medium was replaced with 400 µL of a solution containing the peptide to be evaluated at a concentration equivalent to its IC 50 and incubated for 2 or 24 h. The cells were harvested by adding 200 µL of trypsin and incubating for 10 min. The trypsin was quenched with 200 µL of complete medium, and the cells were centrifuged at 2500 rpm for 10 min. The supernatant was discarded, and the pellet was washed with 100 µL PBS and centrifuged under the same conditions described previously. The supernatant was discarded, and the cells were stained with 30 µL of a solution from the LIVE/DEAD ® FungaLight™ commercial kit (Invitrogen) containing the mixture of fluorochromes (0.5 µL of SYTO9 and/or 0.5 µL of propidium iodide (PI) with 99 µL of PBS). Subsequently, the cells were incubated at room temperature in the dark for 20 min and centrifuged, and the supernatant was discarded. The pellet was resuspended in 100 µL of PBS and analyzed via flow cytometry in a BD Accuri C6 device. The events were recorded using the green channel (FL1) on the abscissa axis and the red channel (FL3) on the ordinate axis. Negative control: untreated cells marked with both fluorophores; positive control: cells treated with actinomycin 10 µg/mL for 24 h [52]. To define the working population, a control of untreated and unstained cells was used. Two controls were used for compensation: (i) untreated cells stained only with PI, and (ii) untreated cells stained only with SYTO9. Determination of the Type of Cell Death (Apoptosis/Necrosis) The cells were seeded and synchronized in 24-well plates at a concentration of 4 × 10 4 cells/well in 400 µL/well, adhesion and synchronization were allowed in the same way as in Section 4.8, and the culture medium was replaced with medium containing the peptide to be evaluated and incubated for 2 or 24 h. Subsequently, the cells were harvested with trypsin, centrifuged at 2500 rpm for 10 min, washed with PBS, and resuspended in 10 µL of staining buffer with the fluorochromes (10 mM Hepes pH 7.4, 10 mM NaCl, and 2.5 mM CaCl 2 containing 1 µL of PI fluorochrome and 1 µL of Annexin V). Cells with the fluorochromes were incubated at 37 °C in the dark for 15 min and resuspended in 80 µL of staining buffer without fluorochromes for analysis via flow cytometry. The positive control for necrosis was cells treated with 15 mM EDTA for 60 min, and for apoptosis, cells treated with actinomycin at 10 µM for 24 h. Negative control: cells without treatment; compensation controls: (i) cells stained only with Annexin V, and (ii) cells stained only with PI; population control: unstained and untreated cells.
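As a concrete illustration of how the MTT readout described above might be reduced to a dose-response curve, the sketch below converts raw absorbances at 575 nm into percent viability using the blank and untreated-cell controls and then estimates an IC 50 by log-linear interpolation between the two concentrations that bracket 50% viability. The absorbance values are invented for illustration only; the concentration series matches the one used in the assay (200 down to 3.1 µg/mL).

```python
import numpy as np

# Peptide concentrations tested in the MTT assay (µg/mL), highest to lowest.
conc = np.array([200, 100, 50, 25, 12.5, 6.25, 3.1], dtype=float)

# Hypothetical mean absorbances at 575 nm (replicates already averaged).
a_treated = np.array([0.12, 0.18, 0.33, 0.52, 0.71, 0.85, 0.93])
a_blank   = 0.08   # incomplete medium + MTT, no cells (negative control)
a_cells   = 1.02   # untreated cells (positive control for viability)

# Percent viability relative to untreated cells, after blank subtraction.
viability = 100.0 * (a_treated - a_blank) / (a_cells - a_blank)

def ic50_interpolated(conc, viability, threshold=50.0):
    """IC50 by log-linear interpolation between the concentrations bracketing 50% viability."""
    order = np.argsort(conc)               # ascending concentration
    c, v = conc[order], viability[order]
    for i in range(len(c) - 1):
        if (v[i] - threshold) * (v[i + 1] - threshold) <= 0:  # sign change -> bracket found
            f = (threshold - v[i]) / (v[i + 1] - v[i])
            log_c50 = np.log10(c[i]) + f * (np.log10(c[i + 1]) - np.log10(c[i]))
            return 10.0 ** log_c50
    return None  # 50% viability never crossed within the tested range

print("Viability (%):", np.round(viability, 1))
print("Estimated IC50 (µg/mL):", round(ic50_interpolated(conc, viability), 1))
```

A four-parameter logistic fit would be the more standard way to report an IC 50 and could replace the interpolation step; the normalization to the blank and untreated-cell controls shown here follows the assay description above.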
Determination of Mitochondrial Membrane Depolarization For this test, the MitoProbe™ JC-1 Assay Kit (M34152, Thermo Fisher, Waltham, MA, USA) was used according to the supplier's recommendations. The cells, seeded and synchronized in 24-well plates (4 × 10 4 cells/well), were incubated with 400 µL of the peptide to be evaluated for 2 or 24 h. The cells were then trypsinized and collected by centrifugation at 400× g for 5 min. The resulting pellet was stained by adding 100 µL of JC-1 working solution (JC-1 stock reconstituted in DMSO, diluted 1:100 in 1× assay buffer) and incubated at 37 °C for 20 min. Subsequently, the cells were washed twice with 1× assay buffer and finally resuspended in 100 µL of 1× assay buffer for reading on the cytometer. Negative control: cells without treatment; positive control: cells treated with 10 µM actinomycin for 24 h.
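The JC-1 readout above is usually quantified as the fraction of cells whose red/green (aggregate/monomer) fluorescence ratio falls below a threshold defined from the untreated control, which indicates mitochondrial membrane depolarization. The sketch below assumes per-event green and red intensities exported from the cytometer; the arrays and the percentile-based threshold rule are hypothetical and only illustrate the arithmetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-cell fluorescence intensities exported from the cytometer.
# Green channel ~ JC-1 monomer, red channel ~ JC-1 aggregate.
green_control = rng.lognormal(mean=5.0, sigma=0.3, size=5000)
red_control   = rng.lognormal(mean=6.0, sigma=0.3, size=5000)
green_treated = rng.lognormal(mean=5.2, sigma=0.3, size=5000)
red_treated   = rng.lognormal(mean=5.4, sigma=0.4, size=5000)

def depolarized_fraction(green, red, threshold):
    """Fraction of events whose red/green ratio falls below the threshold."""
    ratio = red / green
    return np.mean(ratio < threshold)

# Define the threshold from the untreated control, e.g. the 5th percentile of its
# red/green ratio, so ~5% of control cells score as depolarized by construction.
control_ratio = red_control / green_control
threshold = np.percentile(control_ratio, 5)

print(f"Control: {100 * depolarized_fraction(green_control, red_control, threshold):.1f}% depolarized")
print(f"Treated: {100 * depolarized_fraction(green_treated, red_treated, threshold):.1f}% depolarized")
```

In practice, the threshold would be set by gating against the actinomycin-treated positive control rather than a fixed percentile, but the comparison of treated versus untreated fractions is the quantity reported above (for example, the ~19% depolarized population observed for peptide 26 [F]).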
8,750
2020-06-01T00:00:00.000
[ "Medicine", "Chemistry" ]
Evaluation Strategies of Nanomaterials Toxicity The revolutionary development of nanoscience during the last years has increased the number of studies in the field to evaluate the toxicity and risk levels. The design of different nanomaterials together with biological components has implemented the advances in biomedicine. Specifically, nanoparticles seem to be a promising platform due to their features, including nanoscale dimensions and physical and chemical characteristics than can be modified in function of the final application. Herein, we review the main studies realized with nanoparticles in order to understand and characterize the cellular uptake mechanisms involved in biocompatibility, toxicity, and how they alter the biological processes to avoid disease progression. Introduction The nanoscience revolution that sprouted throughout the 1990s is becoming part of our daily life in the form of cosmetics, food packaging, drug delivery systems, therapeutics, or biosensors, among others [1]. It has been estimated that the production of nanomaterials would increase in 2020 by 25 times what it is today. This is due to the wide range of applications that they have in numerous fields, ranging from commercial products such as electronic components, cosmetics, household appliances, semiconductor devices, energy sources, food color additives, surface coatings, and medical products such as biological sensors, drug carriers, biological probes, implants, and medical imaging. Despite this future dependence on nanomaterials, studies regarding their safe incorporation in our lives are very limited [2]. Recently, several studies suggested that nanoparticles (NPs) could easily enter into the human body [3]. This is mainly because their nanoscale dimensions are of a similar size to typical cellular components. Moreover, proteins-NPs may bypass natural mechanical barriers, possibly leading to adverse tissue reactions. As a result, the particles might be taken up into cells. Generally, NPs of different physical and chemical properties may enter the cells by different mechanisms, such as phagocytosis, macropinocytosis, endocytosis, or directly by "adhesive interactions" [4]. In order to understand the exact cellular influences of NPs, a thorough characterization of individual nanoparticles is necessary. Nanoparticles can get into the human body through various ways, such as skin penetration, inhalation, or injection, and due to their small size and diffusion abilities; they have the potential to interact with cells and organs. In addition to involuntary exposure to NPs by means of contacting nanomaterials-based products, there are cases where nanoparticles would interact with the human body for biomedical purposes. In case of using nanoparticles for targeted-drug delivery, NPs are required to traverse the cell membranes and interact with specific components. Hence, the success rate of drug delivery is based on the biocompatibility of NPs. Research has shown that different physicochemical properties of NPs result in different cellular uptake. Currently, it has been described that several factors play a critical role in toxicity (Fig. 1); such as (i) size and surface, very important for liposomes, silicon microparticles, quantum dots, polymeric NPs, or gold NPs; (ii) concentration, crystallinity, and mechanical strength, toxicity is directly related to these parameters [2]; (iii) chemical attributes, the development of hydrophilic polymer functionalization (i.e. 
polyethylene glycol, polycarboxybetaine) at the surface of NPs enhances the systemic circulation; however, the response of the immune system is also related with this hydrophilic coating. The discovery of Enhanced Permeation and Retention (EPR) effect and its combination with hydrophilic polymers is related to the accumulation of NP-based carrier systems in tumor tissues followed by the release of the drug either in the proximity to the tissue. However, EPR effect is commonly inconsistent due to the heterogeneity associated with the tumor tissue. For this reason, novel nanomedicines are being designed and developed in order to target only a particular cell, tissue, and organ by linking an affinity reagent to the NP, which is targeting a specific biomolecule differentially expressed at the tissue or cells of interest. Although some concerns have been raised about poor systemic circulation, enhanced clearance by the mononuclear phagocyte system and limited tissue penetration has been shown to improve the cellular uptake and efficacy of their payload in comparison with passively targeted counterparts. This improvement in cellular uptake is a key point because mostly of the targets present intracellular location. Bearing this in mind, the characterization of endocytosis pathways plays a critical role in designing efficient intracellular trafficking, subcellular targeting, and nanomedicines with ideal features (biocompatibility, low-toxicity, and lowimmune response) [4]. Herein, we present a comprehensive review on recent developments and outline future strategies of nanotechnology-based medicines. Specifically, the trials in vivo/in vitro, requested by The National Cancer Institute, that evaluate NP toxicity for nanomedicines are detailed below. They can be sorted in two large groups: biocompatibility and immunological studies. Biocompatibility studies Once the NPs are in biological environment, it is expected that their interaction with biomolecules, such as proteins, lipids, nucleic acids, and even metabolites, is to a large extent because of their high surface-to-mass ratio. Bearing in mind that proteins are one of the majority components in biological fluids, formation of a protein corona at the surface of NPs is expected. This protein corona may substantially influence the biological response [5]. Relation of biomolecular corona and nanoparticles toxicity Herein, we briefly describe how this biomolecular corona influences mainly in cellular uptake, toxicity, and biodistribution and targeting ability to a lesser extent. Size The size of nanomaterials has a direct and significant impact on the physiological activity. In fact, the NP size may be expanded by the biomolecular corona. Then, the NP size plays a critical role in cellular uptake, efficiency of particle processing in the endocytic pathway, and physiological response of cells to NPs. Kim and collaborators [6] thoroughly studied the sizedependent cellular toxicity of Ag NPs using different characteristic sizes against several cell lines, including MC3T3-E1 and PC12. They demonstrated that NP toxicity was precisely sizeand dose-dependent in terms of cell viability, intracellular reactive oxygen species generations, LDH release, and ultra-structural changes in the cell. In general, biodegradable NPs are less cytotoxic than non-biodegradable ones [7]. Apart from the nature of NP coating, particle size can also affect the degradation of the polymer matrix. 
With the decrease of particle size, the surface area-to-volume ratio increases greatly, leading to easier penetration and release of the polymer degradation products. Even though it can be assumed that the smaller the NP size, the more likely it is to enter cells and cause potential damage, the mechanisms of toxicity are very complicated, so size cannot be viewed as the only influencing parameter. Yuan and collaborators studied the effect of the size of hydroxyapatite NPs on antitumor activity and apoptotic signaling proteins. In their study of the effect of particle size on cell apoptosis, Hep62 cells (incubated with and without hydroxyapatite NPs) presented morphological changes related to apoptosis that depended on the size of the NPs [8]. Nanomaterial and shape The structure and shape influence the toxicity of nanomaterials (Fig. 2). Commonly, nanomaterials have different shapes and structures such as tubes, fibers, spheres, and planes. For instance, several studies compared the cytotoxicity of multi-wall carbon nanotubes vs. single-wall carbon nanotubes or graphene [9,10], obtaining results that suggest a strong influence of shape on toxicity. Furthermore, other authors have evaluated the toxicity of nanocarbon materials vs. NPs [11]. Concentration of nanomaterial In 2013, a study was carried out to examine the cytotoxicity of a cisplatin derivative known as PtU2. Minor toxicity was detected when this compound was conjugated with 20 nm gold NPs (Au-NPs). Cisplatin is one of the most widely used anticancer agents, and its conjugation with Au-NPs gives it benefits thanks to the characteristics of Au: biocompatibility, inertness, non-toxicity, and stability. In this way, the compound becomes a powerful tool for the treatment of solid tumors. In that trial, the osteosarcoma cell line MG-63 was treated with different concentrations of AuNPs, PtU2, and a combination of both, PtU2-AuNPs. Firstly, one of the aims was the determination of the carrier activity. In order to achieve this, the metal content (gold and platinum) was measured in cells and supernatants separately. The results showed that the metal uptake capacity of the cells is the same for AuNPs alone and for AuNPs conjugated with PtU2. Then, the cytotoxicity was evaluated by Annexin V-FITC assay using flow cytometry. As a result of MG-63 incubation with the two compounds, higher cytotoxicity was detectable after 48 hours of culture in cells treated with PtU2-AuNPs. To sum up, PtU2-AuNPs are more effective at inducing cellular toxicity under the same culture conditions [12]. Relation of biomolecular corona and cellular uptake Due to the protein nature of the biomolecular corona, it is important to distinguish between specific and nonspecific cellular uptake. Specific uptake is regulated by membrane receptors that are internalized by interaction with specific ligands. In turn, nonspecific uptake is considered a random process without control by the cell [5]. Overall, nonspecific uptake seems to be decreased in the presence of a corona, whereas specific uptake seems to be promoted by the protein corona, either because misfolding of corona proteins triggers NP uptake by specific cells that otherwise would not have internalized them, or because there is a protein in the corona able to target a specific receptor expressed in the cell line used. So far, all the studies performed suggest how important cell line specificity is for this protein corona effect.
However, a more extensive review of the literature is recommended because inconsistencies in the cellular uptake of NPs have often been found, particularly regarding incubation or fluidic conditions. For example, several studies of cellular uptake in the differentiated macrophage-like cell line (dTHP-1) have reported different outcomes. For instance, Yan and colleagues [13] did not observe any changes in effective association and internalization in the presence of serum. However, these cells present phagocytic activity when unfolded BSA is present in the protein corona; in this case, phagocytosis is mediated by scavenger receptor subclass A. Effect of protein corona on biodistribution Despite the knowledge about the influence of NP PEGylation on biodistribution, the characterization and consequences of a biomolecular corona formed in vivo have not been investigated yet. Nevertheless, it has been described that, independently of the nature of the NPs, pre-coating with proteins such as serum albumin or apolipoprotein E increases the blood circulation time and reduces the clearance rate. This effect is explained by a reduction in opsonization and phagocytosis; meanwhile, the liver is the main organ for NP accumulation, and the protein used for pre-coating seems to direct distribution to other organs (i.e., albumin and apolipoprotein E target the lungs and brain, respectively) [14]. Different assays for evaluation of cytotoxicity/biocompatibility In general, the mechanisms of toxicity are very complicated. Several assays have been developed for the biological characterization of nanomaterials, which are vital to guarantee the safety of materials that will be in contact with food or humans. Here, a brief description of the most conventional assays to evaluate cytotoxicity/biocompatibility is reported. Cytotoxicity analysis In order to determine the viability of cells exposed to NPs, in vitro toxicity tests are very useful for understanding the toxic mechanisms [2]. Some of these tests are listed here: Alamar Blue assay, MTT, LDH leakage assay [2,15], and Quick Cell [16]. The first and second approaches constitute an index of intrinsic cytotoxicity. On the one hand, the 3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyl tetrazolium bromide assay (mitochondrial toxicity, MTT, assay) is based on the transformation of the tetrazolium salt by mitochondrial succinate dehydrogenases in metabolically active cells, generating purple formazan crystals [17]. This oxidation-reduction reaction can only occur in the presence of dehydrogenase enzymes, so it is a good way to determine mitochondrial activity [2]. Thus, the number of living cells is proportional to the amount of formazan produced. Cells with culture medium and NPs are seeded in 96-well plates, and then 20 µL/well of MTT at a concentration of 5 mg/mL is added to each well. The plates are incubated for 4 h at 37ºC and 5% CO 2 . After the incubation, the solution is removed and the crystals that have formed are dissolved in DMSO. Finally, the optical density is measured at 595 nm to express the percent cell viability [3,17]. This method also allows the measurement of cell survival and proliferation. Although MTT is the most accepted assay method [2], there is another test to evaluate nanotoxicity, the resazurin assay (Alamar Blue, AB, assay). This assay is based on the reduction of blue, nonfluorescent resazurin to pink, fluorescent resorufin by living cells [18].
This reduction is mediated by mitochondrial enzymes located in the mitochondria, cytosol, and microsomal fractions [17,18]. The decrease in the magnitude of resazurin reduction indicates loss of cell viability. The AB assay is commonly used with a final concentration of 10% (w /v). Then, plates are exposed to an excitation wavelength of 530 nm and emission at 590 nm to determine if any of the dyes interact with the compound. Lastly, the fluorescence is read 5 hours later and the percent viability is calculated [17]. Moreover, AB assay has many advantages: it is a simple, rapid and versatile test and reveals a high correlation with other methods to evaluate nanotoxicity [18]. Sometimes, problems with interference between NPs and this type of assay arise [17], so care must be taken with the dyes used. The confidence degree of toxicity studies significantly depends on this interaction. Few researchers have found that carbon nanotubes can interact with dyes such as AB and neutral red [2]. According to the analysis carried out by Hamid R and collaborators, AB assay and MTT are advisable to identify the cytotoxic compound. However, the AB assay is homogenous and presents more sensitivity that can detect densities as low as 200 cells per well [17]. In turn, cell death is also determined by evaluating the activity of the enzyme lactate dehydrogenase (LDH). LDH is an enzyme generally located in the cytosol, but it is quickly released when cellular damage is produced. In this way, the LDH release assay allows the assessment of the membrane integrity of cells by measuring this enzyme in the extracellular medium. This method, like MTT, uses the measure of a color compound absorbance to determine the cell viability that can be affected by the uptake of NPs [19]. The Quick Cell Proliferation Colorimetric Assay Kit works in a similar way. This is based on the cleavage of the tetrazolium salt to formazan by mitochondrial dehydrogenases. An increase in the activity of these enzymes is connected with cellular proliferation. The formazan dye produced by viable cells can be quantified by a spectrophotometer by measuring the absorbance of the solution at 440 nm. Moreover, the Quick Cell Assay has several advantages in face of MTT because it is a new simple method, requiring no washing, no harvesting, and no solubilization steps, and it is more sensitive and faster too [20]. Assays for studying cell death by effects of nanomaterials The cytotoxicity analysis can be complemented by other studies. Here, we present different methods to determine cell death or apoptosis, including trypan blue (TB) and propidium iodide (PI) protocols. TB exclusion test marks which cells are viable. This is because live cells have intact cell membranes and certain dyes, such as TB or PI, cannot entry into them [21]. In dead cells, the membrane is ruptured and the dye is able to cross it and stain the cytoplasm of blue. In 2014, Mendes and colleagues [22] published a work where their aim was to investigate different diameters of iron oxide NPs. Four cell lines were incubated with NPs to assess the material toxicity and the possible size dependence. Cell viability was measured using the MTT and TB tests. For the dye exclusion assay, the cells were seeded in 6-well flat-bottom plates and incubated for 12 or 48 h with a NP suspension at 10 µg/mL concentration. Then, 20 µL of each suspension was mixed with 0.4% TB to count the number of living and dead cells. 
This method was used because with the MTT it was not clear if NPs caused cell death or whether they only reduced the cellular metabolic activity. The results showed that cells incubated with the carbon-coated iron oxide NPs tend to decrease their mitochondrial activity (indicated by MTT test) rather than die (indicated by the dye exclusion test). In conclusion, cytotoxicity analysis showed no apparent difference between the diameters studied, whereas there are clear differences in particle uptake. On the other hand, Alshatwi and collaborators [23] have published a work where the toxicity of platinum NPs is evaluated. The objective in this project is to investigate the effects of platinum NPs on cell viability, nuclear morphology, and cell cycle distribution on SiHa cells (a cervical cancer cell line). To study the nuclear morphology, SiHa cells were incubated with platinum NPs for 24 hours. Then, cell nucleus were stained by 1mg/mL PI and examined under a fluorescence inverted microscope. In treated cells, nuclear fragmentation, chromatin condensation, and nuclear swelling were observed. The nuclear fragmentation is a hallmark of late apoptosis. In the same way, PI was also used to determine the cell cycle stage of treated cells by a flow cytometer. The results showed that these NPs induced a G2/M phase cell cycle arrest due to DNA damage. Briefly, this investigation suggests that platinum NPs inhibit cell proliferation because they induce cell death via apoptosis. Moreover, the NPs also have effect by reducing cell viability and causing DNA fragmentation and G2/M cell cycle arrest. That is why; they can be a potential therapy agent in the cervical cancer treatment. Secondly, we describe two different ways to evaluate the cell death induced by apoptosis. Apoptosis or program cell death occurs in the normal physiology during development and aging to keep a balance between proliferation and cell death [24,25]. It is also a defense mechanism and it is important for removing damaged cells and decreasing the damage on neighbor cells. Frequently, cell apoptosis is usually evaluated using a caspase-3 activation assay. For instance, Xun et al. put into effect a work where they tried to study the effect of silica NP size (7, 20, and 50 nm) on cytotoxicity. The cell line HepG2, a human hepatoma model, was selected for the study. HepG2 cells were treated with silica NPs of 20 nm (SNP20) at concentrations of 160 µg/ mL and 320 µg/mL for 24 and 48 hours, respectively. Then, caspase-3 assay buffer and caspase-3 lysis buffer were added into the cell culture. After reaction, the fluorescence intensity was detected under a fluorescence plate reader. Caspase-3 is an essential molecule in the final phase of apoptosis induced by diverse stimulus. Results obtained in this analysis showed an increase of caspase-3 activity about 2-3 fold higher in cells treated with SNP20 than that of controls after 24 hours of incubation and about 3-5 fold after 48 hours. About this evidence, silica NPs could activate caspase-3 and downregulate procaspase-9, indicating an activation of caspase-9 in HepG2 cells. That is, these NPs can change apoptotic protein expression and greatly increase apoptosis in mitochondria-dependent pathways in hepatoma cells. In addition, Annexin V-FITC/PI assay was used in this study to quantify cell apoptosis. This test allows distinguishing between normal, apoptotic, and necrotic cells. 
HepG2 cells and normal L-02 hepatic cells exposed to SNP20, at the same two concentrations used before, were stained using Annexin V-FITC and PI and analyzed by flow cytometry. Apoptotic cells undergo changes in the distribution of their membrane lipids. Phosphatidylserine is a phospholipid normally located on the inner leaflet of the membrane, whereas during apoptosis it becomes exposed on the cell surface. In this way, Annexin V, which has a high affinity for phosphatidylserine, is used as a marker of early apoptosis. PI, in turn, is used to distinguish necrotic cells from apoptotic cells; it is an agent that intercalates into the DNA of dead cells once membrane integrity is lost. As a result, almost no apoptotic cells were detected in control and treated L-02 cells or in control HepG2 cells. On the contrary, many apoptotic cells were found in HepG2 cells treated with SNP20, indicating that apoptosis induced by NPs is dose-dependent [25]. Annexin V staining is a method commonly used for assessing cellular apoptosis. For example, Ashokkumar and collaborators as well as Grudzinski et al. employed this procedure in their studies. In the first one, the aim was to evaluate whether gold NPs are able to induce apoptosis in cancer cells. The HepG2 cell line was used for the investigation, and the cells were treated for 24 hours with gold NPs. After that, the cells were stained with Annexin V and the level of apoptosis was quantified as the percentage of Annexin V-positive cells. Finally, the results showed that HepG2 cells treated with gold NPs underwent apoptosis, whereas untreated cells did not [26]. In the second investigation, the authors studied the cytotoxicity of carbon-encapsulated iron nanoparticles (CEIN) in murine glioma cells (GL261). These cells were divided into two groups: one was treated at two different concentrations for 24 hours, whereas the other group was the control group (untreated cells). Then, both groups were stained with Annexin V and the analysis was performed by flow cytometry. The results indicated that the samples treated with the higher concentration of CEIN showed some pro-apoptotic and necrotic events in the glioma cell line. In summary, this work represents significant progress because it is the first report clearly showing that CEINs surface-modified with acidic groups cause murine glioma cell-specific cytotoxicity [27]. Immunological studies Besides the fact that NPs play an important role in medicine and their properties can be used to improve traditional treatments and diagnostic agents [28], there are many biocompatibility studies on the size, shape, charge, solubility, and surface modification of NPs. However, the interface between NPs and the immune system is still not well understood. According to the literature, NPs can activate and/or suppress the immune response, and their compatibility with this system is determined by their surface chemistry. Therefore, NPs can be designed to avoid immunotoxicity and reach desirable immunomodulation [29,30]. Preclinical data show that NPs are not more immunotoxic than conventional drugs, so NPs employed as drug carriers can provide advantages, such as the reduction of systemic immunotoxicity. For instance, NPs can release the drug in a specific tissue in order not to affect healthy tissues, and they may keep drugs away from blood cells. Moreover, NPs can also decrease drug immunotoxicity by increasing their solubility.
However, NPs are generally picked up by phagocytic cells of the immune system, such as macrophages. This can produce immunostimulation or immunosuppression, which may promote inflammatory or autoimmune disorders. For example, granuloma formation was observed in the lungs and skin of animals treated with carbon nanotubes [29]. Next, we briefly describe immunostimulation and immunosuppression linked to NP uptake. Immunosuppression There are not many studies in this area for NPs because most of the research has focused on the inflammatory properties of NPs [29]. One of the studies on immunosuppression revealed that inhalation of carbon nanotubes (CNTs) results in reduced immune function in mice. This occurs through a mechanism that involves the release of TGF-β1 from the lungs. TGF-β1 then enters the circulation and increases the expression of two molecules whose function is to inhibit T-cell proliferation [31]. Other NPs that can produce immunosuppression are zinc oxide (ZnO) particles. They are able to induce immunosuppression in vitro and in vivo depending on their size and charge. ZnO NPs suppress innate immunity such as natural killer cell activity. Moreover, the CD4 + /CD8 + ratio, a marker of mature T-cells, serum levels of T helper-1 cytokines (interferon-γ and IL-12p70), and pro-/anti-inflammatory cytokines were slightly reduced. In contrast, no significant changes were detected in T- and B-cell proliferation [32]. Immunostimulation Biological therapeutics, including NPs, are able to activate the immune system. In other words, nanomaterials are identified by this system and innate or adaptive immune responses are produced. We briefly describe several effects of NPs on cytokine secretion and immunogenicity, and the mechanism through which nanoparticles are recognized by the immune system. On the one hand, many immunostimulatory reactions driven by NPs are mediated by the release of inflammatory cytokines. Cytokines are signaling molecules induced by different types of nanomaterials: gold, dendrimers, or lipid nanoparticles, among others. Moreover, NP size is an important factor in determining whether antigens loaded into NPs stimulate type I (interferon-γ) or type II (IL-4) responses [29,30]. For example, a study carried out on peripheral blood mononuclear cells of non-atopic women showed that palladium NPs increased the release of IFN-γ [33]. In another study on THP-1 macrophages, the results showed that chitosan-DNA nanoparticles did not induce pro-inflammatory cytokines, whereas the secretion of metalloproteinases 9 and 2 was increased in cell supernatants [34]. This kind of analysis is often evaluated by enzyme-linked immunosorbent assay (ELISA). Antibodies and an enzyme-mediated color change are used to determine the presence and relative concentrations of particular cytokines present in the tissue or cell culture media [35,36]. ELISA is based on the concept of an antigen binding to its specific antibody, which allows identification of small quantities of molecules such as cytokines [36]. In turn, NPs can induce an antibody response (immunogenicity). NPs raise special interest in this area because immunogenicity is improved by stimulating the production of antibodies [30]. Plasma B cells are responsible for making antibodies, specialized proteins, in response to an antigen [29]. A recent in vivo study of a novel dengue nanovaccine (DNV) demonstrated that the vaccine can stimulate humoral and cell-mediated immune responses.
This vaccine is composed of inactivated dengue virus type 2. Moreover, the adjuvant chitosan, together with NPs containing cell wall components from Mycobacterium bovis, was used to improve the action of the DNV. Mice treated with this compound showed increased cytokine levels and a strong anti-dengue IgM and IgG antibody response. The release of IFN-γ by CD4 + and CD8 + T cells was also increased. In conclusion, these results demonstrated that the DNV can be an important vaccine candidate against dengue disease [37]. Finally, we briefly mention the mechanism through which NPs are phagocytosed by cells. Macrophages are responsible for the first line of defense in the organism. They detect and take up foreign molecules and synthesize mediators that alert the immune system to infection. RAW 264.7, a mouse leukemic monocyte/macrophage cell line, is the model line used for phagocytosis assays. For instance, RAW 264.7 was used in a recent in vitro study of the effect of silica and gold NPs on macrophages. The results showed that silica and gold NPs decreased phagocytic capacity by 50%, while surface markers and cytokine secretion were not disturbed by the particles [38]. For this type of analysis, different methods can be used depending on the composition of the nanomaterial. These procedures include confocal microscopy, optical and fluorescence microscopy, transmission electron microscopy (TEM), or scanning electron microscopy (SEM) [38,39]. Conclusions and perspectives Bearing in mind the importance and relevance of NPs in the biomedical field, a better understanding of their effects on the human body is required. According to the points described above, the physicochemical properties of nanomaterials play a critical role in toxicity. Thus, the alteration of these properties could be used to modify the toxicity and/or biocompatibility of these materials. On the other hand, it is also necessary to obtain as much information as possible about the biological interactions of NPs with cells, tissues, and proteins. In fact, this could be a critical parameter for the future application of nanomaterials in the biomedical area. In this review, special attention has been paid to the protein corona because it plays a critical role in toxicity and biocompatibility. Many studies have been performed; however, further studies are needed to learn how to exploit the benefits of the corona in vivo, mainly because it seems quite complicated to predict the composition of the protein corona and its biological consequences. Despite immense progress in the evaluation of the toxicity and biocompatibility of nanomaterials, this comprehensive review points out that further experimentation is still needed in this field to obtain a better understanding of the interaction between nanomaterials and the human body.
6,546.4
2015-07-15T00:00:00.000
[ "Materials Science", "Medicine", "Environmental Science" ]
Fröhlich resonance splitting in hybrid GaN nanowire-Ag nanoparticle structures Plasmonic nanoparticles (NPs) have attracted significant attention due to their unique optical properties and broad optoelectronic and photonic applications. We investigate modifications of emission in hybrid structures formed by 60 nm silver NPs and GaN planar nanowires (NWs). Bare GaN NWs exhibit photoluminescence (PL) spectra dominated by broad bands peaking at ∼3.44 eV and ∼3.33 eV, attributed to basal plane stacking faults. In hybrids, two new narrow PL lines appear at 3.36 and 3.31 eV, resulting in PL enhancement at these energies. While the 3.36 eV line in hybrid structures can be explained using the Fröhlich resonance approximation based on the electric dipole concept, the appearance of two features at 3.36 and 3.31 eV indicates the splitting of resonance lines. This phenomenon is explained in framework of theoretical model based on the interaction of the dipole with its charge image, taking into account the quadrupole moment of the silver sphere and the quadrupole field of the charge image. A good agreement is obtained between the calculated Fröhlich resonance frequencies and the experimental PL lines in hybrid structures. Introduction Nowadays, plasmonic nanoparticles (NPs) gain significant importance in various fields of science and technology due to their unique optical properties and potential applications [1].The properties of plasmonic NPs provide a platform for exploring innovative optoelectronic and photonic devices with enhanced capabilities.Their ability to confine and manipulate light at the nanoscale enables the development of plasmonic waveguides [2], nanoantennas [3], and nanoscale optical circuits [4].Plasmonic NPs can be integrated into devices such as solar cells [5], light-emitting diodes [6], and photodetectors [7] to enhance their performance or enable new functionalities. Surface plasmons, the collective oscillations of free electrons, occur at the interface between metal and dielectric [8].The incident electromagnetic wave induces the movement of electrons which oscillate at a surface plasmon resonance (SPR) frequency.This specific frequency depends on metal properties and the dielectric constant of the adjacent material.The optical properties demonstrated by metal NPs are due to a phenomenon called localized SPR, when the light interacts with particles that are significantly smaller than the incident wavelength and when the oscillation of free electrons is limited by the volume of NPs.For a metal NP in a dielectric medium, collective oscillations of free electrons in the metal NPs result in the appearance of surface modes with spectral positions determined by the Fröhlich condition for the permittivity of the metal and the matrix. 
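As a numerical illustration of the Fröhlich condition mentioned above, the snippet below solves Re ε_metal(ω) = −2ε_m for a Drude-type silver permittivity and a few values of the surrounding permittivity ε_m. The Drude parameters are the ones quoted later in the modeling section (ℏω_p = 8.98 eV, ε_∞ = 4.96); the ε_m values chosen here are purely illustrative, since the paper itself uses the full GaN dispersion and an image-charge treatment rather than a homogeneous host.

```python
import numpy as np

# Drude parameters for silver (as quoted in the modeling section of the paper).
hbar_omega_p = 8.98   # eV
eps_inf = 4.96

def froehlich_energy(eps_medium: float) -> float:
    """Photon energy (eV) satisfying Re eps_Ag = -2*eps_medium for a lossless Drude metal.

    Re eps_Ag(E) = eps_inf - (hbar_omega_p / E)**2
    =>  E = hbar_omega_p / sqrt(eps_inf + 2*eps_medium)
    """
    return hbar_omega_p / np.sqrt(eps_inf + 2.0 * eps_medium)

for eps_m in (1.0, 1.5, 2.0):   # vacuum and two illustrative effective media
    print(f"eps_m = {eps_m:.1f} -> Froehlich resonance at ~{froehlich_energy(eps_m):.2f} eV")
```

For a sphere in vacuum this already lands near 3.4 eV, i.e. in the same near-band-edge region as the FR lines reported below; the splitting into two lines, however, requires the substrate treatment developed in the modeling section.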
Noble metal NPs, particularly Ag and Au, support plasmonic resonances that can be tuned throughout the UV-vis-NIR region [9][10][11].A requirement for the localized SPR is a large negative real part and small imaginary part of dielectric function, hence a number of other metals (Li, Na, Al, In, Ga, Pt and Cu) should theoretically support plasmon resonances [12,13].However, most of these metals are either unstable or difficult to work with, or they are prone to surface oxidation as Cu, which can significantly affect optical properties.Among different metals, silver exhibit high refractive index sensitivity and higher figure of merit [9] and is considered as one of the most important materials for plasmonics that can support a strong surface plasmon at a desired resonance wavelength in a broad spectrum from 300 to 1200 nm [14].Given the significance of modifying and enhancing spontaneous emissions, ongoing efforts are directed towards exploring diverse semiconductor nanostructures with unique optical properties, ranging from Tamm plasmon photonic crystals and periodic Bragg structures to meso-resonators and hybrid structures [15][16][17][18][19][20]. Previously, we have studied optical properties of hybrid structures formed by Ag NPs and GaN planar nanowires (NWs) [21], when GaN planar NWs were grown along the [10 10] in-plane crystallographic directions.In this case, the GaN NWs have demonstrated almost ideal geometric shape and high crystalline quality [22].The effect of small Ag NPs on the emission properties reveals in the appearance of an additional narrow line at ∼3.36 eV in photoluminescence (PL) spectra in the vicinity of the Ag NPs, which was explained using a theoretical model based on the Fröhlich resonance approximation and the effective medium approach.The experimental results were well fitted by considering an effective medium composed of air and 10% GaN. In this work, we study a hybrid structure formed using Ag NPs and GaN planar NWs grown along the [11 20] in-plane crystallographic direction and develop a model to explain the experimental results.In this configuration, the GaN NWs possess structural defects such as stacking faults with associated broad emission bands at ∼3.44 eV and ∼3.33 eV.In these hybrid nanostructures, we observe the appearance of two strong and narrow emission lines at ∼3.36 eV and 3.31 eV.To explain the results and the observation of two additional resonance peaks in the experimental luminescence spectra, we have developed a theoretical model.The calculations are based on the method of images of charges and take into account the influence of the substrate on polarizability tensor considering the quadrupole moment of the sphere and the quadrupole field of the image. Results and discussion The morphology of samples has been studied using scanning electron microscopy (SEM) and cathodoluminescence (CL).The GaN planar NWs grown by selective area metal-organic vapor phase epitaxy (MOVPE) in the [11 20] direction are shown in figure 1.While the growth process was conducted with identical parameters and on the same template as in the case of the GaN planar NWs formed along the [10 10] axis, the morphology of the NWs grown along the [11 20] crystallographic axis exhibits shape imperfection as depicted by SEM image in figure 1(a). 
The difference in quality between planar NWs oriented in different directions can be attributed to the dependence of the lateral and vertical growth rates on orientation. The shape of NWs is determined by slow-growing semi-polar {11 01} facets [23]. In addition, the emission, as revealed by the panchromatic CL map (figure 1(b)), is non-uniform across the facets, where the darker intensity contrast corresponds to areas with the presence of stacking faults. The size and shape of the Ag NPs were characterized using a scanning transmission electron microscopy (STEM) detector in situ in the SEM at an electron beam voltage of 30 kV (figure 2(a)). The particles exhibit a shape close to spherical, with a diameter of ∼60 nm. Optical absorption spectra were measured in the temperature range of 5-295 K. We did not observe a notable dependence of absorption on temperature, as illustrated in figure 2(b), which is consistent with previous findings [24]. The broad absorption band of the Ag NPs, centered at ∼460 nm, is likely to overlap at the high-energy tail with the GaN emission. In addition, reflection spectra of the Ag NPs on GaN templates have been measured (see figure S1 in the supplementary information). The hybrid Ag NPs/GaN NWs structure is shown in figure 3(a). To provide a closer look at the distribution of the Ag NPs, the region marked by a red square is magnified and shown in figure 3(b). While the size of the NPs varies, it is important to note that all of them possess dimensions significantly smaller than the wavelength of light (∼400 nm). The properties of the near-bandgap (NBG) emission of the hybrid Ag NPs/GaN NWs were examined using µ-PL and time-resolved PL (TRPL) techniques. Figure 4(a) displays the typical time-integrated PL spectrum at 5 K for the bare GaN NW (depicted by the red line), while figure 4(b) shows the spectrum measured for the hybrid Ag NPs/GaN NW structures. The NBG spectrum of the bare GaN planar NWs, which possess structural defects such as stacking faults, reveals the presence of three distinct features. In addition to the excitonic line X at ∼3.47 eV, there are two characteristic bands labeled as SF1 at ∼3.44 eV and SF2 at ∼3.32-3.33 eV. These bands are associated with basal-plane stacking faults of intrinsic type I1 and I2, respectively [25]. Several PL spectra measured at randomly chosen places are shown in figure S2 (supplementary data). After the deposition of Ag NPs, notable changes occur in the PL spectrum. Two distinct and narrow lines appear at ∼3.36 eV and 3.31 eV, labeled as FR1 and FR2, respectively. The relative intensity of these lines can vary, as illustrated in figure 4(b), where spectra were captured from different points across the structure. In some cases, the dominance of FR1 at 3.36 eV is evident (curve 3), while in other cases, FR2 at ∼3.31 eV exhibits a stronger intensity (curves 1 and 2). More spectra taken in the hybrid structure are presented in figure S3 (supplementary data).
It is clear from the TRPL image in figure 5(a) that no additional lines are present in the case of the bare GaN NWs. On the contrary, the TRPL image in figure 5(b), measured for the hybrid Ag NPs/GaN NWs at place 1 (see curve 1 in figure 4(b)), shows that the PL emissions peaking at ∼3.36 eV and 3.31 eV have different temporal behaviors compared to the SF-related emissions. The narrower FR1 line partly overlaps with the broader SF2 emission; however, the PL decay characteristics of these features are distinct, as visible from the TRPL images in figure 5. The decay of the PL intensity is illustrated in figure 6(a), where TRPL spectra are plotted with a time step of ∼43 ps and are shifted vertically for clarity. The spectrum at the bottom was taken at the time of the incoming laser pulse, when luminescence related to X, SF1, FR1 and FR2 appears. Clearly, the PL lifetime is longer for the FR1 and FR2 lines compared to the excitonic and SF1 emissions, which decay very rapidly. The PL decay curves are plotted in figure 6(b) for energy positions corresponding to the X (blue), FR1 (green) and FR2 (cyan) lines, respectively. The increase in temperature leads to a rapid thermalization of the emission processes. In figures 7(a) and (b) we compare TRPL measurements taken at 5 K and 75 K for place 3 (see PL spectrum 3 in figure 4(b)). The intensity of the FR-related lines is rapidly reduced and almost disappears above 80 K, indicating a rather weak localization potential. To demonstrate in detail that the intensity of the FR1 line indeed decreases more rapidly than that of the SF1 and SF2 bands, we show PL spectra measured at different temperatures in figure 7(c). At the same time, the decrease in PL lifetime for the FR1 line is not as rapid as for the SF2 line with increasing temperature, as depicted in figure 7(d). The SF emission can be associated with recombination of electrons and weakly localized holes in the type II quantum well [26]. Even with a slight increase in temperature, the hole becomes less localized, and the recombination time is affected by the increasing contribution of nonradiative recombination channels. However, the SF-related emission itself still appears in PL spectra at elevated temperatures. In contrast, for the FR1 line, the recombination time is less sensitive to increasing temperature, but the line vanishes above ∼75 K. This further illustrates that the origin of the FR lines is not related to the GaN exciton or stacking faults. Modeling Here, we do not account for the potential oxidation of the Ag NPs, which typically does not occur under normal environmental conditions. Previously, we addressed this issue by calculating the effects of an oxidation layer, which revealed only a minor shift in the energy position of the resonance, without causing a splitting of the lines. As a result of light interaction with a metal NP on the surface of a semiconductor substrate, the field's amplitude near the NP significantly increases in the substrate, which correlates with the intensity of spontaneous emission [27,28]. To explain the experimental results and the appearance of two resonance lines in the vicinity of the metal NPs, i.e.
the splitting of the Fröhlich resonance, we suggest the following theoretical model. Let us consider a sphere with permittivity ε i , lying in a half-space with permittivity ε 1 , above a boundary with a half-space having permittivity ε 2 . A schematic representation of the system is shown in figure 8. When an electric field is applied to the sphere, it polarizes the spherical object in a certain way. To describe this process, we can employ the method of images of charges. The core idea behind the image method is to select an effective system of point charges that provides the required boundary conditions. These charges are called 'image charges'. The principles of constructing image charges are similar to the construction of point sources in a mirror image. When a vector quantity is reflected from an interface plane, the vector component parallel to the plane is preserved, while the perpendicular component changes direction. This means that the problem becomes anisotropic, and instead of a scalar polarizability that characterizes the response of the system to the applied field, a polarizability tensor appears [29,30]: p = αE 0 , with the tensor α diagonal (α xx = α yy , with α zz in general different), where p is the polarization vector, E 0 is the electric field and z is the direction perpendicular to the interface plane. We now pass from the static approximation to the case when an electromagnetic wave is incident on the dielectric interface. This is the so-called quasi-static approximation, which is valid when the size of the polarized object is much smaller than the wavelength. Then, the s-polarized wave (with respect to the dielectric interface, that is, the plane of the substrate) will have an electric field along the interface. Consequently, it will induce polarization (and scattering) corresponding to the polarizability coefficients α xx = α yy = α || . At the same time, the p-polarized wave has an electric field both along and across the interface plane, thereby exciting the polarization of the system in accordance with both polarizability coefficients α || and α ⊥ . The simplest approximation considers the influence of the substrate on the polarizability through the image dipole. This approach assumes that the electric field of the image dipole in the part of space where the sphere is located can be considered uniform. This approximation is valid when the distance from the sphere's center to the substrate is significantly larger than the radius of the sphere. However, if the NP is placed directly on a substrate, this approximation is, in general, not justified. To obtain a more accurate approximation, it is necessary to consider the deviation of the induced field of the substrate from uniformity in the region where the spherical NP is located. This can be achieved if we take into account the quadrupole moment of the sphere and the quadrupole field of the image. The method described earlier [31] allows one to consider the effect of the substrate on the polarizability to arbitrary multipole order. However, a comparative calculation using the quadrupole approximation and higher multipoles showed that accounting for orders higher than the quadrupole does not lead to a significant improvement in accuracy. To describe our results, we consider the quadrupole moment of the metal sphere on the substrate and use the corresponding expressions for the components of the polarizability tensor given in [31].
Figure 8. Silver sphere on the surface of a nanowire. Schematic of the operation of the imaging method for different relative positions of pairs of negative and positive charges with respect to the interface plane (for the case when the substrate is optically denser than the medium in which the nanosphere is located). In these expressions, χ is an auxiliary parameter that defines the dielectric contrast between the substrate and the external space; it is equal to zero in the case of a spherical NP embedded in a homogeneous medium. R is the nanosphere radius and h is the distance from the center of the sphere to the interface. The Fröhlich resonance of a metal sphere occurs when the polarizability of the sphere reaches its peak value. The polarizability peak is reached near the frequencies at which the denominator of the expression approaches zero. In this situation, we have different expressions for the polarization depending on the direction of the field, with different denominators that approach zero at different frequencies. This leads to the splitting of the resonance into two peaks. The scattering cross-section is given by σ sc = k 4 |α| 2 /(6π), evaluated separately for the normal and tangential components of the polarizability, where k is the wave vector in the medium. The dielectric constant for silver was calculated according to the Drude model (ℏω p = 8.98 eV, ε ∞ = 4.96, ℏγ = 18 meV), and the dispersion of GaN was taken according to [32]. The spectrum of the scattering cross-section is significant as it determines the intensity of the electric field confined in the vicinity of a metal NP situated on a semiconductor NW. This intensity is directly proportional to the scattering cross-section of the NP. As a result, a direct correlation exists between specific features within the scattering cross-section spectrum and the PL spectrum of NWs [20,32,33]. In figure 9, we show the calculated scattering cross-section for a silver NP placed on a GaN substrate. The blue color represents the scattering cross-section for the normal component of the field, and the orange color shows the tangential component. The obtained resonance peaks have frequencies of 3.31 eV and 3.36 eV, which are in very good agreement with the experimental features in the PL spectrum. The positions of the experimental peaks FR1 and FR2 are marked with dotted lines.
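The quadrupole-order expressions of [31] are not reproduced here, but the qualitative origin of the splitting can be illustrated with the simpler dipole-image approximation discussed above. The sketch below uses the Drude permittivity with the parameters quoted above, a constant illustrative value ε_GaN ≈ 7 near the band edge instead of the full dispersion of [32], and one common statement of the image-dipole correction for a sphere resting on the substrate (h = R); these simplifications and the assumed prefactors mean that the peak positions come out only roughly, not at the 3.31/3.36 eV values obtained with the quadrupole model of the paper.

```python
import numpy as np

# Drude model for silver (parameters quoted in the text).
hbar_omega_p, eps_inf, hbar_gamma = 8.98, 4.96, 0.018   # eV, dimensionless, eV

def eps_ag(E):
    return eps_inf - hbar_omega_p**2 / (E**2 + 1j * hbar_gamma * E)

eps1 = 1.0      # ambient (air)
eps2 = 7.0      # illustrative constant value for GaN near the band edge (assumption)
beta = (eps2 - eps1) / (eps2 + eps1)        # image-charge reflection factor
R = 30e-9                                    # sphere radius (m), 60 nm particle
h = R                                        # sphere resting on the substrate

E = np.linspace(2.8, 3.8, 2001)              # photon energy grid (eV)
alpha0 = R**3 * (eps_ag(E) - eps1) / (eps_ag(E) + 2 * eps1)   # free-sphere dipole polarizability

# Dipole-image correction (one common form): the image dipole reinforces the field
# at the sphere, more strongly for the field component normal to the interface.
alpha_perp = alpha0 / (1 - beta * alpha0 / (4 * eps1 * h**3))
alpha_par  = alpha0 / (1 - beta * alpha0 / (8 * eps1 * h**3))

# Dipole scattering cross-section per component, sigma = k^4 |alpha|^2 / (6 pi).
hbar_c = 197.327e-9                          # eV*m
k = E / hbar_c
sigma_perp = k**4 * np.abs(alpha_perp)**2 / (6 * np.pi)
sigma_par  = k**4 * np.abs(alpha_par)**2  / (6 * np.pi)

print(f"normal-component peak    : {E[np.argmax(sigma_perp)]:.2f} eV")
print(f"tangential-component peak: {E[np.argmax(sigma_par)]:.2f} eV")
```

Even this crude version produces two distinct peaks, with the normal (perpendicular) mode red-shifted more strongly than the tangential one, which is the qualitative content of the resonance splitting; the quantitative positions in figure 9 come from the quadrupole-order treatment of [31] with the measured GaN dispersion.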
Conclusions We have investigated the optical properties of hybrid structures formed by the plasmonic Ag NPs with a size of 60 nm and planar GaN NWs grown by selective area MOVPE in the [11 20] crystallographic direction. The bare GaN NWs demonstrated low-temperature PL bands with maxima at ∼3.44 eV and ∼3.33 eV, denoted as SF1 and SF2, associated with emissions related to the basal plane stacking faults of intrinsic type I1 and I2, respectively. However, in the presence of the Ag NPs, we observed the appearance of two distinct and narrow emission lines at ∼3.36 eV and 3.31 eV, denoted as FR1 and FR2, respectively. TRPL measurements have shown that these lines exhibited different PL lifetimes compared to both the exciton and stacking fault emissions. Increasing the temperature to ∼75 K resulted in rapid thermalization of the FR1 and FR2 lines. We proposed a theoretical model to explain these experimental results and the splitting of the Fröhlich resonance into two peaks in the presence of Ag NPs on GaN. This model considered the influence of the GaN substrate on the polarizability tensor through the image dipole, the quadrupole moment of the sphere, and the quadrupole field of the image. The calculated scattering cross-section of Ag NPs on a GaN substrate showed resonance peaks at 3.31 eV and 3.36 eV, in good agreement with the experimental PL spectrum. Methods We utilized selective-area MOVPE to create planar GaN structures. This approach involved an initial growth of a 3 µm thick GaN layer on a (0001) sapphire substrate. Subsequently, a 5 nm thin Si 3 N 4 mask layer was deposited. The growth process occurred at a temperature of 1000 °C. The mask pattern was etched using a focused ion beam (FIB) instrument in situ in the SEM. Planar NWs in this study were grown within a pattern consisting of rectangular trenches with a 500 nm width, aligned in the [11 20] direction. After a subsequent MOVPE overgrowth, this resulted in broader planar NWs with an approximate width of 4 µm. Details on the FIB processing and the currents used were published previously [22]. As plasmonic NPs, we employed commercially available spherical Ag NPs from Sigma-Aldrich with a size of ∼60 nm, as determined by transmission electron microscopy by the supplier. We applied a ZEISS GeminiSEM 560 instrument equipped with a STEM detector to verify the NP size and shape at an electron beam voltage of 30 kV. The selection of Ag NPs 60 nm in diameter was based on the overlap between the emission of GaN and the absorption spectrum of the Ag NPs. To create hybrid structures, Ag particles were deposited onto the GaN NWs using a drop-deposition method. CL measurements were performed using a cold stage that enabled a low temperature of 5 K in a MonoCL4 system integrated with a Zeiss Sigma 300 SEM. The spatial resolution was ∼200 nm, using an electron beam acceleration voltage of 5 kV. PL was measured using a µ-PL setup allowing a spatial resolution of ∼1 µm. PL and TRPL measurements were carried out employing the third harmonic of a 76 MHz Ti:sapphire femtosecond pulsed laser with an excitation wavelength of λ e = 266 nm. Detection was facilitated by a Hamamatsu synchroscan streak camera with a temporal resolution of ∼20 ps. During PL measurements, the sample temperature was regulated using an Oxford Microstat, enabling a temperature range between 5 K and 295 K. Figure 1. (a) SEM and (b) panchromatic CL images of four GaN planar NWs formed in the [11 20] in-plane crystallographic direction.
(a) STEM image of the Ag NPs. (b) Absorption spectra of Ag NPs measured at different temperatures. Figure 3. (a) SEM image of the hybrid structure. (b) SEM image of Ag NPs on the surface of GaN NWs taken from the region indicated by the square in (a). Figure 4. PL spectra for (a) the bare GaN NW and (b) the hybrid structure. Spectra 1, 2 and 3 were acquired from different places on the Ag NPs/GaN NWs. Figure 5. TRPL images for (a) the bare GaN NW and (b) the hybrid structure. Figure 6. (a) Time-resolved PL spectra for place 1 (curve 1 in figure 4(b)) are shown with a time step of ∼43 ps. (b) PL decay curves measured at energies corresponding to different PL emissions. Red lines show fitting by a single exponential decay law. Figure 7. (a) and (b) TRPL images measured at 5 and 75 K, respectively. (c) PL spectra measured at place 3 (see spectrum 3 in figure 4) at different temperatures. (d) Estimated PL decay time as a function of temperature for different emissions measured at place 3. Figure 9. The splitting of the Fröhlich resonance for a silver particle on a GaN surface. The blue color indicates the scattering cross-section for the normal component of the field, and the red color represents the tangential component. The positions of the experimental peaks are marked with dotted lines.
5,062.4
2024-05-09T00:00:00.000
[ "Materials Science", "Physics" ]
Spatiotemporal Variability of Chlorophyll-a and Sea Surface Temperature, and Their Relationship with Bathymetry over the Coasts of UAE The catastrophic implication of harmful algal bloom (HAB) events in the Arabian Gulf is a strong indication that the study of the spatiotemporal distribution of chlorophyll-a and its relationship with other variables is critical. This study analyzes the relationship between chlorophyll-a (Chl-a) and sea surface temperature (SST) and their trends in the Arabian Gulf and the Gulf of Oman along the United Arab Emirates coast. Additionally, the relationship between bathymetry and Chl-a and SST was examined. The MODIS Aqua product with a resolution of 1 × 1 km² was employed for both chlorophyll-a and SST, covering a timeframe from 2003 to 2019. The highest concentration of chlorophyll-a was seen in the Strait of Hormuz with an average of 2.8 mg m−3, which is 1.1 mg m−3 higher than the average for the entire study area. Three-quarters of the study area showed a significant correlation between the Chl-a and SST. The shallow (deep) areas showed a strong positive (negative) correlation between the Chl-a and SST. The results indicate the presence of trends for both variables across most of the study area. SST significantly increased in more than two-thirds of the study area in the summer, with no significant trends detected in the winter. Introduction The Arabian Sea is one of the most essential bodies of water not only for the local economy, but also for the global one, because it serves as a route to a significant portion of the world's oil supply. The ecosystems of the Arabian seas (Arabian Gulf (hereafter AG), Gulf of Oman (hereafter GO), and Arabian Sea) are fragile and susceptible to pollution. Among these pollutants are algal blooms, particularly red tide [1][2][3]. The bloom's growth and biomass depend on the availability of nutrients in the surface layer. Therefore, the processes by which the nutrients reach the surface are of crucial importance. The main source of nutrients to the surface layer is the deep water, which is rich in nutrients [4]. The transfer of these deep nutrients is affected by wind-induced or thermohaline upwelling, vertical diffusion, deepening of the surface layer, and vertical overturning [4]. In the Arabian Sea, the transfer of nutrients is related to the summer (southwest) and winter (northeast) monsoon seasons. The distinct direction of the summer monsoon from the southwest, which is almost parallel to the Oman coastline in the northern Arabian Sea, produces a strong coastal upwelling. SST is one of the main factors that affects the growth of phytoplankton, especially at an optimum temperature when the correlation is significantly high. However, as Nurdin [28] reported, an excessive increase in SST would hinder the growth of phytoplankton. Another factor that affects the growth of phytoplankton is the amount of nutrients loaded with the freshwater from river discharges. Jutla [29] found a positive correlation between seasonal river discharges, SST, and Chl-a, and vice versa, in the coastal Bay of Bengal region. Seawater currents are also one of the main factors that drive the Chl-a concentration in water bodies. Kouketsu [30] and Chu [31] suggest that in the Kuroshio Extension the cyclonic eddies are related to high area-averaged Chl-a concentration and anti-cyclonic eddies are often related to low area-averaged Chl-a. The existence of large spatial and temporal gaps in in situ measurements hampers the complete understanding of Chl-a behavior. Our work utilizes satellite data that provide regular long-term temporal and spatial continuity to comprehend the pattern and change of Chl-a characteristics in both space and time.
The main goal of this study is to examine the spatiotemporal variability of Chl-a and other oceanographic variables over the AG and GO for the period spanning 2003 to 2019. The spatiotemporal analysis elucidates the impact of the SST on the growth of phytoplankton over the region. Additionally, we investigated the variability of both SST and Chl-a over the coastal areas of the UAE using the empirical orthogonal function (EOF). The objectives of this research are to (i) conduct frequency analysis of the mode of the variability in SST and Chl-a and their relationship with regional wind circulations, and (ii) investigate the presence of trends in both variables and their seasonal decomposition. Study Area The study area is shown in Figure 1. The area covers the AG and GO along the UAE coasts (1318 km). The AG is located in the Middle East between latitude 24.0°N and 30.0°N and longitude 48.0°E and 56.5°E. The AG is separated from the northern Indian Ocean by the Strait of Hormuz and the GO [3,6]. The AG is 990 km long with a maximum width of 338 km and an average depth of 36 m for much of the Arabian coast and 60 m depth along the Iranian coast [32,33]. The GO is situated between 22.0°N to 26.0°N and 56.5°E to 61.7°E. The GO is 320 km wide between Ra's Al-Ḥadd in Oman and Gwādar Bay on the Pakistan-Iran border.
It is 560 km long and connects with the AG through the Strait of Hormuz [34]. Although the AG is located entirely north of the Tropic of Cancer, its climate is tropical in the summer and temperate in the winter (Reynolds, 1993). The climate of the AG has two main seasons: winter (December to March) and summer (June to September), and two transition periods, fall (October to November) and spring (April to May) [35]. In the summer, the air temperature reaches up to 51 °C with an average of 41 °C, while in winter the air temperature drops to as low as 15 °C [33]. Due to the surrounding arid climate, evaporation surpasses the combination of precipitation and runoff, resulting in hypersaline water mass production [36]. The climate of the GO and the northern Arabian Sea is significantly influenced by the summer and winter monsoons driven by land-sea latent heat differences. The summer monsoon occurs from July to September and the winter monsoon from November to April [5,7]. The SST is considerably variable in both Gulfs due to the effects of the surrounding landmass and air temperatures [37]. Chl-a Data To characterize the spatial and temporal distribution of algal blooms along the coast of the UAE, daily remotely sensed Chl-a concentration and SST were obtained for the period between 2003 and 2019. Level 2 products with a spatial resolution of 1 × 1 km² of the Chl-a concentration from MODIS onboard the Aqua satellite were downloaded from the NASA MODIS standard products at https://oceancolor.gsfc.nasa.gov/cgi/browse.pl. These data are in the netCDF-4 format (.nc), which contains multi-object files [38]. The Band Select (Data Conversion) tool from the Sentinel Application Platform (SNAP) was used to extract the products of both Chl-a and SST, which were then mosaicked using the Geospatial Data Abstraction Library (GDAL) merge tool through pyQGIS.
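As a rough sketch of the extraction and mosaicking step outside SNAP/QGIS, the GDAL Python bindings can be driven directly; the subdataset path, variable name, and file pattern below are assumptions about the granule layout rather than a verified recipe, and swath geolocation handling may need adjustment for the actual product.

```python
import glob
from osgeo import gdal

gdal.UseExceptions()

def granule_to_grid(nc_path, variable, out_tif, res_deg=0.01):
    """Project one geophysical variable (e.g. chlor_a or sst) from a MODIS L2
    swath granule onto a regular lat/lon grid using its geolocation arrays.
    The NETCDF:"file":group/variable path is the usual convention for L2 ocean
    color files; adjust it to the actual product structure."""
    subdataset = f'NETCDF:"{nc_path}":geophysical_data/{variable}'
    gdal.Warp(out_tif, subdataset, dstSRS="EPSG:4326",
              xRes=res_deg, yRes=res_deg,   # roughly 1 km at these latitudes
              geoloc=True, resampleAlg="near")
    return out_tif

def mosaic_daily(tif_paths, out_tif):
    """Merge the projected granules of one day; later granules simply overwrite
    earlier ones, mimicking a gdal_merge-style mosaic."""
    gdal.Warp(out_tif, tif_paths, format="GTiff")
    return out_tif

if __name__ == "__main__":
    granules = sorted(glob.glob("AQUA_MODIS.*.L2.OC.nc"))  # hypothetical file pattern
    tifs = [granule_to_grid(p, "chlor_a", p + "_chl.tif") for p in granules]
    if tifs:
        mosaic_daily(tifs, "chl_daily_mosaic.tif")
```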
The Chl-a data contain gaps mainly due to the inability of the sensors to penetrate clouds. From the study period (1 January 2003 to 31 December 2019), out of 6208 days, 6149 daily imageries were available in the archive. Out of the available daily images, 217 images were found to be covered by clouds over more than 75% of the study area. The temporal distribution showed that the daily images were missing around 5% of the study area every day before January 2018 (Figure 2B). The spatial distribution suggested that the area that failed to be covered consistently was the northwestern tip (Figure 2A). The areal coverage drops to below 75% during only a few days for a small number of months. Due to a lack of sufficient data, the areas whose datasets were missing more than 75% of their observations (~5% of the study area) were masked out before the analysis was conducted. The areal average amount of missing data over the entire Arabian Sea was recorded as 16.3%. The month of July had most of the missing data, including seven occasions on which the daily images failed to cover more than 25% of the study area. Moradi [39] also suggested that the data of July included the highest missing values in the region, followed by August and June. SST Data Level 2 products with a spatial resolution of 1 × 1 km² of the SST from MODIS onboard the Aqua satellite were downloaded from the NASA MODIS standard products at https://oceancolor.gsfc.nasa.gov/cgi/browse.pl. For the MODIS data, thermal channels 31 (10.780 to 11.280 µm) and 32 (11.770 to 12.270 µm) are particularly suited to estimate the surface temperature [40]. The MODIS sea surface temperature data have been widely validated for open waters and therefore are widely accepted as accurate [37,[41][42][43][44][45]. The SST data have relatively better coverage than the Chl-a data described above. The study spanned a period (1 January 2003 to 31 December 2019) of 6208 days, out of which 6144 daily images were available in the archive. The notable missing data from the archive is that only 10 days of data were available for the months of November and December 2014. However, only 19 daily images out of the available 6144 imageries had a missing area of more than 75% of the study area (mainly due to clouds). The temporal distribution of the missing data shows that the cloud coverage is much higher in the winter months (from November to April), as shown in Figure 3B.
The areal coverage drops to below 75% during only a few days for a small number of months. The spatial distribution of the missing data indicates that the northwestern tip of the study area is the area with the most missing data (~6%). The areal average of missing data was 2.4%. The amount of missing data decreases moving from the northwestern to the southeastern corner (Figure 3A). Interestingly, from January 2018 to December 2019, there was no significant missing data (images covered more than 98% of the study area). That is likely due to an enhancement of the product processing algorithm. Bathymetry Data The bathymetric data, the Global Relief Model referred to as ETOPO1, which is an improved model of the ETOPO2v2 Global Relief Model, were used in the study. The data are developed by the National Geophysical Data Center (NGDC) of the National Oceanic and Atmospheric Administration (NOAA). The ETOPO1 has two versions: Ice Surface and Bedrock. The Ice Surface version includes the top of the ice sheets (Antarctica and Greenland), while the Bedrock version depicts the base of the ice sheets [46]. For this study, the Bedrock version was used. The vertical datum is referenced from the mean sea level, and the World Geodetic System of 1984 (WGS 84) datum was used as the horizontal datum. The spatial resolution of the data is one arcminute with global coverage. The bathymetry of the study area shows a shallow AG and a much deeper GO (Figure 1). Filling Missing Data Cloud cover significantly obscures surface information; therefore, it is very important to retrieve the Chl-a and SST under overcast skies. The biggest challenge in retrieving the Chl-a and SST is to eliminate cloud contamination. Figure 4 presents how the missing data values and gaps of Chl-a and SST are filled. To fill the missing values of Chl-a, the daily Chl-a data for the study period (2003 to 2019) are used to develop the monthly composites of each Julian day. Then, the missing data values and gaps of the daily data are filled with the corresponding monthly composite values. A MODIS cloud-free composite image (SST, monthly composite product) was employed to fill in the missing pixel values of SST. This method was used because the amount of missing data is not as significant relative to the other regions of the world where cloud cover is a major issue. For example, Li [47] found that only 2872 daily snapshots were useful out of 3653 imageries in the Gulf of Maine. However, in this study, only 217 days of Chl-a had missing data covering more than 75% of the study area out of 6149 obtained daily images. The northwestern part of the study area was found to have significant gaps in the data. For this reason, that area, which covers ~5% of the total study area, was masked from the analysis. The average missing data across the study period for Chl-a was around 10%. Conversely, the average missing data of SST is ~2% of the study area after excluding the northwestern part of the study area.
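A minimal sketch of this gap-filling logic, assuming the daily fields have already been stacked into NumPy arrays (array names, shapes, and the synthetic demonstration are illustrative): missing pixels of each daily image are replaced with the corresponding pixel of that calendar month's multi-year composite.

```python
import numpy as np

def monthly_composites(daily, months):
    """daily: (T, H, W) array with NaN for missing pixels; months: (T,) array of
    month indices 1..12. Returns a (12, H, W) multi-year monthly mean composite."""
    clim = np.full((12,) + daily.shape[1:], np.nan, dtype=np.float32)
    for m in range(1, 13):
        block = daily[months == m]
        if block.size:
            clim[m - 1] = np.nanmean(block, axis=0)
    return clim

def fill_gaps(daily, months, clim):
    """Replace NaN pixels of every daily field with the corresponding pixel of
    that month's composite, as described in the text."""
    filled = daily.copy()
    for t in range(daily.shape[0]):
        gap = np.isnan(filled[t])
        filled[t][gap] = clim[months[t] - 1][gap]
    return filled

# Synthetic demonstration: 120 "days", 50 x 60 pixels, about 20% of pixels missing.
rng = np.random.default_rng(0)
daily = rng.random((120, 50, 60)).astype(np.float32)
daily[rng.random(daily.shape) < 0.2] = np.nan
months = rng.integers(1, 13, size=120)

filled = fill_gaps(daily, months, monthly_composites(daily, months))
print("missing before:", round(float(np.isnan(daily).mean()), 3),
      "missing after:", round(float(np.isnan(filled).mean()), 3))
```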
Empirical Orthogonal Function (EOF) Analysis The primary application of EOF is that it helps in understanding the spatial patterns of variability in spatiotemporal data by examining the EOF coefficient maps. Secondly, it can be used to reduce the dimension of the components by using the optimal number of components that explain the majority of the variability in the spatiotemporal dataset [48,49]. The EOF analysis was conducted over the coastal areas of the UAE, the AG Coast, Strait of Hormuz, and GO Coast (Figure 1) to examine the spatial variability of the SST and Chl-a concentration.
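Before the formal covariance-matrix formulation given next, the sketch below shows how such an EOF decomposition is commonly computed in practice, via a singular value decomposition of the anomaly matrix; it is not the authors' code, and all names are illustrative.

```python
import numpy as np

def eof_analysis(field, n_modes=3):
    """field: (M, N) space-time matrix with M time steps and N grid points
    (gap-free). Returns the leading spatial patterns (EOFs), their temporal
    expansion coefficients (PCs), and the fraction of variance explained."""
    anomalies = field - field.mean(axis=0, keepdims=True)  # remove the time mean
    # The SVD of the anomaly matrix is equivalent to the eigen-decomposition of
    # the covariance matrix: squared singular values give the eigenvalues, and
    # the right singular vectors give the eigenvectors (spatial patterns).
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eigenvalues = s**2 / (field.shape[0] - 1)
    explained = eigenvalues / eigenvalues.sum()
    eofs = vt[:n_modes]                    # (n_modes, N) spatial patterns
    pcs = u[:, :n_modes] * s[:n_modes]     # (M, n_modes) expansion coefficients
    return eofs, pcs, explained[:n_modes]

# Toy example: a 12-step periodic mode plus noise over 500 grid points.
rng = np.random.default_rng(1)
t = np.arange(204)
pattern = rng.random(500)
field = np.outer(np.sin(2 * np.pi * t / 12.0), pattern)
field += 0.1 * rng.standard_normal(field.shape)
eofs, pcs, frac = eof_analysis(field)
print("variance explained by the first three modes:", np.round(frac, 3))
```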
The raw data matrix F is arranged in a matrix format M × N, where M is the time series dimension and N is the space dimension. The covariance matrix R is calculated using Equation (1). Then, the eigenvalues are solved with Equation (2), which provides the information about the amount of variability explained by each component [48]. The highest three components were selected for this study as they explain more than three-quarters of the variability in both Chl-a and SST. ∆ is a diagonal matrix containing the eigenvalues λ i of R, where i ranges from 1 to p (the size of M, i.e., the length of the time series). The column vectors of C are the eigenvectors of R that correspond to the eigenvalues λ i and contain information about the spatial distribution of the eigenvalues. The raw data can be reconstructed from the EOFs and the eigenvectors (Equation (3)). The amount of variability explained by one EOF component a can be estimated as a fraction of the total variability using Equation (4). Correlation Analysis The Pearson correlation coefficient (PCC) statistical tool was used to evaluate the impact of SST on Chl-a concentration. If the value approaches 1/−1, it indicates that the relationship is strongly positive/negative, and if the coefficient is closer to 0, it indicates that the relationship between the variables is weak. Cross-correlation was conducted to assess the possible lag time of the impact of the SST over the formation of the Chl-a concentration. The mathematical formula used is obtained from Pearson [50], where n is the sample size, x i and y i are records of the variables (SST and Chl-a in this case), x̄ and ȳ are the average values of the variables, and S x and S y are the standard deviations of the variables. Corrected Seasonal Mann-Kendall Trend Test The corrected seasonal Mann-Kendall trend test was used to investigate the presence of a significant trend in the data. This test is a modified version of the original Mann-Kendall test to accommodate seasonally correlated data. The adjustment was used by Hirsch [51] and Libiseller [52] to reduce the seasonal autocorrelation in the dataset. The Mann-Kendall scores are first computed for each month separately, where sgn() is a sign function obtaining the sign of a real number, x ij and x ik are monthly series values for the periods j and k, respectively, and i represents the month. The variance for each month is then computed, where g i is the number of tied groups for the ith month and t ip is the number of observations in the pth group for the ith month. Then, the Mann-Kendall score and variance for the entire series are computed, where S i is the Mann-Kendall score of an individual month and m, the number of months in this study, is 12. Similarly, Var(S i ) is the variance of an individual month. The seasonally adjusted Mann-Kendall test statistic for the series (Z SK ) is then obtained. Finally, for the areas with a significant trend, the magnitude of the trend was computed using a linear model (y = α + βx). Moreover, trend analysis was conducted over the summer and winter months separately to assess the influence of seasonality.
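The monthly scores, pooled variance, and Z statistic outlined above can be made concrete with the following sketch; it is a simplified version that omits the covariance correction of Hirsch [51] and Libiseller [52] for serial dependence between seasons, and the example data are synthetic.

```python
import numpy as np
from math import erf, sqrt

def mk_score_and_variance(x):
    """Mann-Kendall score S_i and its variance Var(S_i) for one month's series,
    including the usual correction for tied values."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    for k in range(n - 1):
        s += np.sign(x[k + 1:] - x[k]).sum()
    _, counts = np.unique(x, return_counts=True)
    ties = float((counts * (counts - 1) * (2 * counts + 5)).sum())
    var = (n * (n - 1) * (2 * n + 5) - ties) / 18.0
    return s, var

def seasonal_mann_kendall(values, months):
    """Sum the monthly scores and variances (m = 12 seasons) and return the
    seasonal Z statistic with a two-sided p-value (normal approximation)."""
    values = np.asarray(values, dtype=float)
    months = np.asarray(months)
    s_total, var_total = 0.0, 0.0
    for m in range(1, 13):
        xm = values[months == m]
        if len(xm) > 1:
            s, v = mk_score_and_variance(xm)
            s_total += s
            var_total += v
    if s_total > 0:
        z = (s_total - 1) / sqrt(var_total)
    elif s_total < 0:
        z = (s_total + 1) / sqrt(var_total)
    else:
        z = 0.0
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return z, p

# Toy example: 17 years of monthly values with a weak upward trend plus noise.
rng = np.random.default_rng(2)
months = np.tile(np.arange(1, 13), 17)
values = 0.02 * np.arange(months.size) + rng.standard_normal(months.size)
z, p = seasonal_mann_kendall(values, months)
print(f"Z = {z:.2f}, p = {p:.4f}")
```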
Spatiotemporal Distribution of Chl-a and SST The long-term average Chl-a concentration was unevenly distributed across the Arabian Sea. The Chl-a concentration was high in the coastal areas and the Strait of Hormuz (Figure 5A). The coastal hotspots of Chl-a concentration are usually created due to the loading of nutrients with the discharge from the wadis and the artificial loading of nutrients from agricultural and aquaculture activities around the shores [3,5]. The areal average concentration of Chl-a in the Strait of Hormuz was 2.8 mg m−3, whereas the areal average concentration across the entire study area was 1.7 mg m−3. The seasonal distribution shows that February and March are the months with the highest Chl-a concentration, especially in the Strait of Hormuz and GO (Appendix A). The main reasons for such a high concentration of Chl-a in the Strait of Hormuz are seasonal upwelling, mixing of the AG and GO, and the high concentration of pollutants and river discharge from the northern coast [53]. Over the study period of 17 years, 2008 and 2009 showed peak concentrations of Chl-a, with average concentrations of 2.4 mg m−3 and 2.1 mg m−3, respectively (Appendix C). This period includes the red tide events that were reported by Richlen [54]. Moreover, the seasonal mean distribution showed a distinct pattern between winter and summer. The winter had a higher concentration of Chl-a, which was clearly observed in the Strait of Hormuz (Figure 5C). However, in summer, the coastal areas exhibited a relatively high concentration of Chl-a (Figure 5E). Unlike the Chl-a concentration, the spatial distribution of the average SST for the period 2003 to 2019 shows a uniform linear increase in the west-east direction, as shown in Figure 5B. The GO experienced an average SST of about 26 °C, which makes it the hottest region in the study area. The areal average SST over the entire study area was around 25 °C. A difference of ~3 °C was observed between the hottest region (GO) and the coldest region (northwestern AG) in the long-term average of SST. The monthly distribution of SST showed very little spatial variability (Appendix B). The annual averages suggest that the hottest years were 2018 and 2019, with average SSTs of 26.3 °C and 26.4 °C, respectively (Appendix D). The AG and GO experience different winter and summer temperature patterns. The southern AG was warmer than the GO in the summer (Figure 5F), whereas the GO was warmer than the AG in the winter (Figure 5D). The temporal distribution of the Chl-a concentration suggests that different parts of the Arabian Sea express different seasonal variability. The shape of the seasonal cycle appears to be a smooth sinusoidal curve with relatively smaller amplitude in the case of the AG coast of the UAE, and a more pointed shape with higher variability for both the Strait of Hormuz and the GO. The UAE's coast across the AG experiences small seasonal variability, with the peak concentration seen in November and the lowest concentration observed in May (Figure 6A). The highest variability is seen in the time series of the GO, with the peak concentration observed in February and the lowest reported in May (Figure 6C).
In the summer of 2012 (April, May, and June), the entire region experienced the lowest concentration of Chl-a (Figure 6). After that point, the Chl-a concentration was above normal in winter and below normal in the summer, especially in the Strait of Hormuz and the GO. The temporal distribution of SST reveals different behaviors among the three sample regions. The UAE's AG coast showed a smooth sinusoidal seasonal cycle with the highest variability between winter and summer (Figure 7). The SST ranges from 19 °C in January to as high as 31 °C in September. The GO showed a bimodal seasonal cycle with peaks in June (30 °C) and September (30 °C) and February as the coldest month with 21 °C (Figure 7). This bimodal cycle is due to the decrease of the temperature during the southwest monsoon that occurs from June to September [55]. The time series showed that the Arabian Sea, in general, experienced cooler than usual winters between 2005 and 2008. Towards the end of the study period (2014 to 2019), the summers became hotter than the typical summer. The northeast monsoon is the main reason for cool SST across the entire AG and the GO from November to March [56]. Variability in Chl-a and SST The best way to display the EOF components as meaningful indicators is to represent them as homogeneous correlation maps. The homogeneous correlation map of the EOFs' first component is the correlation of the raw data with the expansion coefficient of the first component of the EOFs [57]. With regard to the mode of variability of the Chl-a concentration, the first three components captured 74% of the variability on average. The first component, which explains 42% of the variability, had a strong relationship in the Strait of Hormuz and GO, as shown in Figure 8A. The first component is related to the northeast monsoon winds that move heat from the surface of the Arabian Sea, which occurs from early November to March. Lower solar radiation and increased salinity create convective mixing that drives upward transport of nutrients [2]. The availability of nutrients with optimal atmospheric conditions results in excessive growth of phytoplankton biomass.
The second component, with an average variability of 24%, had the reverse impact of the first component, with the western coast of the UAE affected significantly more than the rest of the area (Figure 8B). The spikes of Chl-a concentration over the coast of the UAE (AG) at the end of 2004, 2007, and 2018 to 2019 were related to the second component (Figure 8D). The third component, responsible for 8% of the variability, was highly related to the Strait of Hormuz and captured the peaks in Chl-a concentration observed in 2008 to 2009 (Figure 8D). The 2008 to 2009 algal blooms were catastrophic to the infrastructure of the countries in the AG, especially in the water supply system and tourism industry. The blooms dissipated in August 2009, about nine months after they first appeared on the coast [54]. The first three EOF modes of SST captured more than 96% of the variability in the dataset. The first EOF component, which represents the annual seasonal component of SST, accounts for more than 94% of the variability, as shown in Figure 9A. The entire study area showed a strong homogeneous correlation coefficient of more than 0.9. This means that the first EOF component is highly influenced by the annual periodicity that reaches its peak in summer and its lowest in winter. The spatial variability is very small, indicating that the seasonal variability is uniform across the entire area. This shows that the annual variability of the SST (which represents 95% of the total variability) exhibited very small spatial variability across the coasts of the UAE. This is evident in the fact that the entire UAE coast demonstrates an interquartile range of only 0.7 °C in the long-term average SST. However, the second EOF component of SST, accounting for 1.07% of the total variance, showed a significant spatial variability (Figure 9B). The UAE's AG coast is positively correlated, whereas the GO coast is negatively correlated. The second component seems to capture the impact of the southwest monsoon with a spatial variability that is oriented in the east-west direction. The southwest monsoon decreases the temperature of the GO, causing a bimodal cycle. The southwest monsoon does not have a significant impact on the SST of the AG; on the contrary, SST increases during that period. This result is in line with the findings of Nandkeolyar [56]. The third component of SST also revealed a significant spatial variability, whereby the Strait of Hormuz is negatively correlated and the rest of the coasts are positively correlated.
The spatial variability is oriented in the north-south direction (Figure 9C). Correlation of Chl-a and SST The correlation coefficient of Chl-a and SST indicates that around 75% of the study area exhibits a significant correlation, as shown in Figure 10A. The coastal area of western UAE showed a significant positive correlation, which suggests that the SST affects the concentration of Chl-a. However, more than half of the study area indicated that SST negatively influenced the Chl-a concentration in the AG and GO, whereas more than a quarter of the area showed a positive relationship. The southern coast of the AG showed a significant positive correlation coefficient of 0.44. Additionally, cross-correlation analysis revealed that the best correlation between Chl-a and SST was found without any lag, i.e., the zero-lag case gave the largest area with a significant correlation coefficient (Figure 10B). The spatial distribution of the correlation indicates that the UAE's AG coast showed a positive correlation between the Chl-a concentration and the SST, especially the coasts near the Abu Al Abyad and Sir Baniyas islands. However, the northeastern coast of the UAE (from Dubai northward) showed a significant negative correlation covering around one-third of the study area.
Along the coasts of the GO and the Strait of Hormuz, the correlation was almost uniform, with more than 80% of the area showing a negative correlation. Detailed summary statistics of the correlation in the three regions are shown in Table 1. Moreover, the relationship between the Chl-a concentration and the SST was highly dependent on the bathymetry of the seawaters. This relationship is due to the difference in gaining the solar heat between the shallow (warmer) and deeper (colder) portions of the sea. The deeper sea areas have greater thermal memory; in turn, they require a longer time to heat and never reach the optimum temperature for algal bloom (Chl-a) growth. Therefore, the surface water in the middle (deep) sea generally gains lower temperature than the surface water near the shore [16]. All the areas that showed a significant positive correlation were in the shallow coastal areas. On the contrary, the deeper areas seem to have an inverse relationship between Chl-a and SST (Figure 11). In the areas where the depth of the sea is less than 20 m below sea level, the average correlation coefficient between Chl-a concentration and SST was 0.43, whereas an average correlation of −0.34 was found in waters deeper than 40 m. Moreover, the areas that did not exhibit a significant correlation have an average depth of 33 m and a median of 28 m below the sea level. This means an increase in SST increases the concentration of Chl-a in shallow water with less than 20 m depth, while an increase in SST tends to decrease the concentration of Chl-a in the deeper waters below 40 m sea level. The full relationship between the bathymetry and the correlation of the Chl-a and SST concentration is shown in Figure 11B.
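A small sketch of this depth-stratified summary, assuming a per-pixel Chl-a/SST correlation map and a co-registered bathymetry grid are already available as NumPy arrays; the masks, thresholds, and synthetic example below are illustrative.

```python
import numpy as np

def correlation_by_depth(corr_map, depth_m, shallow=20.0, deep=40.0):
    """Average the per-pixel Chl-a/SST correlation over shallow (< shallow m) and
    deep (> deep m) water; depth_m holds positive depths in metres, NaN over land."""
    valid = ~np.isnan(corr_map) & ~np.isnan(depth_m)
    shallow_mask = valid & (depth_m < shallow)
    deep_mask = valid & (depth_m > deep)
    mean_shallow = corr_map[shallow_mask].mean() if shallow_mask.any() else np.nan
    mean_deep = corr_map[deep_mask].mean() if deep_mask.any() else np.nan
    return mean_shallow, mean_deep

# Synthetic example mimicking the reported tendency
# (positive correlation in shallow water, negative in deep water).
rng = np.random.default_rng(3)
depth = rng.uniform(0.0, 100.0, size=(200, 300))
corr = np.where(depth < 30.0, 0.4, -0.3) + 0.05 * rng.standard_normal(depth.shape)
print(correlation_by_depth(corr, depth))
```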
The relationship between bathymetry, Chl-a, and SST follows a U-shaped curve, with the shallow and very deep seas having higher Chl-a and SST, respectively, as shown in Figure 12. The shallow waters experienced the highest Chl-a concentration in terms of average and median values. The deeper areas were observed to have the highest SST (Figure 12). Eventually, as the depth increases, both Chl-a and SST start to decline rapidly until 20 m below sea level. Then, the Chl-a concentration remains relatively stable while SST decreases until it reaches 24.7 °C at a depth of 70 m. Then, the SST begins to increase, reaching more than 26 °C at depths of >95 m, while the Chl-a also rises but at a lower rate, reaching 1.8 mg m−3 from 1.4 mg m−3. The deepest region, the GO, which has the highest SST, was also one of the areas with a high concentration of Chl-a. Trend Analysis Mann-Kendall trend analysis was carried out to investigate the possibility of a significant trend in both variables (Chl-a and SST) over the span of 17 years. The Chl-a showed a decreasing trend in most areas, except on the coast of Abu Al Abyad Island, which is located in the western part of the UAE. This is mainly due to nutrient leaching from the orchards' soil and aquafarming drainage that contains nutrients useful for algae growth. Overall, 21% of the study area had a significant trend in Chl-a concentration. The majority (95%) of this area experienced a decreasing trend in the concentration of the Chl-a, with an average rate of decline of −0.28 mg m−3 per decade (Figure 13A). The decreasing trend appears to increase in areas with higher average Chl-a concentration (Figure 13B). This suggests that the concentration of Chl-a is decreasing, and at a higher rate in areas with a relatively high concentration, during the last two decades. However, the areas with the highest concentration of Chl-a (Strait of Hormuz and the GO) did not experience a significant trend. In the places where the trend was increasing (Abu Abyad Island), the trend rate increased as the average concentration increased. This is mainly due to the agricultural and aquaculture activities on Abu Abyad Island, and the results suggest that these activities have increased over time (Figure 13B).
The trend test indicated that 52% of the study area, mainly located in the northern part, experienced a significant positive SST trend (Figure 14). The average rate of increase in these regions is estimated as 0.91 °C per decade. Most of the areas that showed an increasing trend are places with relatively lower mean SST. As the long-term average SST decreases, the rate of trend increases sharply, as shown in Figure 14B. This means that the cooler regions of the AG are experiencing an increase in temperature at an alarming rate. The trend in both the winter and summer seasons has been further analyzed to investigate the seasonality of the trends. The time series is divided into two six-month periods of summer and winter. Summer months (May to October) are categorized as very hot and humid, and the winter months (November to April) are characterized as relatively cooler (Appendix B). The seasonal trend analysis of the Chl-a indicated that 18% and 13% of the study area have a significant trend for the months of summer and winter, respectively. Less than 1% of the area with a significant trend showed a positive trend. Abu Abyad Island and its surroundings showed an increasing trend in both seasons. This further supports the aforementioned reasoning that the higher concentration of Chl-a near the island is not related to climatological phenomena but to activities on the island. The spatial distribution showed that the trend in the winter is concentrated in the coastal area located between Qatar and the UAE, whereas in summer, the trend is experienced further from the seashore (Figure 15A,B). The rate of decline was higher during summer, with an average rate of −0.41 mg m−3 compared to the average rate of −0.22 mg m−3 in winter. Similar to the results of the trend analysis, the areas with higher average Chl-a concentration (Strait of Hormuz and GO, as shown in Figure 2B) did not show a significant trend in either season over the last two decades. The seasonal trend analysis of the SST showed that the winter months had no significant trend over the entire area (Figure 16A). On the contrary, the summer was the dominant season of the trend. The summer months demonstrated an increasing trend in more than two-thirds of the study area (Figure 16B). The results indicated that the summer months are becoming hotter at a rate higher than 1.0 °C per decade in half of the area.
Around 20% of the study area (almost all of it located in the northeastern tip) exhibits an increasing trend rate higher than 1.5 °C per decade. The regions that are warming at a higher rate are the areas with relatively lower average SST. Previous studies also reached a similar conclusion, indicating that the summer months are becoming hotter at a much higher rate [56,58]. Piontkovski [58] showed that the trend of SST in June and July was more than double the trend of the average annual SST.
Furthermore, the correlation coefficient and the bathymetry of the seas showed a strong relationship. The shallow areas had a strong positive correlation between the SST and Chl-a, whereas the deeper areas were inclined to have a negative correlation. Lastly, trend analysis was carried out to investigate the presence of significant trends using the correlated seasonal Mann-Kendall trend test. The Chl-a data showed the presence of a trend in just 21% of the study area, of which 95% indicated a decreasing trend. Most of the area with a decreasing trend is located in the southern region, which is closer to the coasts of the UAE and Qatar. The rate at which the trend is decreasing is also related to the average Chl-a concentration. Higher average values of Chl-a concentration are associated with a higher rate of decline, and vice versa. The SST also showed the presence of a significant trend in more than 52% of the study area. However, in this case, an increasing trend is observed. Similarly, the rate of the trend showed an inverse relationship with the average SST: the higher the average SST, the smaller the rate of increase, and vice versa.
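The conclusions above rest on the correlated seasonal Mann-Kendall test together with per-decade rates from Sen's slope. As a rough illustration only, the sketch below applies the basic (uncorrected) Mann-Kendall test and Sen's slope to a synthetic summer-only SST series for a single pixel; the serial-correlation and seasonal corrections used in the study, and all variable names and numbers below, are my own simplifications rather than the authors' code.

```python
# Minimal sketch: basic Mann-Kendall trend test with Sen's slope for one pixel's
# summer-only (May-October) monthly series. Not the study's correlated seasonal test.
import numpy as np
from scipy.stats import norm

def mann_kendall(t, y):
    """Return the Mann-Kendall S statistic, two-sided p-value, and Sen's slope (y per t)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(y)
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # tie correction omitted in this sketch
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    slopes = [(y[j] - y[i]) / (t[j] - t[i]) for i in range(n - 1) for j in range(i + 1, n)]
    return s, p, float(np.median(slopes))

# Hypothetical monthly SST series for one pixel, 2003-2019, with a ~1.2 degC/decade trend.
rng = np.random.default_rng(0)
months = np.arange(17 * 12)
years = 2003 + months / 12.0
sst = 28 + 3 * np.sin(2 * np.pi * months / 12) + 0.12 * (years - 2003) + rng.normal(0, 0.3, months.size)

summer = (months % 12 >= 4) & (months % 12 <= 9)      # May (index 4) through October (index 9)
s, p, slope = mann_kendall(years[summer], sst[summer])
print(f"S={s:.0f}, p={p:.3f}, Sen's slope ~ {10 * slope:.2f} degC per decade")
```

Multiplying the yearly Sen's slope by ten gives a per-decade rate comparable in form to the values reported above.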
The main limitations of the study are the missing data due to cloud cover and the relatively short period of the dataset (2003 to 2019). Even though an established gap-filling technique was applied, a relatively large amount of missing data can introduce uncertainty into the results. The conclusions of this research were similar to those of previously conducted studies. However, the authors feel that these limitations are worth mentioning.
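For readers unfamiliar with the EOF analysis referred to in the conclusions, the following is a minimal, self-contained sketch of the usual SVD-based decomposition of a space-time anomaly field. The array shapes and data are placeholders rather than the study's dataset, and preprocessing steps such as gap filling and removing the seasonal cycle are omitted.

```python
# Minimal sketch: EOF decomposition of a (time, lat, lon) field via SVD.
import numpy as np

rng = np.random.default_rng(1)
chl = rng.gamma(2.0, 0.5, size=(204, 40, 60))        # placeholder data: 17 years x 12 months
chl[:, :5, :5] = np.nan                               # pretend one corner is land

nt = chl.shape[0]
flat = chl.reshape(nt, -1)                            # (time, space)
ocean = ~np.isnan(flat).any(axis=0)                   # keep pixels that are always valid
anom = flat[:, ocean] - flat[:, ocean].mean(axis=0)   # remove each pixel's temporal mean

u, s, vt = np.linalg.svd(anom, full_matrices=False)   # anom = U S Vt
var_explained = s**2 / np.sum(s**2)
pcs = u * s                                           # principal-component time series
eof1 = np.full(flat.shape[1], np.nan)
eof1[ocean] = vt[0]                                   # first spatial mode, mapped back to the grid
eof1 = eof1.reshape(chl.shape[1:])

print("variance explained by the first two modes:", var_explained[:2])
```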
14,200.2
2021-06-23T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Cylindrically symmetric self-sustaining solutions in some models of nonlinear electrodynamics

In this article, we discuss the extension of the Melvin solution for the geon to some models of non-linear electrodynamics with the exact form of the Lagrangian, in particular, for a conformally invariant model (CNED), whose Lagrangian depends on the second and fourth order invariants of the electromagnetic field tensor.

Introduction

Electrovacuum solutions of the Einstein-Maxwell equations are of considerable interest to the modern theory of gravity and cosmology in connection with their possible applications for describing compact astrophysical objects or early stages of the evolution of the Universe. One of the most interesting types of electrovacuum solutions describes a self-gravitating electromagnetic field configuration without a source, called a geon [1]. A cylindrically symmetric static solution of the Einstein-Maxwell equations, describing the geon as a set of purely electric or magnetic field lines connected by their own gravitational interaction, was obtained by Melvin [2]. A number of generalizations of Melvin's solution are currently known; for instance, in [3] Melvin's solution was extended to the case of a non-zero cosmological constant, and [4] gives an extension for string theory. A rather interesting application of Melvin's solution was found in describing magnetized black holes [5][6][7][8]. In [9] a solution of the Dirac equation in Melvin space-time is obtained, which can be applied to describe fermions in extremely magnetized neutron stars, and [10] presents a non-Abelian solution corresponding to a "hairy" black hole, whose space-time is asymptotic to the Melvin background. The paper [11] presents the rotating Melvin-like solution for the perfect fluid with a circular electric current, and [12] gives a multidimensional extension coupled to the scalar fields. The search for a nonlinear generalization of electrodynamics in vacuum seems to be one of the promising problems of modern field theory. A significant part of the new results in this area is related to the study of effects in empirical models of electrodynamics based on the requirement of regularization of the field energy of point sources. Along with one of the first models leading to such a regularization, Born-Infeld electrodynamics [13], a number of alternative models with a similar property have recently been proposed [14][15][16][17][18]. Some of these models [14] lead to very peculiar solutions, such as charged black points, which are compact objects with an event horizon coinciding with a singularity. Another type of vacuum nonlinear electrodynamics models [19,20] is based on the principle of maximum correspondence to the group symmetries of Maxwell's electrodynamics; in particular, such models inherit conformal symmetry and are called conformally invariant models (CNED). For the listed models, the exact form of the Lagrangian is known, which allows us to consider the problem of searching for a generalization of the Melvin solution in these cases of vacuum nonlinear electrodynamics. In contrast to [21], where a generalization of the Melvin solution to the case of nonlinear electrodynamics was obtained by transforming the Reissner-Nordström solution, this paper proposes integrating combinations that allow the Einstein gravity and nonlinear electrodynamics equations to be reduced to relatively simple ordinary differential equations.
The integration of these equations can be carried out not only with a boundary condition asymptotic to the Melvin solution, which potentially makes it possible to obtain new solutions with a given behavior at spatial infinity. Also, in contrast to [22], we will not require the existence of a regular axis of symmetry and flat or string-like asymptotic at spatial infinity. On contrary, in the context of this paper it seems to be interesting, to consider the singular field config-urations in the models of vacuum nonlinear electrodynamics with the regularizing properties. The paper is organized as follows: In Sect. 2, we discuss equations of vacuum nonlinear electrodynamics and Einstein's gravitation in the case of cylindrical symmetry. In this section we note some conditions for the existence of solutions to these equations. Section 3 is devoted to Melvin-type solution generalization for an arbitrary conformally invariant vacuum nonlinear electrodynamics (CNED). Section 4 discusses self-sustaining longitudinal, radial, and azimuthal configurations of a pure magnetic field in the models of vacuum nonlinear electrodynamics with the regularizing properties and with the spatial asymptotics to the Melvin solution. In the last section we summarize our results. Equations of vacuum non-linear electrodynamics and gravity in the case of cylindrical symmetry Let us construct equations for the gravitational and electromagnetic fields in the case of an arbitrary Lorentz invariant model of vacuum nonlinear electrodynamics, taking into account the cylindrical symmetry of the field configuration. The action functional for arbitrary vacuum nonlinear electrodynamics (NED) coupled with Einstein's gravity has the form: where R is the scalar curvature, g is the determinant of the metric tensor, L is the Lagrangian density, depending on the invariants of the electromagnetic field tensor J 2 = F ik F ki and J 4 = F ik F kl F lm F mi . The action (1) leads to Einstein's gravity equations with the stress-energy tensor for vacuum nonlinear electrodynamics: where F (2) ik = F im F m· ·k is the second power of the electromagnetic field tensor. The equations for the electromagnetic field, following from the action (1), in the absence of charges and currents can be expressed with the Bianchi identities and the divergence free equation for an auxiliary tensor Q kn with can be expressed with the NED model Lagrangian in the form where F kn (3) = F kl F lm F mn is the third power of the field strength tensor. In following, we will focus on the study of self-sustained field configurations with cylindrical symmetry, therefore we will take into account that in cylindrical coordinates, static axisymmetric solutions of Einstein's equations are given by the Weyl metric [23,24]: where in general case the metric functions α, β, γ depends on coordinates ρ and z, however, we restrict ourselves to a particular case, assuming that the metric functions depend only on the radial coordinate ρ. For greater convenience in calculations, we will use the physical representation for the component of the electromagnetic field tensor: where E ρ ,E ϕ ,E z and B ρ ,B ϕ ,B z are the coordinate components of the electric and magnetic field. In this case, the invariants of the electromagnetic field can be calculated using the Euclidean metric, which greatly simplifies the expressions for them. 
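As a side check of the conventions above (my own, not part of the paper), the invariants J2 = F_ik F^ki and J4 = F_ik F^kl F_lm F^mi can be evaluated symbolically for a longitudinal configuration with E and B both along the z axis. The CNED parameter η = J2/√(2 J4) introduced in the next section should then equal +1 for a pure electric field and −1 for a pure magnetic field. The metric signature, index placement, and symbol names below are assumptions of this sketch, which works directly with the flat metric rather than the physical (Euclidean) representation used in the paper.

```python
# Minimal sketch: field invariants for E and B along z, flat metric diag(1, -1, -1, -1).
import sympy as sp

E, B = sp.symbols('E B', positive=True)
g = sp.diag(1, -1, -1, -1)            # Minkowski metric, coordinates (t, x, y, z)

F = sp.zeros(4, 4)                    # covariant field tensor F_ik
F[0, 3], F[3, 0] = E, -E              # electric field along z
F[1, 2], F[2, 1] = -B, B              # magnetic induction along z

F_up = g * F * g                      # contravariant F^ik (indices raised with the flat metric)
J2 = (F * F_up).trace()               # F_ik F^ki
J4 = (F * F_up * F * F_up).trace()    # F_ik F^kl F_lm F^mi

eta_param = sp.simplify(J2 / sp.sqrt(2 * J4))
print(sp.simplify(J2))                                # should give 2*E**2 - 2*B**2
print(sp.simplify(J4))                                # should give 2*B**4 + 2*E**4
print(sp.simplify(eta_param.subs(B, 0)))              # +1 for a pure electric field
print(sp.simplify(eta_param.subs(E, 0)))              # -1 for a pure magnetic field
```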
For the chosen space-time line-element, the left-hand side of the Einstein equations take a form: where the prime denotes the derivative with respect to the radial coordinate. In what follows, except for the model of conformally invariant vacuum nonlinear electrodynamics (CNED), we will focus on search only pure magnetic solutions (B = 0, E = 0 and as a consequence J 2 2 = 2J 4 ) of the Eqs. (2) and (4). As follows from [21], the existence of self-consistent solutions of theses equations is possible only for certain field configurations, namely: radial, azimuthal and longitudinal fields. For radial field configuration B ρ = 0, B ϕ = 0, B z = 0, the stress-energy tensor (3) has additional symmetry, expressed as equality of some components among themselves: where, for brevity, we have introduced the notation for the coefficients For an azimuthal magnetic field B ρ = 0, B ϕ = 0, B z = 0, the relation between the stress-energy tensor components is different And, finally, for the most interesting case of longitudinal field B ρ = 0, B ϕ = 0, B z = 0 the additional symmetry of the stress-energy tensor is the following: To obtain solutions to the equations of nonlinear electrodynamics and gravitation in all three of the above cases, it is necessary to find an integrating combinations for the metric functions α,β,γ , whose substitution into the Eq. (2) ensures consistency of additional symmetries for the right and left sides of these equations. This means that the symmetry of T ·b a· , say, , and to satisfy this condition it is necessary to find some relation between the metric functions. Before proceeding to the search for such relations in the general case, let us consider in more detail the case of a longitudinal magnetic field for a special model of nonlinear electrodynamics, called conformally invariant electrodynamics (CNED), for which an analytical solution similar to the Melvin's geon can be obtained. Melvin-type solution for CNED Models of conformally invariant electrodynamics are based on the principle of closest correspondence to a number of properties of Maxwell's electrodynamics. Like Maxwell's electrodynamics, CNED models have a zero trace of the stress-energy tensor and are characterized by the same group symmetries as Maxwell's theory, but unlike it, they predict birefringence in vacuum. Another advantage of CNED is the lack of a dimensional parameter needed to describe non-linearity in other models, such as Born-Infeld electrodynamics. In the general case, the CNED Lagrangian can be represented as: where W is an arbitrary function of the dimensionless ratio η = J 2 / √ 2J 4 , varying from η = −1 for a pure magnetic field, to η = 1, for a pure electric field. When W (η) ≡ 1 the CNED corresponds to Maxwell electrodynamics. Let us consider an electric field with strength E and a magnetic field with inductance B directed along the z axis. For such a longitudinal configuration, the electromagnetic field equations (4) are easily integrated and lead to the expressions: where E 0 and B 0 are integrating constants, and here and below, the prime on the model function W corresponds to the derivative with respect to its argument η. It can be noticed, that the expression in square brackets of the second equation in (14) can be fully expressed in terms of the ratio λ = B/E, because of which the ratio of the magnetic field induction to the electric field strength is constant at any point of space. 
This is easy to verify, if we divide the second equation in (14) by the first equation and use the parameter λ as a new variable where λ 0 = B 0 /E 0 and the argument of the model function W (η) should be expressed in terms of λ using the expression: Equation (15) implies that for each model of conformally invariant nonlinear electrodynamics with a given function W , the ratio λ is determined only by the integration constant λ 0 and does not depend on the coordinates. To determine the explicit form of the electric and magnetic fields, it is necessary to supplement algebraic equations (14) with gravitational equations for metric functions, which can be obtained by using of (2) and (8) and the stress-energy tensor (3). We take into account that for the considered field configuration, additional symmetry for the components of the stress-energy tensor is the same as for the longitudinal case for a pure magnetic field (12), and the components themselves have the form: To ensure that the right and left sides of the Einstein equations correspond, it is necessary to introduce additional restrictions on the metric functions. Using, (8), it can be established that the symmetry of the left side of the equations (2), corresponding to the symmetry of the components (17), will be ensured if we accept: where c 0 is an arbitrary constant that can be used to normalize coordinates, and the prime over metric function α means the derivative with respect to ρ. In this case, due to the energy conservation law, there is only one independent Einstein equation, a more convenient form of which is: Using the first of the relations (14) for the electric field, the Eq. (19) can be reduced to the form of an autonomous equation: whose solution α = ln(1 + ρ 2 /R 2 ) differs from the classical Melvin solution only in the choice of the normalization constant R for the radial coordinate Thus, the geon-like solution corresponding to the longitudinal electric and magnetic fields in an arbitrary CNED model coincides in form with the classical Melvin solution, despite the significant nonlinearity of these models. However, for a number of models other than CNED, essentially non-linear properties can manifest themselves differently, and it is interesting to find out how much in this case the solutions will differ from the Melvin geon. Therefore, in the next section, we turn to a description of purely magnetic field configurations in such models. Longitudinal field Let us consider a longitudinal magnetic field B ρ = B ϕ = 0, B z = 0, E = 0 in an arbitrary model of nonlinear electrodynamics. In this case, it is easy to obtain the first integrals for the equations of the electromagnetic field (4), which have the following form where B 0 is the constant of integration with the dimension of the magnetic field induction. To ensure the symmetry of the right and left sides of the Einstein equations in the case of a longitudinal magnetic field (12), one should take a relation between the metric functions similar to (18), as a result of which the expressions (2), (8) and (12) leads to only one non-trivial linearly independent equation: Let us carry out normalization for the magnetic field induction b = B z /B 0 , the coordinate r = ρ/R, where R = 2/B 0 , and also for the coefficients depending on the choice of the nonlinear electrodynamics model U = 4π X and V = 4π Y/B 2 0 , in terms of which the Eqs. (23) and (22) can be represented as: where the prime means the derivative with respect to the normalized coordinate r . 
It can be easily verified that for Maxwell's electrodynamics U = 1, V = b 2 /2, and in this case the general solution of Eq. (24) can be obtained in analytical form: where c 1 and c 2 are integration constants, a particular choice of which, c 1 = 4 and c 2 = 0, leads to the classical representation of Melvin's solution: α = ln(1 + r 2 ) and b = 1/(1 + r 2 ) 2 . Let us now search for solutions of Eq. (24) for some models of vacuum nonlinear electrodynamics which lead to the regularization of the point-charge field. For models such as Born-Infeld electrodynamics [13], logarithmic [14,15], exponential [16,17] and rational electrodynamics [18], the coefficients for Eq. (24) are presented in Table 1. In view of the significant nonlinearity, Eq. (24) was integrated numerically. The requirement of correspondence between the solutions of these equations in the case of nonlinear electrodynamics and the solution for Maxwell's electrodynamics (25) at large values of r → ∞ was taken as the boundary condition. Figure 1 shows the dependence of the magnetic field induction on the coordinate (solid line), obtained as a result of numerical integration for Born-Infeld electrodynamics at the value of the parameter k = 1.345. The dashed line in the figure shows Melvin's solution (25) with c 1 = 4 and c 2 = −3, which was taken as the boundary condition. Due to the nonlinear features of vacuum electrodynamics, the solution has a pronounced resonant form, corresponding to the concept of geons as particle-like solutions. Figure 2 shows the sharpening of the form of the soliton-like solution for the magnetic field induction with increasing values of the parameter k for Born-Infeld electrodynamics. The plane with k = 0, filled in green, corresponds to Melvin's solution. With an increase in this parameter, not only does a sharp increase in the amplitude occur, but the maximum also shifts. (Table 1 notes: for Born-Infeld electrodynamics the model parameter a has the dimension of the inverse magnetic field induction, while p is dimensionless; a particular case for p = 1 is considered in [18]. In all expressions, the dimensionless parameter k = a B 0 is related to the integration constant B 0 .) It is interesting to note that for the other models presented in Table 1 the soliton-like nature of the solution of Eq. (24) is preserved, differing from that shown in Fig. 1 only quantitatively. Let us pass to the consideration of the radial configuration of the field.

Radial field

Now suppose that only the radial component of the magnetic field is nonzero: B ρ ≠ 0, B ϕ = B z = 0, E = 0. In this case, the non-trivial equations for the electromagnetic field (4) are linear and can be easily integrated, which leads to an expression that does not explicitly depend on the choice of the nonlinear electrodynamics model: where B 0 , as previously, is an integration constant. To ensure additional symmetry of the right and left sides of the Einstein equations, which follows from (9) for the radial field configuration, it is necessary to accept the following relations between the metric functions: where the prime denotes the derivative with respect to the coordinate ρ and the constants c 3 and c 4 can be used for normalization.
For such relations between metric functions, the Einstein equations (2), with the account to the expressions (8) and (9), can be reduced to one non-trivial independent equation: 26) and (28), similarly the way it was done for the longitudinal magnetic field in the previous section, as a result of which these equations take the form: where b = B ρ /B, the derivatives are taken with respect to the coordinate r = ρ/R with R = 2/B 0 , and the coefficients V for various models of vacuum nonlinear electrodynamics are listed in the Table 1. The solution of the Eq. (29) in the case of Maxwell's electrodynamics has the form: where c 5 and c 6 are integration constants, which, in contrast to the longitudinal case (25), can be chosen so that the magnetic field induction will be singular and will tends to infinity for the two values of the radial coordinate r h = −c 6 ± 4/|c 5 | ≥ 0. It would be interesting to use such a Maxwellian solution with two singularities as an asymptotic boundary condition at r → ∞ for the Eq. (29) in the case when nonlinear models of electrodynamics are considered, and to find out whether it is possible to suppress singularities due to the regularizing properties of these models. The result of numerical integration of the equations (29) for Born-Infeld electrodynamics at k = 1, shown in the Fig. 3, indicates that the singularities in the solution are preserved (one should pay attention to the logarithmic scale on the graph). For other models of nonlinear electrodynamics listed in the Table 1, the solution differs from that presented only quantitatively. Azimuthal field Let us consider the last possible configuration corresponding to the azimuthal magnetic field B ρ = B z = 0, B ϕ = 0, E = 0. As previously, we introduce the restrictions on metric functions which allow us to comply the symmetry of the right and left sides of the Einstein equations, which follows from (11) where c 7 and c 8 are constants and prime denotes the derivative with respect to ρ. The nontrivial equation for the metric function, following from Einstein equations, taking into account of (11) and (31), takes the form while the electromagnetic field equations (4) for the azimuthal case lead to the expression: where B 0 is a constant. As before, it is convenient to normalize the values by taking: r = ρ/R, where R = c 7 = 2/B 0 , and the definitions for b, U , V are the same as for the radial and longitudinal configurations. It is also quite convenient to introduce a new variable for the metric function A = e α , in terms of which the Eqs. (32) and (33) take the form: where the prime denotes the derivative with respect to the normalized coordinate r and the coefficients U and V for vacuum nonlinear electrodynamics models are listed in the Table 1. Obviously, in the general case, one of the trivial solutions of these equations is A ≡ const, which corresponds to a uniform magnetic field. In the case of Maxwell electrodynamics U = 1 and V = b 2 /2 the first integral of the Eq. (34) is A A + 4/A = c 9 , where c 9 is an arbitrary constant. The subsequent solution for A(r ) can be obtained analytically, but it has an implicit form, which makes it difficult to analyse its properties. Therefore, we restrict ourselves to the case where c 9 = 0 for which: where c 10 is the integration constant, and when it is positive, the magnetic field induction will be singular. It is interesting to find out whether this singularity can be suppressed in the case of nonlinear electrodynamics. 
To do this, we use (35) as an asymptotic boundary condition at r → ∞ and perform the numerical integration of the equations; the resulting dependence on the parameter k is shown in Fig. 4. As follows from the simulation results, the singularity is not suppressed even with an increase in the parameter k associated with the nonlinearity.

Conclusion

The article considered cylindrically symmetric solutions for self-gravitating electromagnetic systems in various models of nonlinear vacuum electrodynamics in the absence of field sources. For conformally invariant nonlinear electrodynamics, the case of a field configuration with longitudinally directed electric and magnetic fields was described. It was shown that in this case the solution describing the field configuration, up to a rescaling of the radial coordinate, coincides with the Melvin solution for an arbitrary choice of the CNED model. For non-linear electrodynamics models with regularizing properties, such as Born-Infeld electrodynamics, pure magnetic field configurations depending only on the radial coordinate have been investigated. For an arbitrary Lorentz-invariant model of nonlinear electrodynamics of vacuum, equations were obtained for the metric functions of cylindrically symmetric space-time and the magnetic field induction, in the cases of longitudinal, radial, and azimuthal magnetic fields. For a longitudinal magnetic field, a significant enhancement of the soliton-like properties of the solution in the models of nonlinear electrodynamics was found, which is more consistent with the concept of the geon as a particle-like solution. For the cases of radial and azimuthal magnetic fields, the possibility of regularization of discontinuities in solutions arising in Maxwell's electrodynamics was considered. It was found that the considered models of nonlinear electrodynamics, despite the fact that they are based on the condition of regularization of the field of a point charge in flat space-time, do not lead to the elimination of the magnetic field singularities in the described self-gravitating solutions. This property cannot be attributed to the shortcomings of such models, since in some cases the singularity of solutions can be eliminated by an additional modification of the gravitational sector [25]. The study of the possibility of such a regularization is of considerable interest and will be carried out in the future.
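For the longitudinal configurations of Sect. 4, the solutions are obtained by integrating the metric-function equation numerically, backwards from a large radius where the Melvin asymptotics (25) hold. The sketch below illustrates only that strategy; since the model-dependent Eq. (24) is not reproduced here, the right-hand side is a stand-in ODE, α'' + α'/r = 4 exp(−2α), chosen because the Melvin profile α = ln(1 + r²) satisfies it exactly, so the backward integration should simply recover that profile. A NED model would supply its own right-hand side and coefficients; everything in the snippet is illustrative rather than the authors' code.

```python
# Sketch: backward integration from the Melvin asymptotics used as a boundary condition.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, y):
    # Stand-in for the model-dependent equation; here alpha'' + alpha'/r = 4*exp(-2*alpha).
    alpha, dalpha = y
    return [dalpha, 4.0 * np.exp(-2.0 * alpha) - dalpha / r]

r_max, r_min = 50.0, 1e-3
alpha_inf = np.log(1.0 + r_max**2)            # Melvin asymptotic value at r_max ...
dalpha_inf = 2.0 * r_max / (1.0 + r_max**2)   # ... and its slope, used as the "boundary condition"

sol = solve_ivp(rhs, (r_max, r_min), [alpha_inf, dalpha_inf],
                dense_output=True, rtol=1e-10, atol=1e-12)

r = np.linspace(r_min, r_max, 5)
alpha = sol.sol(r)[0]
b = np.exp(-2.0 * alpha)                      # for the Melvin profile, b = 1/(1 + r^2)^2
print(np.max(np.abs(alpha - np.log(1.0 + r**2))))   # should be close to zero
```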
5,334
2022-10-01T00:00:00.000
[ "Physics" ]
Breadth and Exclusivity of Hospital and Physician Networks in US Insurance Markets Key Points Question How does the breadth of health care networks and the degree to which they overlap vary within and across specialties and insurance markets? Findings In this cross-sectional study of 1192 health care networks, large-group employer networks were broader than small-group employer, marketplace, Medicare Advantage, and Medicaid managed care networks. In many states, narrower networks had as much, if not more, overlap across different insurers’ networks than the broadest networks; areas with less concentrated insurance, physician, and hospital markets had narrower and more exclusive networks. Meaning These findings suggest that the structure of plan networks may be a factor in determining care affordability and continuity in the United States, particularly given how frequently individuals change insurance plans. Introduction This document provides methodological details and supplemental analyses for the manuscript. This report was compiled on 2020-10-16 using R version 4.0.2 (2020-06-22). Replication code is available on github. Geographic Measures Defining Geographic Accessibility Our primary measure of geographic accessiblity was based on a driving-time based isochrone centered on the population-weighted centroid of ZIP-code tabulation areas (ZCTAs). We identified centroids based on 2010 Census block population data using the Geographic Correspondance Engine at the Missouri Census Data Center. We constructed thirty-and sixty-minute isochrones for each ZIP using the Mapbox Application Protocol Interface (API). Isochrones for an Example ZIP Code eExhibits 1 and 2 map the isochrones for ZIP 53005 (blue polygon) near Milwaukee, WI. Also plotted in eExhibit 2 is the ZIP's surrounding county (solid black line) and the locations of primary care physicians (blue dots) and hospitals (pink squares) within the 30-minute isochrone. eExhibit 1: 30-and 60-Minut e Isochrones for Example ZIP As eExhibit 2 shows, isochrone-based definitions of geographic accessibility do not (arbitrarily) limit measures of access to only those providers located within geo-political boundaries (e.g., the county). Indeed, eExhibit 1 demonstrates (for the 60-minute isochrone) that geographic access can often reach into neighboring states. For our primary results we utilize a 60-minute isochrone for every ZIP code, however in sensitivity analyses we considered a 30-minute isochrone for ZIPs located within metropolitan core-based statistical areas (i.e., non-rural areas). Validation of Hospital and Physician Data While the Vericred data included the specialty (for physicians) and geocoded address location for each clinician and hospital facility, we included additional sample inclusion safeguards to ensure that the "denominator" of geographically accessible clinicians/facilities from a given ZIP code included only those currently practicing in the area. Common Reasons for Network Errors According to CMS audits, the most common reason for provider network errors is incorrect information on clinic location and contact information for in-network clinicians (74% of all errors; see table below). A frequent reason for the other 26% of errors (e.g., physician should not be listed as in-network) is retirement and moving from the area or clinic/facility. 
This often occurs because insurance carriers have historically been fairly passive about updating provider network data (i.e., they do not routinely canvass the directory to ensure that every provider is still practicing at each location).

eExhibit 2: 30-Minute Isochrone and Surrounding County Boundary Points for Example ZIP (53005), with Geographic Location of Primary Care Physicians (circles) and Hospitals (squares)

To address this limitation we augmented the insurance network data with a comprehensive validation process to ensure that the information we used reflected the best possible information on the specialty and active status of physicians and hospitals in the U.S. We describe our validation process separately for hospitals and physicians below.

Hospital Data Validation Process

One challenge with hospital network data is that often only a single national provider identifier (NPI) is provided for facilities in a given network. However, hospitals often have multiple NPIs registered for different buildings and units. For example, our final sample includes 4,127 unique facilities that, collectively, have 14,680 NPIs associated with them. This creates opportunities for data errors because the NPI number listed by one carrier may not be the same NPI provided by another. Without further adjustment we might incorrectly determine that the same facility does not appear in both insurers' networks, which would invalidate our measures of exclusivity. We addressed this challenge by constructing a master hospital NPI crosswalk that identified all NPIs associated with a hospital. This crosswalk was constructed using American Hospital Association (AHA) and National Plan and Provider Enumeration System (NPPES) data. Following the methodology outlined by Cooper et al (Quart J. Economics 2019), the following steps were used to construct the crosswalk: 10. Some hospitals in the NPI Registry were not in the AHA survey data files. For these hospitals, we pick one NPI as 'PRIMARY' and, using the match steps outlined above, add an 'X' to the AHA ID column and append the 'PRIMARY' NPI to all matched rows.

Physician Data Validation Process

To validate physician specialty, active status, and practice location information we drew on 2019 data from IQVIA and Hospital Compare. IQVIA routinely canvasses office-based physician clinics nationwide to collect information on specialty and organizational relationships, among other things. Because the IQVIA data are primarily sold for marketing purposes, the data contain up-to-date contact information (including clinic ZIP code) for nearly all active office-based (as well as some facility-based) physicians nationwide. One downside to the IQVIA data is that the canvassing frame (mostly office-based physician clinics) undercounts certain physician specialties and types. The Medicare Payment Advisory Commission, for example, has found that IQVIA data do not cover roughly 30 percent of physicians who bill Medicare in a given year. Not surprisingly given its sampling frame, the IQVIA undercount is concentrated among certain hospital-based specialties (e.g., radiologists, pathologists, and anesthesiologists) and among physicians working predominantly in hospitals and other facilities (e.g., emergency medicine physicians and general internists working as hospitalists and intensivists). To address these data gaps and to further validate the information contained in the IQVIA and Vericred data we drew on additional December 2019 data from Physician Compare.
Physician Compare is updated twice monthly by CMS and captures current information on clinic addresses, primary specialty, and licensure data. Critically, Physician Compare captures all active physicians who submitted a Medicare claim within the last 12 months of data collection, or who newly registered within the Medicare Provider Enrollment, Chain, and Ownership System (PECOS) within 6 months of data collection. Thus, it is effectively a continually-updated census of physicians billing Medicare. That said, Physician Compare also has its own limitations since it will not capture physicians who do not bill Medicare. We geocoded the location of all physicians using address data from these three data sources (Vericred, IQVIA and Physician Compare) based on the following process. Generally speaking, this process utilized the most detailed address information available from the data sources with the broadest agreement on clinic/facility location. Our process also reflects the assumption that any address information in IQVIA and Hospital Compare was more current than information in the Vericred data. This assumption rests on the observation (relayed to us by Vericred) that their address data was largely based on the NPPES (which often list a home address if the physician has licensure data sent there, and is less frequently updated to reflect changes in address, residency/training status, and clinic location) and network scrapes from the web. Likewise, we assumed that if a physician did not appear as active in either IQVIA or Hopstial Compare data, then he or she was likely not directly engaged in patient care in 2019. Finally, our process ensured that we identified all clinics where a physician practices. 1. We first compared unique NPIs in either IQVIA and Physician Compare to get a "master" listing of active physicians in the U.S. 2. We then took all NPIs, their specialty, and the geographic location of all listed clinics from this master listing to the Vericred data. 3. We next determined whether the clinic ZIP code listed in the Vericred data matched any of the ZIP codes associated with the NPI in the IQVIA and Hospital Compare data. 6. With this master data created, we then geocode each address (or population-weighted ZIP centroid, in cases where IQVIA data were used). These geocoded coordinates are then used to isolate the physicians within each ZIP code's isochrone. To ensure consistency, we utilized primary specialty information from the data source used to inform the clinic address locations. For example, if Hospital Compare was used to determine the address or addresses for a given NPI, then we utilized the primary specialty information from Hospital Compare for that NPI. Generally speaking the data sources agreed on specialty, though there was some disagreement between IQVIA and Hospital Compare for certain sub-specialties. For example, among physicians identified as emergency medicine in either IQVIA or Hospital Compare, 8% had a different primary specialty in one of the data sources. Likewise, there was 6.9% disagreement for Cardiology. All other subspecialties considered had much lower rates of disagreement (e.g., 0.3% for anesthesiology, 0.7% for radiology, 0.3% for behavioral health, 0.9% for othopedic surgery, and 3.6% for primary care). Based on this process we then compared counts of active physicians by specialty to another data source: the 2015 American Medical Association Master File. 
While these comparisons were somewhat apples-to-oranges (for example, the AMA data were from 2015 and our data reflected 2019 counts, and the AMA data did not include clinical psychologists in its definition of behavioral health physicians), our counts aligned well with counts of active physicians in the masterfile.

eExhibit 3: Total and Unique Counts of Hospitals and Physicians by Input Data Source

Network Landscape Files

Our analyses also required identification of the set of plan networks available in each ZIP. For some markets (marketplace, small-group) this was straightforward because CMS and Vericred had crosswalks that allowed us to map each plan marketed in the ZIP to its network. However, for other markets (Medicaid managed care, large group, and Medicare Advantage) we had to infer the set of networks using additional data sources. Below, we describe the process of identifying provider networks available and/or marketed in each ZIP.

Marketplace and Small Group Plans

To identify marketplace and small group plans available in a ZIP code, we first mapped each (population-weighted) ZIP centroid to its county and 3-digit ZIP code. States, in conjunction with CMS, define health insurance rating areas for marketplace and small group plans based on clusters of contiguous counties or 3-digit ZIP codes. This mapping allowed us to map each ZIP code to its geographic rating area. Crosswalks provided by CMS and Vericred (including HIX Compare, which Vericred curates) facilitated matching of each rating area to the set of marketplace and small group plans marketed in the area. We then mapped each of these plans to its network using Vericred crosswalks. The resulting output provided us with the set of networks available in the ZIP. Based on this process, of all possible ZIP-network matches, we had network data for 99.72% for marketplace and 97.7% for small-group markets.

Medicaid Managed Care Plans

Identifying the set of available Medicaid managed care plans required the use of enrollment data for individual insurance carriers. These data were obtained for January 2019 from Decision Resources Group (DRG). Specifically, the DRG data contained county-level enrollment (based on enrollee residence). For each county we identified the set of carriers with non-zero Medicaid managed care enrollment based on beneficiary residence. We then mapped each ZIP's population-weighted centroid to the underlying county to match this set of carriers to every ZIP code in the county. We then matched this set of carriers to the Vericred data to identify the Medicaid networks available. Notably, certain states (AL, AK, AR, CT, ID, ME, MT, NC, OK, SD, VT) do not utilize Medicaid managed care and consequently were excluded from our analyses of Medicaid networks. In addition, Vericred did not capture network information for all Medicaid carriers nationwide. However, our data matching process determined that the Vericred data captured networks for 76.8% of Medicaid managed care enrollment nationwide. The table below shows, however, that the fraction of matched networks varied across states.

Commercial (Large-Group) Networks

We identified the set of commercial (large-group self insured) carriers using a similar method as for Medicaid managed care. Specifically, we used the county-level DRG data to isolate the set of carriers with non-zero enrollment (based on beneficiary residence).
We then mapped this set of carriers to each ZIP code with a population-weighted centroid in the county, as well as to the set of networks associated with those carriers in the Vericred data. The Vericred data also included a market identifier for each network, allowing us to identify only those large-group networks associated with carriers in the ZIP. Because the DRG data included enrollment data on all carriers in the ZIP we were able to estimate, to a rough approximation, what fraction of carriers matched between the DRG data and the Vericred large group networks. Based on the above process we matched networks to carriers with approximately 64% of large-group self insured enrollment nationwide. One challenge to identifying large group networks is that we only knew enrollment at the carrier level, not the plan level. Moreover, even if we knew plan-level enrollment figures for each ZIP, we lacked a crosswalk mapping each network to each plan. Therefore, our analysis of large group newtorks has several limitations: (1) we were not able to capture large-group networks for carriers covering 36% of enrollment; (2) we could not verify that, among the 64% for whom we did have networks, those networks were exhaustive of the networks offered by the carriers. In other words, unlike all other markets considered, while we might match at the carrier level, we lacked data to verify that we matched at the network level. Medicare Advantage To isolate the MA plans available in the area we used the July 2019 enrollment by contract/plan/state/county files constructed by CMS. We first mapped each ZIP centroid to its county, then matched this county information with MA enrollment data in the county. Because some individuals live in multiple locations and thus may buy their MA plans from a different residence, we restricted the set of MA networks to only those with at least 2% enrollment market share in the county. We then used a crosswalk mapping each MA plan ID to its network ID in the Vericred data using a crosswalk provided to us by Vericred. This process resulted in matching networks to plans covering 97% of Medicare Advantage enrollment in July 2019. Measures of Market Concentration Our analysis relies on measures of market concentration for hosptials, physicians and insurers based on the Hirschman-Herfindahl index, or HHI. HHI measures are commonly used to quantify the degree of concentration among market participants, and are constructed based on the sum of squared market shares (expressed as a percentage) within a defined market. Values closer to zero indicate markets in which market shares are more evenly distributed among multiple market participants. By comparison, an HHI value of 10,000 (i.e., 100 ) indicates a completely consolidated market. The geographic market definition we used for all HHI measures was the commuting zone. Commuting zones are comprised of geographically contiguous counties with strong within-area clustering of commuting ties between residental and work county, and weak across-area ties. Thus, this set of approximately 600 commuting zones nationwide approximates areas of economic activity over which it is reasonable to consider measures of market concentration. We used commuting zones based on Census data from 2010 constructed by researchers at Penn State. While we provide basic details on our HHI measures here, we provide extensive documentation, code, and analyses of HHI methods in our github document Defining Markets for Health Care Services. 
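As a small illustration of the HHI construction just described (the sum of squared percentage market shares, so a fully consolidated market scores 10,000), the snippet below computes an HHI for one hypothetical commuting zone from carrier enrollment counts. The carrier names and enrollment figures are made up, and the population-weighted ZIP-to-commuting-zone aggregation used for the hospital HHI is omitted.

```python
# Minimal sketch: Herfindahl-Hirschman index from enrollment counts by market participant.
def hhi(counts):
    """Sum of squared market shares expressed in percent (monopoly = 10,000)."""
    total = sum(counts)
    return sum((100.0 * c / total) ** 2 for c in counts)

enrollment_by_carrier = {"Carrier A": 120_000, "Carrier B": 80_000,
                         "Carrier C": 40_000, "Carrier D": 10_000}
print(round(hhi(enrollment_by_carrier.values())))   # 3600 for these illustrative shares
```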
Physician HHI Our measure of physician HHI relies on the methodology outlined in Richards et al (Health Serv Res 2017). As described in that study: [HHI measures reflect] the allocation and organization of all physician specialists within a given geographic area. In other words, we capture if an insurer would have relatively few or many physician practices to negotiate with in regard to enrollee access and payments (as well as other contractual terms). To construct our HHI measure by commuting zone we utilized detailed information on organizational relationships reported in the IQVIA data. By assigning each physician/NPI to her organization, we were able to construct count measures of the total number of physicians per organization in an area. These count measures were the basis for the market shares that fed into standard HHI equations (i.e., the sum of squared market shares). Hospital HHI Our measures of hosptial HHI draw on the 2016-2017 CMS hospital service area files that capture ZIP-level utilization of each general acute care hosptial. We use these data to identify the set of hospitals that treat Medicare patients who reside in each ZIP. We calculate an HHI value for each ZIP code and then aggregate up to the commuting zone level by taking a weighted (by 2010 Census population) average across ZIPs with centroids located within the commutizing zone. Insurer HHI To construct measures of insurer HHI we calculated total enrollment by insurance carrier in each county, then aggregated these enrollment totals up to the commuting zone level. These market shares then fed through standard HHI formulas. We did not calculate HHIs separately by market (e.g., large group, small group, etc.) under the assumption that insurers use the full weight of their enrollment totals as leverage when negotiating networks. Network Measures Network Definitions We measure provider network connections using binary bipartite networks that capture innetwork relationships between the ( ) individual hospitals or physicians of a given specialty ( ) and the ( ) provider networks available in a geographic market ( ). That is, these × networks capture binary information on whether each physician/hospital is in-network for each provider network available in the geographic market. As described above, the rows of these matrices are determined by the total number of physicains/hospitals within the isochrone, while the columns of the matrix are determined by the number of networks marketed as part of plans available in the ZIP. Thus, each combination of ZIP and specialty type receives its own bipartite matrix. Physicians and hospitals are identified using their national provider identifier (NPI). In the case of hospitals (which, as noted above, can have multiple NPIs), we utilize a single NPI so we do not overcount hosptials in the bipartite networks. We further define another set of binary matrices to identify networks offered by the same insurer, and networks for different insurance markets (large-group, small-group, Medicaid managed care, Medicare Advantage, and marketplace). These matrices are utilized to construct measures separately by market, and to construct measures of the degree of connections across insurers. Example Network eExhibit 6 below provides an example bipartite network matrix that will be used throughout this section. In this example, there are 10 clincians (NPIA-NPIJ), 10 provider networks (A1-G4), and 7 insurers (A-G). One insurer (G) offers 4 separate networks (G1, G2, G3, and G4). 
Each cell contains binary information on whether the clinician is in-network for the network. While the network breadth measure provides information on the overall size of a provider network, it does not capture information on how connected the provider network is to other networks in the area. That is, provider networks for two competing insurers could be relatively broad but each may have exclusive contracts with its in-network physicians and/or hospitals. We thus construct a measure, the normalized strength of the network. This measure is defined as the total number of connections to other networks divided by the total number of possible connections. Thus, a completely exclusive network (i.e., no connections with other insurers) will receive a value of 0, while a network with many connections will receive a value closer to 1. It is easiest to build intuition for the normalized strength measure using the bipartite (i.e., provider-network) matrix. eExhibit 10 highlights how we measure strength for a single network (G1). In this example we are interested in the degree to which a given insurer's network is connected to other insurers' networks. Thus, a count of the total number of shared connections with other insurers' networks (shown in red in eExhibit 10) is the numerator for measuring strength. In the case of network G1 the total number of shared connections is 5. The denominator for the normalized strength measure is a count of the total number of possible connections a given network could have with other networks. For the example network, there are 4 in-network clinicians and 6 networks offered by other insurers. Thus, if those 4 in-network clinicians for G1 also are in-network for the other 6 networks, there are a total of 24 possible connections. Thus, the normalized strength value for network G1 is 0.2083 (5/24). Our primary results rely on strength measures as described above, that is, measures constructed by considering only connections with other insurers' networks. However, we also considered a total strength measure whereby we allowed for connections across all networks in the area. eExhibit 11 below adds in our grouped strength ratio measure as hollow "rings" at each node. As can be seen in eExhibit 11, networks only loosely connected to other insurers' networks receive small normalized strength values.
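To make the breadth and normalized-strength definitions concrete, here is a minimal sketch on a made-up clinician-by-network matrix (the actual values of eExhibit 6 are not reproduced, and this is not the authors' code). It follows the same recipe as the G1 example above: shared connections of a network's in-network clinicians with other insurers' networks, divided by the maximum possible number of such connections.

```python
# Minimal sketch: breadth and normalized strength on a small bipartite in-network matrix.
import numpy as np

networks = ["A1", "B1", "C1", "G1", "G2"]
insurers = ["A", "B", "C", "G", "G"]
M = np.array([                      # 6 clinicians x 5 networks; 1 = clinician is in-network
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [0, 1, 1, 0, 0],
])

def breadth(j):
    """Share of all clinicians in the area who are in network j (overall network size)."""
    return M[:, j].mean()

def normalized_strength(j):
    """Connections of network j's clinicians to OTHER insurers' networks, divided by the
    maximum possible number of such connections (0 = fully exclusive, near 1 = heavily shared)."""
    own = np.array(insurers) == insurers[j]
    members = M[:, j] == 1
    shared = M[np.ix_(members, ~own)].sum()
    possible = members.sum() * (~own).sum()
    return shared / possible

j = networks.index("G1")
print(breadth(j), normalized_strength(j))
```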
4,951.8
2020-12-01T00:00:00.000
[ "Medicine", "Economics" ]
The metaphor of Yahweh as healer in the prophetic books of the Old Testament

A few possible responses are offered to the following question: How did the prophets portray Yahweh as healer? According to the prophets, Yahweh's healing was more than a medically verifiable physical process. The prophetic books focus more on the spiritual healing of Israel and Judah than on the physical healing of an individual (Jer. 3:22; 30:17; Hos. 14:5 [4]). In some instances Yahweh offered comprehensive deliverance or concrete promises for a "sick" nation. This comprehensive healing includes the rebuilding of the city and temple, forgiveness of sins, joy and prosperity (Jer. 30:17; 33:6). Yahweh's healing was not restricted to his elected people alone; he even offered healing to Egypt (Is. 19:22).

Introduction

The Old Testament employs many metaphors for Yahweh because no single metaphor can describe everything about Israel's God: Yahweh as judge, king, warrior, father, artist, gardener-vinedresser, mother, shepherd, et cetera. Yahweh as healer is not a major metaphor in the Old Testament, but it nevertheless plays a significant role in the prophetic books (Brueggemann, 1997:230-261). The term prophetic books used in this article refers to the classical prophetic books from Isaiah to Malachi and does not include the "former prophets" from the Hebrew Bible (Joshua to 2 Kings).

The depiction of Yahweh as healer in the prophetic books

In this section we shall make a distinction between different types or ways of healing, but we accept the fact that there is a close relationship between them and that healing must be seen in a comprehensive manner. (See Kee, 1992:659-664, for a more detailed discussion of the semantic field of the healer metaphor.) We see several references in Jeremiah and Hosea. There is a strong possibility that Hosea influenced Jeremiah, especially if one compares healing passages like Jeremiah 3:22 and Hosea 14:5 (Petersen, 2002:132-133). The Hebrew reference is always placed first, with the NRSV in brackets. All English citations in this article are taken from the NRSV of the Bible.

Yahweh as healer of physical sickness

Isaiah 38:1-20 is probably the only passage in the prophetic books referring to the physical healing of an individual, king Hezekiah. The question may be posed: Why did Yahweh heal king Hezekiah? Why only a reference to a king and not to other people? Two possible responses may be given: the king was seen as the representative of the people of the city (cf. Is. 38:6), or King Hezekiah was a descendant of David and one of the most faithful kings of Judah (Is. 38:3, 5; Wildberger, 2002:466). An interesting element in this text is the role played by the prophet Isaiah. According to Isaiah 38:1 the prophet Isaiah proclaimed that Hezekiah would die and told him to get his house in order. King Hezekiah did not want to accept the prophet's words, put his trust in Yahweh and prayed for recovery (Is. 38:2-3).
Yahweh healed him and added fifteen years to his life.Although this passage focused on Hezekiah's physical healing, verse 6 also speaks about a more comprehensive healing.Yahweh promised healing and deliverance for Hezekiah and the city.One cannot exclude the metaphorical use of this passage.Clements (1985:127) says that the prayer in Isaiah 38 is to be understood "both literally in reference to an illness which befell the king but also metaphorically of the unease and spiritual confusion which had stricken his kingdom". 6 Chan, Song andBrown (1997:1166) argue that the healing of physical diseases was certainly not excluded from the wider metaphor of national healing and restoration. Yahweh as healer of broken relationships In most prophetic references words like health and healing have a broader meaning than only restoration of physical health.They are often referring to the reconciliation that will take place between God and his people.Even though God has inflicted the blow, He will take action to renew the relationship and heal the wounds (Simundson, 1982:338).In the Old Testament (especially in the Psalms) bodily sickness is very closely connected with sin and is therefore a manifestation of God's wrath against specific transgressions (Ps.32:1-4; 38:1-10; 39; Graber & Müller, 1976:167-168).The prophetic books also portray the "sickness and wounds" (hkmw ylj) of the city and its people (Jer.6:7b).Six times Jeremiah referred to the immanent lwdg rbv (great collapse). 8The Hebrew term rbv (disaster/ collapse) is the central word used to describe the people's broken condition and one third of the Old Testament occurrences are found in the book of Jeremiah (Brown, 1995:191).Fortunately the Bible does not only focus on sin and the consequences of sin, but portrays Yahweh as healer of sins and broken relationships. Repentance, healing and forgiveness In some prophetic texts repentance or return (bwv) is seen as a prerequisite for Yahweh's healing (Is.6:10; 19:22; Jer.3:22; Hos.6:1). 9However, this cannot be true for all the "healing" passages.Repentance is not always seen as a pre-condition for healing.In Hosea 11 healing and forgiveness are granted despite the fact that Israel kept sacrificing to the Baals and offering incense to idols.The child, Israel, disrespected the love of God her father (cf.Hos.11:1-3).The divine announcement of healing in Hosea 14:5 (4) cannot be the result of the repentance on the part of the people (Hos.14:3-4) for the backsliding is still going on.Yahweh's healing and forgiveness are not occasioned by Israel's repentance or her good intentions (Hos.14:3-4), but springs solely from His generous and free grace. Isaiah 57:17-19 also portrays Yahweh's healing despite Israel's apostasy.The text depicts Yahweh's anger because of their wicked covetousness, but also emphasises that Yahweh will heal them and "repay them with comfort".Jeremiah 30:17 clearly states that Yahweh will restore the health of his people and heal their wounds not because they repented, but because (yk) others had called them outcasts and no one cared for them. 
Healing and forgiveness do not stand in opposition to one another.Healing could also be understood as forgiveness and becomes a picture of God's mercy and forgiveness.Stoebe (1997Stoebe ( :1257) ) argues that the verb apr with Yahweh as subject is filled with a deepened content and also means forgiveness.The term apr is rarely used in an exclusively spiritual context and in the strict sense of the word at least two apr verses in the prophetic books refer directly to divine forgiveness, namely Hosea 14:5 (4) and Jeremiah 3:22 (Brown, 1995:381, 387;O'Kennedy, 2001:470). 10Besides these two references, Isaiah 53:5 also refers indirectly to divine forgiveness.The healing gained for others by the suffering of the servant includes the forgiveness of Israel's sins and the removal of their punishment.This passage summarises Deutero-Isaiah's essential message that Yahweh has forgiven his people and is on the point of restoring them (Whybray, 1981:175-176;Westermann, 1985:263). One must acknowledge the fact that the other "healing" references do not stand in contrast to Yahweh's forgiveness.They form a broad basis for the understanding of divine forgiveness, and confirm the opinion that Yahweh is a God who grants forgiveness and that forgiveness encompasses more than the removal of sin (O' Kennedy, 1997:104-105). Healing after judgment and punishment Frequently the prophets refer to Yahweh in terms of divine restoration for the nation after a time of judgment and punishment 10 O' Kennedy (2001:456) mentions three prominent reasons why the verb apr in Hosea 14:5 (4) may be understood as forgiveness: (1) The verb apr is used together with the word hbwvm, expressing the forgiveness of disloyalty; (2) One finds other terms or expressions for forgiveness and mercy in the same passage: @w[ acn ("take away/forgive all guilt"), hbdn !bha ("I will love them freely"); and wnmm ypa bv ("my anger has turned away from them"); (3) Several metaphors appear in Hosea 14:1-10 (9) describing the restored relationship between God and Israel as a result of divine forgiveness: "I will be like the dew to Israel"; "he shall blossom like the lily", et cetera. The portrayal of Yahweh as healer is not limited to restoration of what Yahweh has damaged through his judgment.Yahweh's healing capacity pertains to whatever damage has been done, by whatever agent (Brueggemann, 1997:252;Chan, Song & Brown, 1997:1164-1165). Yahweh as healer of the land, city and temple The healer metaphor does not merely refer to the healing of people. There are a few prophetic texts portraying the healing of the land, city, temple and even God's creation.Sometimes it is difficult to distinguish between people and material things but one can say that Yahweh as healer offered a comprehensive healing of people and material objects.This comprehensive healing includes the physical rebuilding of Jerusalem (Jer.30:17-18), abundance of prosperity and security (Jer.33:6), and the restoration of the fortunes of his people (Hos.6:11-7:1). Ezekiel 47:1-12 describes different dimensions of Yahweh's comprehensive healing.The prophet visualises water flowing from the threshold of the temple to the Dead Sea.The water becomes deeper as it moves away from its source and gives life and healing wherever it flows.Ezekiel 47:1-12 describes the following: the water sweetens the salty water of the Dead Sea (v.8); there are all kinds of trees on the bank of the river that will bear fresh fruit (v. 
7, 12); the leaves of the trees are used for healing (v.12); there are living creatures (v.9) and many fish (v.9).The water may possibly represent the cosmic river that flows from the temple or the great water source that fertilised Eden (cf.Gen. 2:10-14).At the very least the river is a vivid symbol of the life-giving presence of Yahweh in his temple (Blenkinsopp, 1990:231-232;Block, 1998:696-697).One can pose the question whether the vision in Ezekiel 47:1-12 is describing the healing of the land, temple or creation.It is difficult to make a distinction.We can rather say that this text emphasises the comprehensive healing of Yahweh.Yahweh as healer is present in his temple and will bring healing for the people of the land and his creation. Yahweh as healer for other nations Yahweh's healing was not restricted to his elected people alone.There are at least three references focusing on the relationship between Yahweh the healer and other nations, one dealing with Babylon (Jer.51:8-9) and two texts referring to Egypt (Is. 19:22;Jer. 46:11). In Jeremiah 46:11 God, through the prophet, refers to Egypt's search for healing but that it will be in vain.They are summoned to go to Gilead12 for healing medicine.Jeremiah 51:8-9 refer to Yahweh's intention to heal Babylon but she could not be healed: Suddenly Babylon has fallen and is shattered; wail for her! Bring balm for her wound; perhaps she may be healed.We tried to heal Babylon, but she could not be healed.Forsake her, and let each of us go to our own country; for her judgment has reached up to heaven and has been lifted up even to the skies. The above-mentioned texts portray Yahweh's intention to heal other nations, but the texts do not say if real healing was accomplished.Isaiah 19:22 is perhaps the only prophetic text referring to "real" healing of another nation.Yahweh promises healing to Egypt after their repentance.In this passage one also finds the dual role of Yahweh as the one who both smites and heals (cf.Is. 57:18-19). Yahweh, the only God or One who can heal The prophetic portrayal of Yahweh as healer differs from the rest of the ancient Near Eastern world.The prophets focused on monotheism, emphasising clearly that it was one God (Yahweh) who both smote and healed (Brown, 1995:238).There are a few prophetic texts focusing on the fact that other gods, nations and kings cannot heal.In Hosea 5:13 one reads the following: When Ephraim saw his sickness, and Judah his wound, then Ephraim went to Assyria and sent to the great king.But he is not able to cure (apr) you or heal (hhg) your wound (rwzm). Assyria is understood as the subject of apr but one can assume that Hosea 5:13 refers indirectly to Yahweh as the healer of Judah and Ephraim.Assyria or the great king of Assyria is not able to heal; therefore Yahweh, the only King of Israel, is the healer.This fact is emphasised if you read Hosea 5:13 together with Hosea 6:1.It is interesting to note that Hosea 5:13 uses two different words to describe the sickness (ylh; rwzm) and the healing of the people (apr; hhg).This may emphasise the great extent of Israel's sickness and the fact that Assyria is not able to do anything (O' Kennedy, 2001:461).Yahweh alone is healer because He alone is the initiator of the covenant with Israel.When Israel and Yahweh are turned away from one another, the covenant is violated (Sweet, 1982:147). 
The prophet Jeremiah proclaims that Yahweh is the only genuine healer.An important text like Jeremiah 17:14 ("Heal me, O Lord, and I shall be healed") is part of a meditation which contrasts trust in humans with trust in Yahweh.Jeremiah 17:5 says "Cursed are those who trust in mere mortals" while Jeremiah 17:7 states "Blessed are those who trust in the Lord".All the other healing passages in Jeremiah emphasise the fact that Yahweh is the real healer (cf.Jer.30:17; 33:6).There are even sarcastic references to medicine and healers in Gilead, an important medical center in the time of the prophets (Jer. 8:22;46:11;.The above-mentioned texts indicate that Jeremiah views the human healer as inherently ineffective (Avalos, 1995:287-290). The prophetic books do not merely contrast Yahweh with humans.It is more appropriate to understand a contrast between Yahweh as Israel's healer and the healing deities of other nations such as Egypt, Assyria and Syria.Yahweh asserts that He is the real healer of his people, not Sekhmet, Marduk, Baal or any other alien god. Yahweh was never sick himself as the other deities could be.The gods of Egypt could suffer from disease.When an eclipse of the sun occurred in Egypt it was attributed to an eye disease of the god Ra.This same god nearly died after being stung in his heel by a scorpion.This could never happen to Yahweh, the healthy God of Israel.In contrast to the gods described in the Ugaritic texts, the God of Israel is never subjected to fate or overpowered by disease (Korpel, 1990:337, 341;Brown, 1995:74, 78;Wilkinson, 1998:56). 3.6 Yahweh as future healer on the Day of the Lord The prophetic books are not merely portraying Yahweh as healer in the present time, but also refer to the future.Isaiah 30:26 mentions the alteration of the light from the sun and moon, and the intensification of the sun's light as a sign of God's healing.On the Day of the Lord Yahweh will heal his people's wounds, resulting in the complete restoration for Israel and Judah (cf. Jer. 30:17a;Jer. 33:6;Simondson, 1982:338;Kee, 1992:659-664;Brown, 2004:598). Isaiah 53:5 describes the healing of Israel at the cost of the servant's wounds and bruises.Brown (1995:242) believes that one must understand this verse in its broadest possible term.The prophetic hope was for the whole man to be wholly healed.This is underscored by the prophetic expectation of the inbreaking of the kingdom of God.Not only would righteousness and peace prevail, but sickness would also disappear (cf.Is. 33:24; 35:1-6; 58:8).The future healing and restoration will not only be for Israel but also for Egypt (Is. 19:22). Conclusion The above discussion has led to the following concluding remarks: • The Old Testament employs many different metaphors for Yahweh (e.g.Yahweh as father, king, judge, et cetera).Yahweh as healer is not a major metaphor in the Old Testament, but it plays a significant role in the "classical" prophetic books (Isaiah-Malachi).The portrayal of Yahweh as healer was significant for the people of Israel.Sickness and death were viewed in a negative way.Yahweh was the healer and giver of life. • In many instances the healer metaphor is conveyed by the Hebrew root apr with Yahweh as subject; however, there are several other verbs and nouns focusing on the healer metaphor (restore, make whole, medicine, balm, disease, wound, sickness, et cetera). 
• According to the prophets, Yahweh's healing was more than a medically verifiable physical process. The prophetic books focus more on the spiritual healing of Israel and Judah than on the physical healing of an individual (cf. Jer. 3:22; 30:17; Hos. 14:5 [4]). Isaiah 38:1-20 is probably the only passage in the prophetic books referring to the physical healing of an individual, king Hezekiah. • Yahweh is seen as the healer of sins and broken relationships. Healing could also be understood as forgiveness. In some prophetic texts repentance and obedience are seen as a prerequisite for Yahweh's healing (Is. 6:10; 19:22; Jer. 3:22; Hos. 6:1), but there are at least two references that speak of God's healing despite Israel's apostasy. • The healer metaphor does not merely refer to the healing of people. There are several prophetic texts discussing the healing of the land, city, temple and creation (Is. 30:26; 33:6). • Yahweh's healing was not restricted to his elected people alone; He even offered healing to Egypt (Is. 19:22) and Babylon (Jer. 51:8-9). • The prophetic books depict Yahweh as the only real healer. There are no limits to Yahweh's performance as healer and He cannot be compared with any other person or god. Yahweh is healer in the present time, but will also be a healer in the future (Is. 19:22; 30:26; 53:5; Jer. 30:17; 33:6; Mal. 4:2).
3,994.2
2007-07-27T00:00:00.000
[ "Linguistics" ]
Fermitin family member 2 promotes melanoma progression by enhancing the binding of p-α-Pix to Rac1 to activate the MAPK pathway We identified fermitin family member 2 (FERMT2, also known as kindlin-2) as a potential target in A375 cell line by siRNA library screening. Drugs that target mutant BRAF kinase lack durable efficacy in the treatment of melanoma because of acquired resistance, thus the identification of novel therapeutic targets is needed. Immunohistochemistry was used to identify kindlin-2 expression in melanoma samples. The interaction between kindlin-2 and Rac1 or p-Rac/Cdc42 guanine nucleotide exchange factor 6 (α-Pix) was investigated. Finally, the tumor suppressive role of kindlin-2 was validated in vitro and in vivo. Analysis of clinical samples and Oncomine data showed that higher levels of kindlin-2 predicted a more advanced T stage and M stage and facilitated metastasis and recurrence. Kindlin-2 knockdown significantly inhibited melanoma growth and migration, whereas kindlin-2 overexpression had the inverse effects. Further study showed that kindlin-2 could specifically bind to p-α-Pix(S13) and Rac1 to induce a switch from the inactive Rac1-GDP conformation to the active Rac1-GTP conformation and then stimulate the downstream MAPK pathway. Moreover, we revealed that a Rac1 inhibitor suppressed melanoma growth and metastasis and the combination of the Rac1 inhibitor and vemurafenib resulted in a better therapeutic outcome than monotherapy in melanoma with high kindlin-2 expression and BRAF mutation. Our results demonstrated that kindlin-2 promoted melanoma progression, which was attributed to specific binding to p-α-Pix(S13) and Rac1 to stimulate the downstream MAPK pathway. Thus, kindlin-2 could be a potential therapeutic target for treating melanoma. INTRODUCTION Melanoma is one of the most malignant cutaneous cancers. Its occurrence has increased in the past several decades, especially in the Western world. Melanoma with early metastasis can develop rapidly and eventually cause death [1][2][3][4]. For advanced or recurrent melanoma, surgery alone does not have a good therapeutic effect; thus, other therapeutic options, including radiotherapy, immunotherapy, immune checkpoint inhibitor therapy, and molecular targeted therapy, are needed [5][6][7][8][9]. PD-1 immune checkpoint blockade therapy has already been used in melanoma; however, advanced metastatic melanoma exhibits a high rate of innate resistance (60-70%) to this treatment [10]. A high percentage of melanomas harbor BRAF mutations and RAS mutations, which affect the downstream MAPK pathway to influence melanoma progression [11,12]. These results provide reasonable evidence to support combination therapy with the BRAF inhibitor dabrafenib and the MEK inhibitor trametinib, and this combination has been proven to improve survival in patients with BRAF V600E/K mutations [13,14]. Because of acquired resistance to BRAFV600E inhibitors, the long-lasting therapeutic effect of BRAFV600E inhibitors is limited; moreover, acquired resistance to MAPK-targeted therapy results in failure to respond to anti-PD-1 therapy. Thus, finding a combination of molecular targeted therapies that more successfully inhibit the MAPK signaling pathway is essential [10,15,16]. Our previous siRNA library screening identified numerous new proteins implicated in melanoma progression [17]. Among these proteins, fermitin family member 2 (FERMT2, also known as kindlin-2) was selected for functional evaluation in melanoma growth and metastasis. 
Kindlin-2, is a member of the kindlin family, which contains three members that have a highly conserved FERM domain and usually function to link the cell membrane to the cytoskeleton and as molecular linkers [18]. Recently, some studies have discovered that kindlin-2 mutation or dysregulation can promote the development of certain cancers, including breast cancer, hepatocellular carcinoma (HCC), esophageal cancer, prostate cancer, gastric cancer and glioma progression [19][20][21][22][23][24][25]. In our study, we have explored the potential molecular mechanisms by which kindlin-2 regulates the growth and metastasis of melanoma and, more importantly, evaluated its clinical significance to identify an improved option for melanoma treatment. Our results demonstrated that kindlin-2 knockdown inhibited cell proliferation and metastasis by hindering the binding of Rac1 and p-α-Pix; thus, Rac1 could not be activated to promote MAPK pathway signaling. Rac1 is a member of the typical Rho guanosine triphosphate phosphohydrolase (GTPase) family and can cycle between an active, guanosine triphosphate (GTP)-bound conformation and an inactive, guanosine diphosphate (GDP)-bound conformation. This process is regulated by guanine nucleotide exchange factors (GEFs), which lead to Rac1 activation, and by GTPase-activating proteins (GAPs), which inactivate Rac1 [26]. α-Pix is a GEF that can activate p21 Rac1 and Cdc42 but not RhoA [27]. We discovered that high kindlin-2 expression can promote the binding of p-α-Pix to Rac1 and then activate the MAPK pathway. Recent studies showed that kindlin-2 promoted tumor growth and progression [28,29]. In our study, we confirmed the role of kindlin-2 in melanoma progression, and also discovered that Rac1 inhibition could hinder the growth and metastasis of melanoma. Moreover, the combination of a Rac1 inhibitor and vemurafenib (a drug that targets mutant BRAF kinase but lacks long-lasting efficacy because of acquired resistance [30]) suppressed melanoma growth and metastasis to a higher degree than either agent as monotherapy. Thus, we identified kindlin-2 as a potential therapeutic target for melanoma. Association between kindlin-2 expression and clinicopathological features of melanoma patients We screened an siRNA library targeting 46,000 human genes in A375 melanoma cells. Kindlin-2 knockdown by siRNA significantly suppressed cell viability by 59.22% (P = 0.000188), indicating that kindlin-2 might be a potential target in melanoma. Then, the relationship between the clinicopathological characteristics of melanoma patients and kindlin-2 protein expression was analyzed. Among the 82 patients with melanoma, 54 showed strong kindlin-2 expression, 28 showed weak kindlin-2 expression, and 28 showed negative kindlin-2 expression. The proportion of scores and representative examples of kindlin-2 expression in melanoma tissues are shown in Figure S1. The data in Table 1 indicate that a high kindlin-2 expression level might be related to T3-T4-stage (P < 0.05) and M1-stage (P < 0.05) disease. However, no evidence of an obvious association between kindlin-2 protein expression and sex (P = 0.1090), age (P = 0.1646), tumor location (P = 0.4444), N stage (P = 0.6925) or TNM stage (P = 0.0602) was observed. Transcription levels of kindlin-2 in patients with melanoma The Oncomine database suggested that patients with metastasis had higher kindlin-2 mRNA levels than those without metastasis ( Fig. S2A). 
High kindlin-2 mRNA expression was found to be correlated with M1-stage disease (Fig. S2B). In addition, kindlin-2 mRNA levels were significantly increased in patients with melanoma recurrence (Fig. S2C). Kindlin-2 promoted the proliferation of melanoma cells First, we determined the expression levels of kindlin-2 in human melanoma cell lines (A375, A875, MeWo, WM35, SK-Mel-2, and SK-Mel-28) by Western blotting (Fig. 1A). Kindlin-2 expression was detected in the cytoplasm in different cell lines (Fig. 1B, C). Then, we suppressed kindlin-2 expression in the MeWo and WM35 cell lines and overexpressed kindlin-2 in the A375 and A875 cell lines. Kindlin-2 knockdown via specific shRNAs suppressed its expression, whereas kindlin-2 overexpression significantly increased its expression (Fig. 1D). Kindlin-2 knockdown obviously inhibited the proliferation of melanoma cells and resulted in a marked decrease in the colony formation rates of MeWo and WM35 cells. In contrast, kindlin-2 overexpression dramatically increased the proliferation of melanoma cells and promoted colony formation (Fig. 1E, F). Flow cytometric analysis demonstrated that the reduction in kindlin-2 protein expression led to an increased rate of apoptosis in the melanoma cell lines MeWo and WM35 (Fig. 1G). Collectively, these results indicate that kindlin-2 plays a key role in mediating melanoma cell proliferation and growth. Kindlin-2 influenced the cell cycle in melanoma cells Compared with the control groups, groups with kindlin-2 suppression exhibited an increased proportion of G1 phase cells but a decreased proportion of S phase and G2 phase cells. In contrast, kindlin-2 overexpression led to a decrease in the proportion of G1 phase cells and an increase in the proportion of S phase and G2 phase cells. Thus, kindlin-2 knockdown induced G1 phase cell cycle arrest, while kindlin-2 overexpression promoted cell division ( Fig. 2A). Kindlin-2 knockdown inhibited the migration but not the invasion potential of melanoma cells After interference, the MeWo and WM35 cell lines exhibited an apparent decrease in migration but not invasion compared with the control cells (P < 0.05), while the A375 and A875 cell lines showed increased migration but not invasion after kindlin-2 overexpression (Fig. 2B-D). The results of anoikis assays showed that the anoikis rates were significantly increased in shkindlin-2 cells (MeWo and WM35) in suspension, kindlin-2 overexpression markedly reduced the anoikis rates in the A375 and A875 cell lines (Fig. 2E). Analysis of the epithelial markers E-cadherin and ZO-1 and the mesenchymal markers N-cadherin and vimentin revealed that kindlin-2 overexpression reduced E-cadherin and ZO-1 expression and increased N-cadherin and vimentin expression. However, kindlin-2 knockdown resulted in the inverse effects (Fig. 2F). Then, we knockdowned the expression of E-cadherin and ZO-1 by siRNA, respectively, the expression of kindlin-2 had no change (Fig. S3). That is, kindlin-2 can mediate cell-matrix adhesion itself, also it can influence cell-cell adhesion by affecting the expression of E-cadherin and ZO-1. Kindlin-2 bound independently to Rac1 and α-Pix To identify the proteins that interact with kindlin-2 in melanoma cells, we performed immunoprecipitation combined with mass spectrometry in A375 cells with an anti-kindlin-2 antibody. Rac1, a member of the Rac subfamily of Rho family small GTPases with strong evidence of its dysregulation in cancer, was found to be a candidate kindlin-2-interacting protein. 
Moreover, α-Pix, a GEF (activator) of the Rho family small GTP-binding proteins Rac1 and Cdc42, was also found to be a candidate (Figs. S4, S5). [Figure 1 legend (panels C-G): C Immunohistochemical analysis of kindlin-2 localization in melanoma samples. D Kindlin-2 levels measured by western blotting, E cell viability by CCK-8 assay, F colony formation, and G apoptosis by FACS in melanoma cells with kindlin-2 knockdown or overexpression; the relative percent of apoptotic cells was calculated. Figure 2 legend (panels B-F): B migration assessed as the average gap width (μm) in a wound-healing assay, C migration and D invasion measured by Transwell assays, E anoikis by FACS, and F EMT markers by western blotting in melanoma cells with kindlin-2 overexpression or knockdown. Data are presented as means ± SDs of three independent tests; *P < 0.05, **P < 0.01, ***P < 0.001 versus controls.] The interaction between kindlin-2 and Rac1 and the interaction between kindlin-2 and α-Pix were further confirmed by co-IP and immunofluorescence colocalization analyses in different melanoma cell lines. The results showed that kindlin-2 and Rac1 were mainly localized in the cytosol and α-Pix was localized in the cytoplasm. Rac1 and kindlin-2, as well as α-Pix and kindlin-2, were co-localized in the cytoplasm (Figs. 3A, B, S6). In addition, a GST pull-down assay revealed that kindlin-2 can directly bind to Rac1 and α-Pix, respectively (Fig. 3C). Also, we utilized isothermal titration calorimetry to quantitatively measure the affinities of kindlin-2 for Rac1 and α-Pix. Kindlin-2 was found to bind to Rac1 with Kd = 14.9 ± 4.2 μM, and the Kd value for kindlin-2 and α-Pix was 21.6 ± 3.9 µM (Fig. 3D). To verify whether kindlin-2 influenced the binding of Rac1 and α-Pix, we overexpressed the kindlin-2 protein and found that the binding of Rac1 and α-Pix strengthened, while suppressing kindlin-2 expression weakened the binding of Rac1 and α-Pix (Fig. 3E). Kindlin-2 activates the MAPK pathway by promoting the binding of Rac1 to p-α-Pix We analyzed the activation of Rac1 and found that Rac1 activation was significantly increased after kindlin-2 overexpression in the A375 and A875 cell lines but substantially decreased after kindlin-2 suppression in the MeWo and WM35 cell lines. We also conducted integrin activity assays and found that overexpression of kindlin-2 in A375 cells could promote integrin activity, but the GTP-Rac1 level was not affected by integrin inhibition (Fig. S7).
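To put the reported dissociation constants in perspective, the short calculation below applies a simple one-site binding model, fraction bound = [L] / (Kd + [L]), to the two Kd values given above. It is purely illustrative: the kindlin-2 concentrations used are hypothetical and are not measurements from this study.

```python
# Illustrative only: one-site binding model, f = [L] / (Kd + [L]),
# applied to the dissociation constants reported above. The kindlin-2
# concentrations below are hypothetical examples, not values from the study.

KD_RAC1 = 14.9   # µM, kindlin-2 / Rac1 (reported)
KD_APIX = 21.6   # µM, kindlin-2 / alpha-Pix (reported)

def fraction_bound(ligand_conc_um: float, kd_um: float) -> float:
    """Fraction of binding sites occupied at a given free ligand concentration."""
    return ligand_conc_um / (kd_um + ligand_conc_um)

for conc in (1.0, 10.0, 50.0):  # hypothetical free kindlin-2 concentrations in µM
    print(f"[kindlin-2] = {conc:5.1f} µM -> "
          f"Rac1 occupancy {fraction_bound(conc, KD_RAC1):.2f}, "
          f"alpha-Pix occupancy {fraction_bound(conc, KD_APIX):.2f}")
```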
As the downstream effect of Rac1-GTP is MAPK pathway activation, Western blotting was carried out to detect alterations in the MAPK pathway. Kindlin-2 knockdown in MeWo dramatically suppressed the phosphorylation of the JNK, p38, and ERK proteins, but had nearly no effect on the levels of total JNK, p38, and ERK, while in WM35 melanoma cells (harbored the BRAF mutations) with kindlin-2 knockdown, the phosphorylation of the JNK and p38 proteins were decreased, but had nearly no effect on the levels of total JNK and p38. In contrast, kindlin-2 overexpression in A375 and A875 cells significantly increased the phosphorylation levels of the JNK, p38, and ERK proteins but, similarly, did not change the levels of total JNK, p38, and ERK. These findings indicate that the MAPK signaling pathway is likely involved in the kindlin-2-mediated promotion of melanoma growth and metastasis. However, the expression of α-Pix was not significantly altered (Fig. 4A, B). Thus, we knocked down α-Pix by siRNA in A375 cells (Fig. 4C). After interference, Rac1 activation was markedly decreased; moreover, the phosphorylation levels of the JNK and p38 proteins were decreased, but the levels of total JNK and p38 were barely affected (Fig. 4D). To determine whether kindlin-2 can bind only p-α-Pix, alkaline phosphatase was used to dephosphorylate α-Pix. After phosphatase treatment, the binding of kindlin-2 to α-Pix was abrogated in melanoma cell lines (Fig. 4E). To investigate the binding mode of kindlin-2 to Rac1 as well as to p-α-Pix, docking simulation studies were carried out. First, phosphorylation sites on α-Pix and the binding domain of p-α-Pix on kindlin-2 were predicted (Table S1, Table S2). To identify the phosphorylation sites on α-Pix, the predicted phosphorylation sites (S13, S19, T9, and T23) were individually mutated in A375 cells. Unlike wild-type (WT) α-Pix, α-Pix S13A failed to activate Rac1 and interact with kindlin-2 ( Fig. 4G, G). To discover the binding domains on kindlin-2 for p-α-Pix and Rac1, kindlin-2 truncation mutants were constructed. The kindlin-2 fragment containing residues 328-499 pulled down p-α-Pix (Fig. 4H, I), and the fragment containing residues 1-105 pulled down Rac1 (Fig. 4J, K). Moreover, when the residues 1-177 of Rac1 was deleted, The Rac1 fragment containing residues 178-192 cannot pull down kindlin2 (Fig. S8A). After interference, the expression of phosphorylation of the JNK and p38 proteins were suppressed when the Rac1 fragment containing residues 178-192, but the levels of kindlin2, p-α-Pix, JNK and p38 were barely affected (Fig. S8B). After the residues 1-177 of Rac1 was deleted, cell growth and cell migration were suppressed, while cell apoptosis was promoted (Fig. S8C, D, E). Suppression of RAC1 reversed the overexpression of kindlin-2 1A-116, an inhibitor of RAC1, was used to treat A375 cells. The appropriate dose and treatment time of 1A-116 were 5 µg/ml and 12 h, respectively (Fig. S9). After expression interference, the expression of RAC1-GTP and the phosphorylation of the JNK and p38 proteins were suppressed, but the levels of total RAC1, JNK and p38 were barely affected. Because A375 cells harbor the BRAF mutation, therefore, the ERK pathway is constitutively activated at the high level of the MAPK, irrespective of Rac1 activity (Fig. 5A). Then, we discovered that 1A-116 blocked p-α-Pix interaction of RAC1 (Fig. S10). Moreover, in the CCK-8 and colony formation assays, inhibition of Rac1 markedly suppressed the proliferation and growth of A375 cells (Fig. 5B, C). 
Flow cytometric analysis demonstrated that inhibiting Rac1 activation led to an increased rate of apoptosis (Fig. 5D). The proportion of G1 phase cells was increased after Rac1 inhibition, while that of S phase and G2 phase cells was decreased (Fig. 5E). After treatment with 1A-116, melanoma cells exhibited an apparent weakening of migration but not invasion (Figs. 5F, G, S11). The results of the anoikis assays showed that the anoikis rate was significantly increased after the application of 1A-116 (Fig. 5H). Kindlin-2 knockdown and RAC1 inhibition suppressed melanoma growth in a mouse xenograft model Kindlin-2 knockdown inhibited tumor growth, while kindlin-2 overexpression promoted tumor growth; however, this promotion was reversed by the RAC1 inhibitor. Daily treatment of mice with compound 1A-116 at a dose of at least 3 mg/kg body weight/day markedly reduced tumor formation. In addition, none of these treatments significantly affected the body weight of the mice, and no other signs of acute or delayed toxicity were observed in the mice during treatment (Fig. 5I-Q). In addition, Western blot analysis of cell lysates from the xenografted tumor tissues showed that silencing kindlin-2 in mice inoculated with kindlin-2-knockdown tumor cells led to decreased phosphorylation levels of the JNK and p38 proteins, while kindlin-2 overexpression led to increased phosphorylation levels of the JNK and p38 proteins; however, this increase was inhibited by 1A-116 (Fig. S12). [Figure 4 legend: Kindlin-2 activated the MAPK pathway by promoting the binding of Rac1 to p-α-Pix. A Magnetic bead pulldown assay of Rac1 activity and western blotting of α-Pix in melanoma cells with kindlin-2 overexpression or knockdown. B MAPK pathway proteins assessed by western blotting in melanoma cells with kindlin-2 overexpression or knockdown. C α-Pix levels assessed by western blotting after α-Pix knockdown. D Magnetic bead pulldown assay for Rac1 activity and western blots of MAPK pathway proteins. E Co-IP of kindlin-2 and α-Pix in melanoma cells after treatment with alkaline phosphatase. F Magnetic bead pulldown assay for Rac1 activity and G co-IP of kindlin-2 and α-Pix in A375 cells expressing α-Pix with individual mutations of the S13, S19, T9 and T23 phosphorylation sites.] Kindlin-2 knockdown and RAC1-GTP inhibition suppressed melanoma metastasis in a mouse model of metastatic melanoma We found that compared to control animals, mice that received kindlin-2-knockdown tumor cells (MeWo) exhibited a weakened metastatic ability and fewer lung metastatic nodules (Figs. 6A-E, S13A). However, the metastatic ability was strengthened, and the number of lung metastasis nodules was substantially increased in mice injected with kindlin-2-overexpressing A375 cells; moreover, daily treatment of the mice with compound 1A-116 at a dose of at least 2 mg/kg body weight/day significantly reduced the formation of total metastatic lung colonies (Figs. 6F-J, S13B). Combination of the Rac1 inhibitor and vemurafenib strengthened the therapeutic effect of each monotherapy in melanoma The A375 cell line in our study harbors a BRAF mutation, and we combined vemurafenib (20 mg/kg) and the Rac1 inhibitor to treat melanoma [31].
Compared with vemurafenib or Rac1 inhibitor treatment alone, the combination of the Rac1 inhibitor and vemurafenib significantly reduced melanoma growth (Fig. 7A-D). In addition, none of these treatments, whether monotherapy or combination therapy, significantly affected the body weight of the mice (Fig. 7F). Regarding the metastatic capacity, the inhibitory effect in the combination treatment group was better than that in the vemurafenib treatment group (Figs. 7F-J, S14). DISCUSSION Recently, progress has been made in understanding the structure and biological functions of the kindlin-2 protein. Certain cancers, such as breast cancer, HCC, esophageal cancer, prostate cancer and gastric cancer, as well as glioma, have been found to be related to kindlin-2 dysfunction. The responsible mechanisms include the following: enhancing cell proliferation and migration by stabilizing DNMT1 or driving cancer progression by activating the TGF-β/CSF-1 signaling axis in breast cancer, promoting invasion and metastasis by activating Wnt/ β-catenin signaling in HCC, activating the β-catenin/YB-1/EGFR pathway to promote progression in glioma, facilitating invasiveness via the NF-κB pathway to upregulate MMP-9 and MMP-2 expression in prostate cancer, and accelerating invasion in gastric cancer by phosphorylating integrin β1 and β3 [19][20][21][22][23][24][25]. In our study, we found that kindlin-2 can affect the proliferation and migration of melanoma cells, which was consistent with some research. However, our results did not discover that kindlin-2 affected melanoma cell invasion, considering that this was related to slight differences in the grouping and statistical methods of the experiments. Many physiological processes, including mesenchymal stem cell differentiation, podocyte morphology acquisition, cell spreading, vascular barrier function, cardiac function and chondrogenesis, are affected by kindlin-2 mutation or dysregulation [32][33][34][35][36][37]. In malignant melanoma. kindlin-2 was identified as a candidate target from an siRNA library screen, and we found that in melanoma, kindlin-2 knockdown inhibited cell proliferation, promoted apoptosis, and suppressed growth and metastasis, while kindlin-2 overexpression resulted in the inverse effects. Further analysis showed that kindlin-2 overexpression promoted the phosphorylation of proteins (p-p38, p-JNK, and p-ERK) in the MAPK pathway; the responsible molecular mechanism is the strengthening of Rac1 and p-α-Pix (S13) binding mediated by high kindlin-2 expression. Then, the inactive, GDP-bound Rac1 conformation undergoes a switch to the active, GTP-Rac1 conformation and stimulates the downstream MAPK pathway, which leads to the growth and metastasis of melanoma. Importantly, we discovered that kindlin-2 can bind to only p-α-Pix (S13) and not to unphosphorylated α-Pix or α-Pix phosphorylated at other sites-most likely, α-Pix can bind to kindlin-2 only when the conformation of α-Pix transitions after phosphorylation at S13. Moreover, we revealed that the fragment of kindlin-2 containing residues 328-499 can bind to p-α-Pix and that the fragment of kindlin-2 containing residues 1-105 can bind to Rac1. In addition, our clinical data and Oncomine database strengthened the evidence that higher kindlin-2 expression predicts more advanced T stage and M stage and facilitates metastasis and recurrence. However, it made more sense to compare the expression of kindlin-2 in the N stages or M stages within the stage III and IV sub-cohorts. 
Therefore, more detailed patient information was needed for analysis. To the best of our knowledge, this study is the first to document that kindlin-2 plays a role in the growth and metastasis of melanoma by strengthening the binding of Rac1 and p-α-Pix (S13) and subsequently stimulating the downstream MAPK pathway. Furthermore, we attempted to determine the clinical therapeutic significance of our findings. First, the Rac1 inhibitor 1A-116 was used to treat kindlin-2-overexpressing A375 cells, and this treatment greatly reversed the effects of kindlin-2 overexpression on melanoma cell growth and metastasis. In animal studies, further evidence was obtained to demonstrate that kindlin-2 knockdown can dramatically suppress the growth and metastasis of melanoma. However, kindlin-2 overexpression definitely promoted the growth and metastasis of melanoma, but this promotive effect was hindered by the Rac1 inhibitor. The RAS-RAF-MEK-ERK pathway is a kinase cascade that controls multiple vital cellular processes, such as cell cycle progression, survival and migration [38,39]. Drugs targeting this pathway have achieved effective outcomes in the treatment of certain cancers with BRAF mutations, including colorectal cancer, hairy cell leukemia, melanoma, thyroid cancer, non-small cell lung cancer, etc. [40][41][42][43][44]. Vemurafenib was discovered as a highly specific BRAFV600 kinase inhibitor and produced a notable response in advanced melanoma patients with the BRAFV600 mutation [45][46][47]. However, tumor resistance occurs rapidly in some patients; unfortunately, searching for the appropriate combination therapy for melanoma remains challenging, and widespread metastasis is acknowledged as the main cause of death in melanoma patients [48,49]. In our research, kindlin-2 was found to enhance the binding of Rac1 and α-Pix and subsequently stimulate the MAPK pathway, and the Rac1 inhibitor was proven to significantly affect the growth and, more importantly, the metastasis of melanoma both in vivo and in vitro. Thus, we tried to determine whether the combination of Fig. 5 Suppression of RAC1 reversed the effect of kindlin-2 overexpression, kindlin-2 knockdown and RAC1 inhibition suppressed melanoma growth. A After the overexpression of kindlin-2 or the treatment of 1A-116 with kindlin-2 overexpression, a magnetic bead pulldown assay of Rac1 activity was conducted, and western blotting was used to assess the expression of proteins in the MAPK pathway. B The viability of A375 cells after the overexpression of kindlin-2 or the treatment of 1A-116 with kindlin-2 overexpression was measured by a CCK-8 assay. C Colony formation assay after the overexpression of kindlin-2 or the treatment of 1A-116 with kindlin-2 overexpression in A375 cells. D Apoptosis in A375 cells after the overexpression of kindlin-2 or the treatment of 1A-116 with kindlin-2 overexpression was detected by FACS analysis and the relative percent of apoptotic cells was calculated. E The percentage of A375 cells in each phase was quantified after the overexpression of kindlin-2 or the treatment of 1A-116 with kindlin-2 overexpression F The migration ability of A375 cells after the overexpression of kindlin-2 or the treatment of 1A-116 with kindlin-2 overexpression was assessed by measuring the average gap width (μm) in a wound-healing assay. G The migration ability of A375 cells after the overexpression of kindlin-2 or the treatment of 1A-116 with kindlin-2 overexpression was measured by a Transwell assay. 
H Anoikis of A375 cells after the overexpression of kindlin-2 or the treatment of 1A-116 with kindlin-2 overexpression was measured by FACS analysis, and the relative percent of apoptotic cells was calculated. I-K The morphology, weight and volume of the tumor from each mouse at sacrifice (MeWo cells with or without kindlin-2 knockdown). L The tumor volume in each mouse was measured and recorded every three days throughout the course of the experiment MeWo cells with or without kindlin-2 knockdown). M-O The morphology, weight and volume of the tumors from each mouse at sacrifice (A375 cells with or without kindlin-2 overexpression or with the treatment of 1A-116 at different doses after kindlin-2 overexpression). P,Q The tumor volume and body weight of each mouse were measured and recorded every three days throughout the course of the experiment (A375 cells with or without kindlin-2 overexpression or with the treatment of 1A-116 at different doses after kindlin-2 overexpression). The data are presented as the means ± SDs of three independent tests. **P < 0.01, ***P < 0.001, significant differences between the treatment groups and control groups. the Rac1 inhibitor and vemurafenib could improve outcomes. Encouragingly, the combination therapy group exhibited a markedly greater therapeutic effect than the vemurafenib and Rac1 inhibitor monotherapy groups. Dysregulation of Rac1 activity has been found in certain cancers, including melanoma, breast cancer, colorectal cancer, gastric cancer, etc., and Rac1 has been discovered to influence cell proliferation, adhesion, migration, and invasion in the progression of certain cancers [50][51][52][53][54][55][56][57]. As previously reported, activation of Rac1 can promote MAPK pathway signaling; thus, we speculated that the combination of the Rac1 inhibitor and vemurafenib, which provided a double hit to the MAPK pathway, might result in improved therapeutic outcomes, and the results of our animal studies validated this hypothesis. Thus, melanomas with high kindlin-2 expression and BRAF mutations can be treated more effectively with the combination of a Rac1 inhibitor and vemurafenib than with either monotherapy. In summary, our study demonstrated a substantial potential oncogenic role of kindlin-2 in melanoma. kindlin-2 can bind specifically to p-α-Pix (S13) and Rac1 and subsequently enhance the binding of p-α-Pix (S13) to Rac1 to induce the switch of Rac1 from the inactive, Rac1-GDP conformation to the active, Rac1-GTP conformation. In turn, the downstream MAPK pathway is stimulated to promote the growth and metastasis of melanoma. Furthermore, we revealed that Rac1 inhibition can reverse melanoma growth and metastasis caused by high kindlin-2 expression, and combination therapy with a Rac1 inhibitor and vemurafenib can result in a better therapeutic outcome than monotherapy in melanoma patients with high kindlin-2 expression and BRAF mutations (Fig. 8). Collectively, these findings indicate that kindlin-2 is a potential diagnostic biomarker and therapeutic target for melanoma. MATERIALS AND METHODS Cell culture Six melanoma cell lines (A375, A875, MeWo, WM35, SK-Mel-2, and SK-Mel-28) were cultured for the following experiments. The BRAF mutational status of melanoma cell lines were displayed in Table S3. Rac1 activation assay with magnetic bead pulldown The following protocol was followed: cells were cultured and rinsed twice with ice-cold PBS. Then, ice-cold leupeptin (0.5 ml per 150-mm tissue Fig. 
6 Kindlin-2 knockdown and RAC1 inhibition suppressed melanoma metastasis. A Tumor metastasis was monitored using a small animal imaging system. B Lung metastasis was detected by HE staining; ×20 and ×40 magnification. C Number of metastatic nodules in the lungs. D Luciferase signal intensities in the mice. E The number of GFP-labeled CTCs was determined by flow cytometry of whole blood samples. (a-e. MeWo cells with or without kindlin-2 knockdown). F Tumor metastasis was monitored using a small animal imaging system. G Lung metastasis was detected by HE staining; ×20 and ×40 magnification. H Number of metastatic nodules in the lungs. I Luciferase signal intensities in the mice. J The number of GFP-labeled CTCs was determined by flow cytometry of whole blood samples. (F-J. A375 cells with or without kindlin-2 overexpression or with the treatment of 1A-116 at different doses after kindlin-2 overexpression). *P < 0.05, **P < 0.01, ***P < 0.001, statistically significant. culture plate) was added, and cells were detached by scraping and lysed. Next, the cell lysates were transferred to microfuge tubes on ice, and 0.5 ml of each cell extract was aliquoted to a microfuge tube. A total of 10 µl (10 µg) of Rac1/Cdc42 Assay Reagent (PAK-1 PBD-conjugated magnetic beads) was added to each tube and incubated for 45 min at 4°C. The beads were pelleted by centrifugation (10 s, 14,000×g, 4°C), and the supernatant was removed and discarded. The beads were washed 3 times with leupeptin and resuspended in 40 µl of 2× Laemmli buffer. Next, 2 µl of 1 M dithiothreitol was added and boiled for 5 min, and the beads were pelleted by centrifugation. The supernatant and beads were mixed, and 20 µl of the mixture was loaded on a polyacrylamide gel for SDS-PAGE. After that, the proteins were transferred to a membrane. After the above steps, the membranes were blocked with 5% skim milk (w/v) at room temperature for 1 h and incubated overnight at 4°C with an anti-Rac1 antibody (diluted to 1 µg/ml). Secondary antibodies were then added and incubated for 1 h at room temperature. Protein-antibody complexes were then detected by chemiluminescence (Pierce ECL Western Blotting Substrate, Thermo, USA). Xenograft animal experiment and tissue processing All animal procedures were performed in accordance with the Guide for the Care and Use of Laboratory Animals (NIH publication No. 80-23, revised 1996) and the Institutional Ethical Guidelines for Animal Experiments developed by Sun Yat-sen University. Melanoma cells (5 × 10 6 in 100 μl of PBS) were injected subcutaneously into the left flank of female athymic nude mice aged 3-4 weeks, and tumors developed at the injection sites after 1 week. When the formed tumors reached 100 mm 3 , the animals were divided randomly into different groups with 6 mice per group. Certain groups were treated with different concentrations (1, 2, 3, 4, 5, and 6 mg/ kg) of 1A-116 or with vemurafenib (20 mg/kg) daily via intratumoral injection [58]. The experiment was terminated 18 days after tumor cell inoculation. Tumor sizes were measured using Vernier calipers every three days, and tumor volumes were calculated as follows: V = (width 2 × length)/ 2. At the termination of the experiment, the mice were sacrificed, and the tumors were excised from each mouse and weighed. A portion of the tumors was fixed with 10% formalin and used to prepare tumor tissue lysates for western blot analysis. No blinding was done. Metastasis model A tail vein injection model was used for lung colonization assays. 
All female athymic nude mice aged 3-4 weeks were divided randomly into different groups with six mice per group. All mice were injected via the tail vein with 1 × 10 5 luciferase and GFP-positive melanoma cells in 0.1 ml of serum-free medium. On the day of cell inoculation, bioluminescence imaging using the IVIS@ Lumina II system (Caliper Life Sciences, Hopkinton, MA) was performed to verify a fluorescence signal in all mice after intraperitoneal injection of 4.0 mg of luciferin (Gold Biotech) in 50 µl of saline. The metastases were monitored using the IVIS@ Lumina II system (Caliper Life Fig. 7 Combination of the Rac1 inhibitor and vemurafenib strengthened the therapeutic effect of each monotherapy. A-C The morphology, weight and volume of the tumors from each mouse at sacrifice. D, E The tumor volume and body weight of each mouse were measured and recorded every three days throughout the course of the experiment. F Tumor metastasis was monitored using a small animal imaging system. G Lung metastasis was detected by HE staining; ×20 and ×40 magnification. H Number of metastatic nodules in the lungs. I Luciferase signal intensities in the mice. J The number of GFP-labeled CTCs was determined by flow cytometry of whole blood samples. K A schematic model of the functions of kindlin-2 in melanoma growth and metastasis. (A-K A375 cells with or without kindlin-2 overexpression or with the treatment of 1A-116 after kindlin-2 overexpression or with the treatment of vemurafenib after kindlin-2 overexpression or with the combination of vemurafenib and 1A-116 after kindlin-2 overexpression). *P < 0.05, **P < 0.01, ***P < 0.001, statistically significant. Fig. 8 Influence of kindlin-2 in melanoma growth and metastasis, and treatment targeted to kindlin-2 and BRAF mutation. Kindlin-2 can bind specifically to p-α-Pix (S13) and Rac1, enhance the binding of p-α-Pix (S13) to Rac1 to induce the switch of Rac1 from the inactive, Rac1-GDP conformation to the active, Rac1-GTP conformation. Then, the downstream MAPK pathway is stimulated to promote the growth and metastasis of melanoma. The combination of Rac1 inhibitor and vemurafenib can reverse melanoma growth and metastasis caused by high kindlin-2 expression.
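For reference, the sketch below reproduces the two simple calculations used in the animal experiments described above: the caliper-based tumor volume, V = (width^2 × length) / 2, and the per-mouse dose implied by a mg/kg regimen. The tumor dimensions and the 20 g body weight are hypothetical examples, not data from the study.

```python
# Minimal sketch of the quantities described in the xenograft methods above:
# tumor volume from caliper measurements and a per-mouse daily dose from a
# mg/kg regimen. All input values are hypothetical.

def tumor_volume_mm3(width_mm: float, length_mm: float) -> float:
    """Tumor volume estimated from caliper width and length, V = (width^2 * length) / 2."""
    return (width_mm ** 2) * length_mm / 2

def dose_per_mouse_mg(body_weight_g: float, dose_mg_per_kg: float) -> float:
    """Daily dose for one mouse, converting body weight from grams to kilograms."""
    return body_weight_g / 1000 * dose_mg_per_kg

# Hypothetical example: a 9 mm x 12 mm tumor and a 20 g mouse on 3 mg/kg 1A-116.
print(f"Tumor volume: {tumor_volume_mm3(9, 12):.0f} mm^3")
print(f"1A-116 dose:  {dose_per_mouse_mg(20, 3):.2f} mg/day")
```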
7,831.6
2021-07-28T00:00:00.000
[ "Biology", "Chemistry" ]
BANKRUPTCY PREDICTION MODELS FOR LARGE AGRIBUSINESS COMPANIES IN AP VOJVODINA The aim of this paper is to present application of different methods used for predicting bankruptcy of large agricultural and food companies in AP Vojvodina, as well as to determine which model is the most suitable for analyzing the companies from the observed sectors. The following three models were applied in the paper: Altman’s Z’-score model, Kralicek DF indicators and Kralicek Quick test. The analysis included five companies from the agricultural sector and five companies from the food sector operating on the territory of AP Vojvodina in the period from 2015 to 2019. The results of the research based on the applied models showed that different conclusions can be made about the financial stability of the observed companies. Altman’s Z’-score model provided the most rigorous forecast in terms of the bankruptcy risk, while the results of Kralicek DF indicators and Quick test are relatively similar. © 2021 EA. All rights reserved. Introduction Agriculture, together with the food industry, is of strategic importance in the economic development strategy of the Serbian economy (Stošić & Domazet, 2014). In the period from 2002 to 2015, agriculture accounted for 11.1% of the total GVA in the Republic of Serbia (Novaković, 2019). The share of the food industry in the structure of GVA of the Republic of Serbia is about 4% (Domanović et al., 2018). The Republic of Serbia has favorable natural conditions for development of agriculture and thus also for food industry, which is inseparable from agriculture. For AP Vojvodina, which is dominantly agricultural area, the connection between these two industries is even more important. The common feature of these two sectors is duality of their structure, as they comprise a large number of micro and small companies, as well as a smaller number of medium and large companies, which are the backbone of these sectors. One of the key business issues of the companies in the agricultural and food sectors is related to awareness of the management about their financial position. In order to survive on the market and be competitive, every company must be able to assess the risk of insolvency, i.e. bankruptcy (Didenko, et al., 2012). There are a number of models for evaluating successfulness of a company's business. All of the models use different financial indicators, which are compared with the past or expected indicators for a specific company. The goal of financial indicators analysis is to timely detect the risk of a crisis in the functioning of the company (Jakovčević, 2011). Over time, various methods for assessing financial position have been developed, while the most commonly used and well-known methods include: Altman's Z-score model, Ohlson model, Zmijevski model, Kralicek DF and Quick test (Alihodžić, 2013; Barbut, a-Mis, and Madaleno, 2020). The paper analyzes five large companies from the agricultural sector and five large companies from the food sector operating on the territory of AP Vojvodina in the period from 2015 to 2019. The aim of this paper is to determine the financial position of the observed companies using Altman's Z score model and Kralicek DF indicators and Quick test. Literature review A number of authors from our country and the region have dealt with assessment of the bankruptcy risk, i.e. assessment of the financial position of agricultural and food companies. 
Stojanović & Drinić (2017) tested the application of Altman's Z-score models on a sample of 270 agricultural companies from the Republic of Srpska. The companies were analyzed in the period from 2011 to 2015 and it was concluded that none of Altman's models is suitable for assessing the creditworthiness of the observed agricultural companies, but that these models can be useful in detecting certain long-term financial difficulties. Apan et al. (2018) applied Altman's Z-score model and the VIKOR method to determine the financial position of 18 food companies operating in the Turkish market in the period from 2008 to 2014. By comparing the observed models, it was concluded that the VIKOR method shows better results. In the paper by Stošić (2019), Altman's Z'-score model was applied in order to assess the financial success of medium-sized companies in the Republic of Serbia. The research included companies from the manufacturing, trade, agricultural and construction industry sectors. The analysis of the obtained results showed that agricultural companies are in the gray zone and that, compared to the companies from other observed sectors, agricultural companies have the highest level of financial stability and liquidity as well as the lowest level of marketability. Kovács et al. (2020) analyzed the risk of bankruptcy for three agricultural companies operating in Hungary in the period from 2014 to 2018. The bankruptcy risk was assessed using the following four models: Altman Z-score model, Springate model, Comerford model, and Fulmer model. The results demonstrated that all four models provide the same results and that all three observed companies have a high risk of bankruptcy. Materials and methods The research analyzed the financial stability of 5 agricultural companies and 5 food companies, all of which are characterized as large companies, operating in the territory of AP Vojvodina in the period from 2015 to 2019. The data were taken from the annual reports of the companies (Business Registers Agency) in order to calculate the financial indicators used for analyzing the companies' financial stability. The financial stability of the observed companies was analyzed using the following three methods: Altman's Z-score model In 1968 Edward I. Altman (Altman, 1968) investigated the influence of various financial indicators on a company's risk of bankruptcy in the United States, and the result of this research is the model known as Altman's Z-score model. The research was performed on a sample of 66 manufacturing companies, including 33 companies that went bankrupt and 33 companies that operated successfully. Altman used the method of multivariate discriminant analysis to test the impact of 22 business indicators on the likelihood of bankruptcy and obtained a model in which the initial number of indicators was reduced to the 5 indicators proven to have the greatest impact on bankruptcy prediction. Depending on its impact on the company's operations, each indicator was assigned an appropriate weight. As a final result of the analysis, the following discriminant function was obtained: Z = 1.2X1 + 1.4X2 + 3.3X3 + 0.6X4 + 1.0X5, where Z = Z-score, X1 = working capital / total assets, X2 = retained earnings / total assets, X3 = EBIT / total assets, X4 = market value of equity / book value of total liabilities, and X5 = sales / total assets. Indicator X1 is an indicator of the company's liquidity, and the value of this indicator is obtained as the ratio of working capital and total assets of the company.
Working capital of the company is defined as current assets less current liabilities. Indicator X2 is obtained as the ratio of retained earnings and total assets and shows the cumulative profitability of the company. Indicator X3 is calculated as the ratio of earnings before interest and taxes (EBIT) and total assets, and this indicator shows the company's profitability. Indicator X4, which is obtained as the ratio of the market value of equity and total liabilities, shows how much the company's assets can decrease in value before its liabilities exceed the assets and the company enters the zone of insolvency. Indicator X5 is an indicator of the turnover of total assets, and it shows the ability of the assets to generate sales. Based on the obtained Z values, the companies are classified into three groups. The companies with a Z-score value above 2.67 are considered financially stable and are classified in the safe zone. If the value of the Z-score is between 1.81 and 2.67, it is considered that the business is financially unstable, but there is a chance of recovery, so the companies are classified in the gray zone. The companies with a Z-score value below 1.81 are the companies that will go bankrupt and they are in the distress zone. The disadvantage of the 1968 Z-score model was that it was not applicable to companies whose shares are not traded on the stock exchange. In 1983, Altman modified the original model (Altman, 1983). The difference between the original model and the Z'-score model is in indicator X4, in which the market value of equity is replaced by the book value. New weights were assigned to the indicators, obtaining the model with the following form: Z' = 0.717X1 + 0.847X2 + 3.107X3 + 0.420X4 + 0.998X5. Based on the modified model, the company is considered successful if the value of Z' is above 2.9. The value of Z' between 1.23 and 2.9 indicates that the company operates in the gray zone, while the value below 1.23 indicates high risk of bankruptcy, i.e., these companies are in the distress zone. Kralicek DF model The Kralicek DF (discriminant function) model evaluates a company's financial stability using several financial indicators. The first indicator shows the degree to which net cash flow covers liabilities. The second indicator is obtained as the ratio of total assets and total liabilities, and it shows the share of liabilities in total assets (Vučković, 2014). Kralicek Quick test The Quick test was created in the 1990s with the purpose of examining the financial performance of companies using four indicators, including two indicators related to financial stability and two profitability indicators. Each indicator is assigned a score within the range from 1 to 5, where 1 represents the best and 5 the worst result (Vukadinović et al., 2018). The final result is obtained as the average of the previously calculated average values of the indicators, expressed in points (Table 2). The first indicator shows the share of capital in total financing sources. The recommended value of this indicator is 10% or higher. The second indicator shows the debt repayment period, and if the value of this indicator is above 30 years, it is considered that the company has certain difficulties with solvency, while the recommended value of this indicator is 12 years or less. The third indicator shows the profitability of total assets in terms of operating profit. If the value of this indicator is negative, it is considered that the company has difficulties with solvency, while the recommended value is 8% or higher. The fourth indicator, calculated as (net profit + amortization) / business earnings, shows the share of cash flow in operating income. The recommended value of this indicator is 5% or higher (Vukadinović et al., 2018).
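To make the mechanics of the model concrete, the sketch below computes the Z'-score from the five ratios defined above and maps it to the safe, gray and distress zones used in this paper. It is a minimal illustration only: the input figures are hypothetical and the dictionary keys are names chosen here, not items taken from the companies' reports.

```python
# Minimal sketch of the Z'-score (Altman, 1983) as described above.
# Input figures are hypothetical; key names are illustrative, not a standard schema.

def altman_z_prime(f: dict) -> float:
    x1 = (f["current_assets"] - f["current_liabilities"]) / f["total_assets"]  # working capital / total assets
    x2 = f["retained_earnings"] / f["total_assets"]
    x3 = f["ebit"] / f["total_assets"]
    x4 = f["book_value_equity"] / f["total_liabilities"]
    x5 = f["sales"] / f["total_assets"]
    return 0.717 * x1 + 0.847 * x2 + 3.107 * x3 + 0.420 * x4 + 0.998 * x5

def zone(z_prime: float) -> str:
    if z_prime > 2.9:
        return "safe zone"
    if z_prime >= 1.23:
        return "gray zone"
    return "distress zone"

company = {  # hypothetical figures (e.g., in thousands of RSD)
    "current_assets": 520, "current_liabilities": 310, "total_assets": 1450,
    "retained_earnings": 180, "ebit": 95, "book_value_equity": 640,
    "total_liabilities": 810, "sales": 1250,
}
z = altman_z_prime(company)
print(f"Z' = {z:.2f} -> {zone(z)}")
```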
Results and discussions The results of the research show the financial position of the selected companies based on Z'-score model, Kralicek DF indicator and Quick test. The data required for calculation of the indicators used for the methods described in the previous chapter were obtained from the companies' financial reports. In order to determine the financial position of the companies, the data were analyzed in a period of five years The first step in the research was to calculate Z'-score indicators for the observed agricultural companies. These values indicate the financial position of the company (Table 3). Based on the values of Z' indicator for the company A, it can be observed that in 2015 the company was in the distress zone, i.e., at high risk of bankruptcy. Z' indicator had a negative sign, and such a low value of the indicator is accounted for by a negative business result achieved in the observed year. In 2016, the company A improved its financial position, achieved a positive business result and moved into the success zone, i.e., the safe zone, where it remained during the rest of the observed period (Z' >2.9). The company B also achieved a negative business result in 2015 and it was in the distress zone. In the following years, the company B achieved a positive result, but remained in the distress zone, which indicates that this company had difficulties with liquidity, profitability and solvency and is at risk of bankruptcy, as it was in the distress zone for the entire period (values of Z' indicators <1.23). The company C was in the gray zone during the entire observed period (1.23 <Z' <2.9), which indicates that the company is at risk of bankruptcy, but there is a possibility of improving its business and moving to the safe zone. Based on the values of Z' indicator for the company D in 2015, it can be concluded that the company operated in the gray zone (Z' <2.9). This result is accounted for by low level of liquidity and low profit rate of the company. In the following year, the company D moved to the safe zone (Z' >2.9), but due to low rate of return, it moved back to the gray zone in 2017 and remained in the gray zone for the rest of the observed period (values of Z' indicator <2.9). The company E was in the gray zone (Z' <2.9) in the period from 2015 to 2018 due to low liquidity rate and low profit rate, but in 2019 it moved to the safe business zone (Z' =3.028). Assessment of the financial stability of the observed agricultural companies was performed also by calculating the values of Kralicek DF indicator (Table 4). The data obtained from the companies' financial reports were used to calculate Kralicek Quick test indicators, the number of points assigned to each indicator and the indicators of financial stability (S1) and profitability (S2) and their arithmetic mean (T), which represents the total business result ( Table 5). The company A did not have difficulties with solvency in terms of its own financing in 2015 ( 10%). The second indicator had a negative sign, because the value of the achieved business result (loss) is higher than the amount of depreciation, so the obtained value indicates that the company had difficulties with solvency ( >12). The average value of these two indicators is used for assessing the financial stability, so it can be concluded that the company had very good financial stability (S1=2). 
The value of the profitability indicator was lower than the recommended value of 8%, and the share of cash flow in operating income was also below the recommended value of 5%. As the average value of these two indicators (the result of S2) is used for assessing profitability, it can be concluded that the company had difficulties with profitability in the observed year. The average value of the stability and profitability indicators suggests that the company had difficulties with solvency. In the following years (from 2016 to 2019), the solvency of the company was excellent and very good (T<2). The company B had good solvency (T <3) during the whole observed period, except in 2018 when the solvency of this company was at an unsatisfactory level. The poor financial position of the company B in 2018 is accounted for by too long a debt repayment period, poor profitability and a low share of cash flow in operating income, which was significantly lower than the recommended value of 5%. In 2015 and 2016, the company C showed the same trends for all four indicators and it was in the zone of good solvency (T <3). During these two years, the company had a high value of the self-financing indicator, which indicates a high share of equity. In the following years of the observed period, the company was in the zone of very good solvency, and it can be seen that in this period there was an increase in the share of equity and a decrease in the time required to repay the debt. The company E was in the zone of good solvency due to low profitability (S2) in 2015, 2017, and 2018. Its profitability improved in 2016 and the company was then in the zone of very good solvency. The company E was in the zone of very good solvency also in 2019 due to good financial stability (S1 =1). The average values of the indicators for the observed agricultural companies suggest that the companies did not have difficulties with solvency in terms of their own financing (the share of capital in total financing sources was at least the recommended 10%). The debt repayment indicator shows that in 2015 and 2018 the companies had difficulties with the debt repayment period, which was longer than the recommended value of 12 years. According to the average scores for the first two indicators, the companies in the agricultural sector had good financial stability during the observed period. The value of the profitability indicator was lower than the recommended value of 8%, with exceptions in some of the observed years. The values of the Z'-score indicator were also calculated for the observed food companies (Table 6). The values of the Z' indicator for the company F suggest that this company was in the gray zone in 2015 and 2019 (Z' <2.9), i.e., at risk of bankruptcy, due to a low profit rate. In the other years of the observed period, the company F was in the safe zone. The company G was improving its financial position over the years and, after being in the gray zone during the first four years (Z' <2.9), it moved to the safe zone in 2019 (Z' >2.9). The company H and the company I were in the gray zone during the entire observed period (1.23 <Z' <2.9). These results indicate that the companies are at risk of bankruptcy, but that it is still possible to improve the companies' business and move to the safe business zone. The company J was in the distress zone in 2015, while in 2016 and 2017 it moved to the gray zone due to an increased profit rate. However, as the company was in the distress zone during the rest of the observed period (Z' <1.23), it can be concluded that the company has a high risk of bankruptcy.
The financial stability of the observed food companies was also assessed by calculating the values of the Kralicek DF indicator (Table 7). According to the results of the Kralicek DF indicator, the company F moved from good financial stability in 2015 to the zone of very good financial stability in 2016. However, this was followed by moving again to the zone of good financial stability in 2017, where the company remained until the end of the observed period. The company G was in the zone of poor and average financial stability during the observed period. The company H showed marked instability during the observed period, moving from the poor financial stability zone in 2015 to the average stability zone in 2016, and then to the moderate insolvency zone in 2017. The following year, this company moved to the zone of average financial stability, while in the last year of the observed period the company was in the zone of good financial stability. The company I was in the zones of excellent and very good financial stability during the observed period, which indicates that this company is not at risk of bankruptcy. The company J was in the zone of excellent stability for the first three years of the observed period, but in 2018 its position deteriorated and the company passed into the zone of poor financial stability. In the last year of the observed period, the company managed to move to the zone of average financial stability by improving its business. In addition, the data from the financial reports of the observed food companies were used to calculate Kralicek Quick test indicators, the number of points assigned to each indicator and the indicators of financial stability (S1) and profitability (S2) and their arithmetic mean (T), which represents the total business result (Table 8). According to the results of the Kralicek Quick test, the company F had very good solvency during the observed period (T =1.75). As can be seen from Table 8, this company should increase its profitability, which was low during all years of the observed period. The company G was in the zone of good solvency during all years of the observed period. As was the case with the company F, this company also had difficulties with profitability, which was extremely low. The company H was in the zone of good solvency in 2015, but based on the debt repayment indicator it can be seen that the debt repayment period was far longer than 30 years, and that the company had a low profitability rate and a low share of cash flow in operating income, which was significantly lower than the recommended value of 5%. In 2016, the company remained in the zone of good solvency, with slightly better results. In 2017, the company moved into the zone of bad solvency, due to a new increase in debt repayment time, poor profitability and a low share of cash flow in total revenues. In 2018 and 2019, the company returned to the zone of good solvency due to reduced debt repayment time, which was shorter than the recommended value of 12 years, and due to an increased share of cash flow in total revenues. The company I was in the zone of excellent solvency during the observed period, and it can be concluded that this company is not at risk of bankruptcy. During the first three years of the observed period, the company J was in the zone of very good and excellent solvency, while in 2018, due to increased debt repayment time, a low profitability rate and a low share of cash flow in total revenues, the company moved to the zone of good financial stability.
In the last year of the observed period, the company reduced the time required to repay the debt, as well as the share of cash flow in total revenues, but it remained in the zone of good solvency due to its low profitability rate. The average values of the indicators for the food companies indicate that the companies in the observed sector did not have difficulties with their own financing (the share of capital in total financing sources was at least the recommended 10%). Based on the values of the second indicator, it can be observed that the companies had difficulties with debt repayment time in 2015 and 2017, which was the result of the debt repayment period of the company H, as other companies in this sector had a good debt repayment period (less than 12 years). The average values for the first two indicators show that the companies had very good and good financial stability during the observed period. The value of the profitability indicator was lower than the recommended value of 8% in all observed years except in 2016, while the share of cash flow in operating income was above the recommended value of 5% during the entire period. The average values of these two indicators show that the food companies had difficulties with profitability only in 2018. The final result (T) indicates that the observed food companies had good or very good solvency during the entire observed period. Conclusion According to the results of the applied models for assessing the companies' financial position, the following conclusions can be drawn for the observed agricultural companies: -Based on all three models, the company A is a financially stable company. The results of all three models show that the company A had difficulties with solvency in 2015, caused by negative business results, a long debt repayment period and a low share of cash flow in total revenues. In 2016, the company made a profit and improved its business results, remaining in the zone of financial stability until the end of the observed period. -According to Altman's Z'-score model, the company B was in the distress zone, i.e., in the bankruptcy zone, during the whole observed period. The results of Kralicek DF indicators show that this company was in the zone of extreme insolvency in 2015, and then in the zone of poor financial stability in 2016 and 2017, followed by moving to the zone of beginning of insolvency in 2018, and the zone of average financial stability in 2019. The results of the Quick test indicate that the company B had good financial stability in all years of the observed period, with the exception of 2018, when the debt repayment period was too long and the company had a low profitability rate and a low share of cash flow in operating income. -The results of Altman's Z'-score model suggest that the company C was in the gray zone, i.e., at risk of bankruptcy, during the whole observed period, but that there is a possibility of business improvement. On the other hand, the results of Kralicek DF indicators and the Quick test indicated that the company was in the zone of good solvency, i.e., it was not at risk of bankruptcy. -According to the results of Altman's Z'-score model, the company D operated in the gray zone in all years of the observed period, except in 2016, due to a low profit rate. Contrary to the results of the Z'-score, the results of Kralicek DF indicators and the Quick test suggest that the company was in the zone of average to good financial stability in 2015, 2018 and 2019, and that in 2016 and 2017 the company had excellent financial stability.
-According to the results of Altman's Z'-score model, the company G was in the gray zone from 2015 to 2018, moving to the safe zone in 2019. The values of DF indicators suggest that the company had poor to average financial stability during the observed period, while the results of the Quick test show that the company had good solvency in all years of the observed period. -The results of Altman's Z'-score model showed that the company H was in the gray zone, i.e., at risk of bankruptcy, during the entire observed period. The values of DF indicators varied greatly over the years. The company had poor financial stability in 2015, and average financial stability in 2016. Due to an improved business result, the company had average financial stability in 2017, followed by moderate insolvency in 2018, and good financial stability in 2019. The Quick test showed that the company had good solvency in all years of the observed period, except in 2017, when the company had poor solvency due to a long debt repayment period and low profitability. -Based on the results of Altman's Z'-score model, the company I was in the gray zone during the entire observed period. The results of DF indicators and the Quick test indicated that the company had very good and excellent solvency during the observed period and that it was not at risk of bankruptcy. -The results of Altman's Z'-score model indicate that the company J was in the distress zone in 2015, followed by moving to the gray zone in 2016 and 2017 due to an increased profit rate, and then returning to the distress zone in 2018 and 2019. According to these results, the company was at risk of bankruptcy. The results of DF indicators suggest that the company had excellent financial stability in the first three years of the observed period, followed by poor financial stability in 2018, and average financial stability in 2019. The results of the Quick test confirmed the results of DF indicators, suggesting that the company had excellent solvency during the first three years of the observed period, while in 2018 the company had good solvency due to an increased debt repayment period, low profitability and a low share of cash flow in operating income. Based on the research results, it can be concluded that each of the applied models provided different results for bankruptcy assessment of the agricultural and food companies in AP Vojvodina. Since the applications of Kralicek DF indicators and the Quick test provided similar results, it is recommended to use them in future research. By calculating the average values of the Quick test indicators for agricultural companies, it was observed that the companies from this sector had difficulties with the debt repayment period and profitability in 2015 and 2018, which led to financial instability in 2015, while during the rest of the observed period the companies had good and very good solvency. In contrast, the results of the Quick test indicated that the companies from the food sector had good or very good solvency throughout the whole observed period. According to the presented results, it can be concluded that large companies in the food sector have better business results and better financial stability compared to large companies in the agricultural sector.
6,519.8
2021-09-25T00:00:00.000
[ "Agricultural and Food Sciences", "Business", "Economics" ]
Bluetongue Virus VP1 Polymerase Activity In Vitro: Template Dependency, Dinucleotide Priming and Cap Dependency Background Bluetongue virus (BTV) protein, VP1, is known to possess an intrinsic polymerase function, unlike rotavirus VP1, which requires the capsid protein VP2 for its catalytic activity. However, compared with the polymerases of other members of the Reoviridae family, BTV VP1 has not been characterized in detail. Methods and Findings Using an in vitro polymerase assay system, we demonstrated that BTV VP1 could synthesize the ten dsRNAs simultaneously from BTV core-derived ssRNA templates in a single in vitro reaction as well as genomic dsRNA segments from rotavirus core-derived ssRNA templates that possess no sequence similarity with BTV. In contrast, dsRNAs were not synthesized from non-viral ssRNA templates by VP1, unless they were fused with specific BTV sequences. Further, we showed that synthesis of dsRNAs from capped ssRNA templates was significantly higher than that from uncapped ssRNA templates and the addition of dinucleotides enhanced activity as long as the last base of the dinucleotide complemented the 3′ -terminal nucleotide of the ssRNA template. Conclusions We showed that the polymerase activity was stimulated by two different factors: cap structure, likely due to allosteric effect, and dinucleotides due to priming. Our results also suggested the possible presence of cis-acting elements shared by ssRNAs in the members of family Reoviridae. Introduction Viral RNA-dependent RNA polymerases (RdRps) share a similar catalytic mechanism as well as a similar structure, including conserved sequence motifs and catalytic residues [1,2,3,4,5]. Despite these similarities, each RdRp has different ways to recognize RNA templates, initiate RNA synthesis, elongate the RNA chains and regulate those activities [6,7,8,9]. For segmented, double-strand RNA (dsRNA) viruses, including Reoviridae family members, RNA synthesis by RdRp occurs within a capsid and is capable of reading both single-and double-strand RNAs in association with other inner viral proteins (polymerase complex) in the absence of host factors [2,10,11,12,13,14,15,16,17]. It is believed that this specific feature of dsRNA viruses allows their RdRps to synthesize ssRNA transcripts (mRNAs) from viral genomic dsRNA segments and viral genomic dsRNAs from ssRNA transcripts without exposing their genomic dsRNA to the host innate immunity sensors [18]. Recently, it was reported that dsRNAs which were detected outside the rotavirus viroplasm seemed to activate PKR [19]. Bluetongue virus (BTV), the etiological agent of Bluetongue disease of livestock, is a member of the Orbivirus genus of the family Reoviridae. BTV particles have three consecutive layers of proteins organized into two capsids, an outer capsid of two proteins (VP2 and VP5) and an inner capsid or ''core'' composed of two major proteins, VP7 and VP3 which encloses the three minor proteins VP1, VP4 and VP6 in addition to the viral genome. The viral genome is segmented and consists of ten linear dsRNA molecules, classified from segment 1 to segment 10 in decreasing order of size (S1-S10) [20,21]. After cell entry, the outer capsid is removed to release a transcriptionally active core particle, which provides a compartment within which the ten genome segments can be repeatedly transcribed by core-associated enzymes including VP1, VP4 and VP6 [14,15]. 
Ten mRNAs are synthesized from the ten genome segments and released from the core particle into the host cell cytoplasm where they act as templates both for translation and for negative strand viral RNA synthesis to generate genomic dsRNAs [14,22]. Previously, we demonstrated that purified BTV VP1 is active as replicase synthesizing dsRNA from positive strand ssRNA in vitro in the absence of any other viral protein [23,24]. However, the catalytic activity of VP1 was not further characterized. The crystal structures of RdRp proteins have been reported for two members of the family, l3 of reovirus and VP1 of rotavirus [3,4]. Both RdRp showed a similar cage-like structure with four well-defined tunnels that allow access of the template RNAs, nucleotides and divalent cations to the internal catalytic site, as well as two distinct exit channels for template RNA and products [25]. Although the crystal structure of BTV VP1 is not known, a secondary structure-based three-dimensional model of BTV VP1 revealed structural similarity to other Reoviridae polymerases [26]. Despite this structural similarity, BTV VP1 exhibits two distinct functional features which distinguish it from rotavirus VP1 [24]. Firstly, BTV VP1 exhibits RNA replicase activity in the absence of all other virus encoded proteins, whereas rotavirus VP1 requires VP2, which forms the inner layer of the virus particle, for its activity [27,28,29]. Secondly, our initial study indicated that BTV VP1 does not require the 39 conserved region for in vitro dsRNA synthesis, unlike rotavirus VP1, which recognizes the UGUG tetranucleotide of the 39 end conserved sequence [4,24,29,30,31,32,33]. Nevertheless, during virus replication all three proteins, BTV VP1, reovirus l3 and rotavirus VP1, as well as dsRNA bacteriophage phi 6 RdRp, essentially perform the same function [3,5,24,34,35,36,37]. The crystal structure of the reovirus l3 and rotavirus VP1 also showed that there are 'cap' binding sites on the surface of the cagelike structure [3,4], suggesting that the cap appears to be the primary element by which VP1 docks and recognizes the 59 end of a plus strand [4,25]. The activity of influenza virus-associated polymerase, which is well known to have a cap-snatching mechanism, can be stimulated by cap-1 structures ( m7 GpppN m ) as well as dinucleotides, such as ApG and GpG [38,39,40,41,42,43,44,45]. The regulation of transcription by cap structures was also reported in Bunyaviridae [46]. Unlike dinucleotides, the cap-1 structure functions as an allosteric regulatory factor, rather than by priming transcription, with enhanced RNA synthesis by influenza virus-associated polymerase [40,43,44,45]. Furthermore, the cell-free system for rotavirus RNA polymerase revealed the specific priming of minus strand RNA synthesis by a dinucleotide rather than dinucleoside monophosphate [32], and formation of the initiation complex with dinucleotide and template, unlike the RdRp of dsRNA bacteriophage, phi6, which does not require a primer for initiating dsRNA synthesis but has a ''back-priming'' initiation mechanism [47,48]. These previous studies strongly suggest that the cap structure or dinucleotide may have an effect on the polymerase activity of BTV VP1. In this study, we investigated the factors that affect BTV VP1 in vitro catalytic activity including the requirement of RNA sequences that are recognized by BTV VP1, priming and other co-regulating factors. 
We first confirmed the robustness of polymerase activity by demonstrating that VP1 could synthesize all ten dsRNA segments from ten individual ssRNA segments in a single reaction. Further, we showed that the in vitro polymerase activity of VP1 is sequence-independent and could synthesize genomic dsRNAs of the other orbiviruses or rotaviruses when provided with ssRNA templates of these heterologous viruses. In contrast, dsRNAs were not synthesized from non-viral ssRNA templates by VP1, unless those were fused with some specific BTV sequences, indicating the presence of cis-acting elements shared by members of the family Reoviridae. Moreover, our data showed that the activity was enhanced in the presence of either a cap structure or a dinucleotide; although their roles appear to be distinct, one appears to be allosteric while the other is required for priming. Results Polymerase activity of BTV VP1 is highly efficient in vitro BTV VP1 has already been reported to have the ability to synthesize dsRNA from BTV T7 ssRNA templates [24]. However, it has not been previously shown that polymerase proteins of any members of the family Reoviridae can synthesize the complete set of genomic dsRNA in vitro in a single reaction mixture. Since purified BTV VP1 alone could synthesize a duplex on a single ssRNA template, we attempted to assess if VP1 could synthesize all ten dsRNAs of BTV in a single reaction. Either 1.0 µg of the complete set of in vitro synthesized ssRNAs from viral cores (core ssRNAs) or the ten in vitro generated ssRNAs from T7 plasmids (T7 ssRNAs) [49,50], each approximately 0.1 µg, were used as templates together with approximately 70 ng of purified VP1, significantly less VP1 than was used previously [24], for the in vitro polymerase assay. Both reactions were carried out at 37°C for 5 h and dsRNAs were purified from the reaction mixtures. The dsRNA profiles of each reaction, analyzed by 7% native PAGE gel, demonstrated clearly that purified VP1 synthesizes all ten dsRNAs in a single reaction mixture and could utilize efficiently both authentic core ssRNAs and the T7 ssRNAs (Fig. 1). The amount of each synthesized segment was not equal. However, when each single T7 ssRNA template was used for separate reactions, the dsRNA from each T7 ssRNA was synthesized equally well (data not shown) [24]. The data also showed that BTV VP1 could efficiently synthesize all ten dsRNAs in the absence of any other protein, whereas rotavirus VP1 required VP2 for single dsRNA synthesis, suggesting that the recombinant BTV VP1 possesses robust polymerase activity in vitro. BTV polymerase is capable of synthesizing genomic RNAs of rotavirus and other members of family Reoviridae The data above confirm that BTV VP1 is not only functional on its own and requires no other viral protein but that the activity is also highly efficient in vitro. Thus, it is possible that VP1 may be capable of using other related viral ssRNAs as templates. We selected another member of the Orbivirus genus, African Horse Sickness virus, AHSV, the genomic RNA segments of which have 5′ and 3′ conserved regions similar to those of BTV genomes (Fig. 2A). For this study, we used the complete set of ssRNAs of AHSV-4, synthesized in vitro from purified cores, the completeness of which was confirmed by reverse genetics [51]. In parallel, we also used several T7 ssRNAs (S4, S5 and S10) of AHSV-4 as templates for the in vitro polymerase assay.
Polymerase reactions were carried out with either the core ssRNA templates or the single T7 ssRNA templates as described above. When purified dsRNAs were analyzed by 7% native-PAGE gel, dsRNA segments were detectable from each template (Fig. 2B). To investigate further if VP1 could synthesize dsRNA on an ssRNA template of another member of the family, we selected rotavirus ssRNA templates derived from rotavirus double-layered particles (DLPs), which are equivalent to BTV cores, in vitro. Rotavirus DLP ssRNAs (11 segments) were generated in vitro from purified DLPs of SA11, a strain of simian rotavirus, and again used as templates as described. Surprisingly, although genome sequences of rotaviruses are different from those of BTV serotypes ( Fig. 2A), BTV VP1 could indeed synthesize dsRNAs from each rotavirus DLP ssRNAs (Fig. 2C). Similar results were obtained when rhesus rotavirus DLP ssRNAs or bovine rotavirus DLP ssRNAs were used as templates (data not shown). These results confirmed that the in vitro polymerase activity of BTV VP1 is sequence-independent at least within the family Reoviridae. Alternatively, viruses belonging to the family Reoviridae may share some motif or elements in their genomic RNAs, which may be required for polymerase activity. To verify this further, three nonviral ssRNAs, a luciferase gene ssRNA (,1800 nucleotides), a puromycin resistant gene ssRNA (PAC, 609 nucleotides) and an EGFP gene ssRNA (729 nucleotides) (Fig. 3B) were used as templates. In addition, one chimeric S9-EGFP gene (S9E 277/ 657) ( Fig. 3A and B), which was established previously as a functional segment in vivo [50], was also used as a template. None of the non-viral ssRNAs could serve as templates to generate correct duplexes (Fig. 3C, lanes 2-4). However, when the chimeric S9-EGFP gene, in which BTV RNA sequences were fused with EGFP RNA, was used as a template, a ''perfect'' chimeric dsRNA was synthesized similar to the wild-type S9 ( Fig. 3C, lanes 5 and 6). These data suggested that some sequence-based character of BTV segments, such as certain secondary structure of ssRNA, is necessary for polymerase activity of BTV. It is possible that other members of the family may also possess similar characteristics. Does Cap structure of ssRNA stimulate polymerase activity? It is known that a 59 cap structure can regulate viral polymerase activity as well as stabilize ssRNA [40,43,44,45,46]. Additionally, the crystal structure of reovirus l3 and rotavirus VP1 revealed that it has cap-binding sites on the surface of its cage-like structure, close to the entry channel for template ssRNA [3,4]. Similarly, the study on rotavirus polymerase VP1 suggests that the cap appears to be the primary element by which VP1 docks and recognizes the 59 end of a plus strand RNA [4]. Previous work has shown that T7-derived uncapped BTV ssRNAs could serve as templates for in vitro polymerase reactions but a role for the cap structure as an enhancing element was not investigated [24]. Since the positivesense RNAs of the BTV genome possess a cap structure at the 59 end, we assessed the role of the cap structure on the polymerase activity of BTV VP1. Thus, the efficiency of dsRNA synthesis was compared between capped T7 S9 ssRNA and uncapped T7 S9 ssRNA templates. Several serial dilutions (2-fold and 3-fold) of both types of templates were used for polymerase assay and the final dsRNA products were analyzed by native PAGE gel as described above (Fig. 4, upper panel). 
The radioactivity of each dsRNA band was quantified by using ImageJ software as described in Materials and Methods (Fig. 4, lower panel). The yield of dsRNAs from capped T7 S9 ssRNAs was higher than that from uncapped T7 S9 ssRNAs (Fig. 4). No significant difference was observed between the Km value of capped T7 S9 ssRNA and that of uncapped T7 S9 ssRNA (6.78 ± 1.674 vs 5.75 ± 3.287, P > 0.05), suggesting that the presence of a cap structure at the 5′ end did not affect the affinity of the ssRNA template for VP1. However, the Vmax value of capped T7 S9 ssRNA was approximately sixfold higher than that of uncapped T7 S9 ssRNA (64.37 ± 5.88 vs 11.62 ± 2.254, P < 0.01), suggesting that the presence of a cap structure at the 5′ end of the ssRNA template increased VP1 catalytic activity. Moreover, dsRNA synthesis was saturated by approximately 0.5 µg of input ssRNA template, regardless of being capped or not. In addition, although the amount of ssRNA decreased after 5 h of reaction, uncapped ssRNA still remained intact after the reaction (data not shown). These results suggested that the lower efficiency of dsRNA synthesis from uncapped ssRNAs was not due to template instability. These results support a model in which the cap structure of the template influences the catalytic activity of BTV VP1. Our previous study using a two-transfection reverse genetics (RG) system had suggested that the cap structure was not essential for genome packaging in BTV [50]. To further confirm the role of the cap structure in BTV replication, we repeated the two-transfection RG schedule using for the first transfection only the genes that are essential for synthesis of the protein components of the primary replicase complex (S1, S3, S4, S6, S8, and S9) [50]. In the second transfection, the complete set of 10 ssRNAs, all uncapped, was included as described in Materials and Methods. The lack of capped RNA in all ten segments in the second transfection, which provides the BTV genome templates, reduced the efficiency of virus recovery (Fig. 5, column 2). However, lack of a cap structure on only the T7 S9 ssRNA did not reduce virus recovery (Fig. 5, column 3). It is noteworthy that the amount of VP4 synthesised by the first transfection of S4 into BSR cells was negligible and incapable of efficiently forming a cap structure at the 5′ end of an uncapped ssRNA [50]. These results indicated that, in addition to the initiation of translation and stabilization of mRNAs transcribed from core particles, there might be a role for cap structures in the enhancement of VP1 activity during assembly of the primary replication complex.
Together with the kinetics data shown above, the fact that the presence of cap analogues did not compete for VP1 activity suggests that the 59 cap structure of template ssRNA is not the primary element by which VP1 recognizes ssRNA template but that it acts to enhance the activity. Moreover, although there is the possibility that the enhancement of polymerase activity is due to artificial direct priming by the cap analogues, the fact that addition of the cap analogues enhanced the activity, together with the in vivo data shown above, suggests that the 59 cap structure may act to stimulate activity in trans. T7 polymerase has a sequence preference, such as GpGpGp at the 59 end of nucleotides [52]. To determine whether the enhancement of polymerase activity was due to sequence preference at the 59 end of the template ssRNA, we modified the T7 S9 ssRNA by adding either guanosine (Gp-S9) or adenosine (Ap-S9) at the 59 end of T7 S9 ssRNA and tested the efficiency of the dsRNA synthesis from these templates (Fig. 7, upper panel). In parallel, unmethylated capped T7 S9 ssRNAs (Gppp-S9) and methylated capped T7 S9 ssRNAs (39-O-methyl-m7Gppp-S9) were tested for their effects on the dsRNA synthesis (Fig. 7, upper panel). The amount of dsRNA synthesized from each template was compared using quantitative autoradiography as described in Materials and Methods (Fig. 7, lower panel). The amount of dsRNA synthesized from Gp-S9 was five times more than that from uncapped T7 S9 ssRNA whereas the dsRNA synthesized from Ap-S9 did not increase (Fig. 7, columns 1, 4 and 5). Additionally, the dsRNA synthesis from unmethylated or methylated capped S9 ssRNA was higher than uncapped T7 S9 ssRNA (Fig. 7, columns 1-3). These results suggested that BTV VP1 has a sequence preference for GpG at the 59 end, which may mimic the authentic cap structure, GpppG. Dinucleotides stimulate polymerase activity by priming the initiation of dsRNA synthesis Polymerase activities of some viruses, such as Influenza virus, Bovine viral diarrhea virus, Hepatitis C virus and GB virus-B, are initiated by dinucleotides due to priming the transcription initiation [41,53,54,55]. Additionally, the study using rotavirus open cores showed that GpG, which complements the sequences of the 39 end of rotavirus G8 'plus' strand RNA, forms initiation complexes with VP1 and template ssRNA to initiate 'minus' strand synthesis [32]. To determine the effect of dinucleotide on the BTV polymerase activity, several types of dinucleotides were added to the polymerase reaction mixture. Three dinucleotides, GpG, GpA and ApG, which consisted of the same nucleotides as cap analogues, were tested with 0.5 mg of T7 S9 ssRNA. When the products were analysed the data showed clearly that GpG and ApG, which complement only the last nucleotide of the 39 end sequence of BTV S9 plus strand RNA, strongly enhanced the polymerase activity (Fig. 8, columns 2 and 4). Similarly, biotinated pApG (biotin-ApG), which was incorporated only once at the 59end, also showed the strong enhancement of polymerase activity (Fig. 9A). In contrast, GpA did not enhance the observed activity (Fig. 8, column 3). Subsequently four more dinucleotides, GpU, GpC, CpU and UpA were tested for their effects on dsRNA synthesis. 
Of these, only GpU, which complements the last two bases of the 39 end sequence of BTV S9 plus strand RNA, strongly enhanced the dsRNA synthesis whereas GpC, CpU and UpA, which fail to complement the 39 end sequence of BTV S9 plus strand, did not exhibit any detectable enhancement (Fig. 8, columns 5-7). These results suggest that dinucleotides are capable of stimulating BTV polymerase activity by priming the initiation of dsRNA synthesis, similar to that observed for rotavirus VP1 [32]. To confirm this further, biotin-ApG was used for the nonradioactive polymerase assay as described in Materials and Methods. When biotin-ApG was added to the reaction mixture containing non-radiolabeled rNTPs, the newly synthesized dsRNAs were labeled with biotin at the 59 end of the negative sense RNA and detected at the same position as S9 viral dsRNA stained with methylene blue (Fig. 9B). Interestingly, GpG always enhanced polymerase activity more than ApG. Additionally, the amount of dsRNA synthesized from Gp-S9 was at least five times more than that of uncapped T7 S9 ssRNA whereas synthesis of dsRNA from Ap-S9 did not increase (Fig. 7). Thus, while ApG could stimulate activity by priming, albeit less efficiently than GpU, GpG was superior, plausibly as a result of direct priming and allosteric stimulation, as it may mimic the effect of GpppG. Discussion We previously demonstrated that BTV VP1 could act as a replicase in the absence of any other virus or host protein [23,24], in contrast to rotavirus VP1, which failed to exhibit catalytic activity in the absence of the inner capsid protein VP2 [29]. In this study, we further confirmed the robustness and versatility of the BTV VP1 replicase activity by demonstrating that it could synthesize all ten dsRNAs simultaneously from BTV core-derived ssRNA templates in a single in vitro reaction in the absence of any other virus proteins. In addition, genomic dsRNA segments from rotavirus DLP-derived ssRNA templates that possess no sequence similarity with BTV also acted as templates, suggesting that this assay system could be an advantage for the future studies of Reoviridae RdRp. The 39 conserved sequence in rotavirus RNA segments is essential for polymerase activity [4,29,30,32,33]. In addition, although the initiation mechanism differs from the family Reoviridae, the 39 end sequence is important for the pre-initiation events of bacteriophage phi6 RdRp, which assembles into a productive binary complex with template ssRNA [56]. However, in our preliminary study BTV VP1 did not require conserved termini for its catalytic activity, suggesting that BTV VP1 has sequence-independent replicase activity [24]. In the present study, we confirmed its sequence-independent activity by demonstrating that BTV VP1 could synthesize correct dsRNA segments from viral genomic templates of other Reoviridae. AHSV has 59 and 39 conserved regions in its genome similar to BTV genomic RNA whereas rotavirus genomes, especially the terminal conserved regions, are very different from BTV. Nonetheless, BTV VP1 could synthesize dsRNAs of rotavirus with correct sizes, suggesting strongly that BTV polymerase activity was sequence-independent in vitro. The amount of each synthesized segment was not equal. This phenomenon was also observed in rotavirus open core system [57]. There may be some structural constraints in certain ssRNAs. However, the dsRNA was well synthesized from single T7 ssRNA template of AHSV S4 unlike the AHSV core ssRNA template. 
Although further investigations are required, it may be that a mixture of several ssRNA segments in a single reaction may cause RNA-RNA interaction, thereby preventing the 39 end of some ssRNA from reaching at the active site of VP1 and consequently resulting in uneven dsRNA synthesis. When non-viral ssRNAs were used as templates, BTV VP1 failed to synthesize dsRNAs of correct lane sizes. Smearing and many truncated bands were detected, suggesting premature termination as well as poor template recognition. This phenomenon is not due to the lack of cytidine at the 39 end of non-vial ssRNAs as VP1 could still synthesize dsRNA from S2 mutant, which does not possess cystidine at 39 end [24]. Several virus polymerase activities are already known to regulate their transcription by structure-based cis-acting replication elements in their genomic or subgenomic RNAs [58,59,60,61,62,63]. In rotavirus replication, the presence of cis-acting functional elements of rotavirus ssRNAs has been suggested [30,31,64,65,66]. We previously demonstrated the presence of cis-acting sequences required for replication or packaging [50]. When the functional S9-EGFP transcript, S9E277/656, was added to the reaction, dsRNA synthesis was efficient, as expected. A role for a conserved feature in the templates of the family Reoviridae, required for polymerase activity, is suggested by these results although the precise nature and location remains to be determined. The most important role of the 59 cap structure in eukaryotic mRNAs is in the initiation of translation. However, it is also known to regulate RNA synthesis in virus replication [40,43,44,45,46]. The crystal structure of reovirus l3 and rotavirus VP1 revealed a cap-binding site on the surface of their cage-like structures [3,4], suggesting that the cap was the primary element by which VP1 interacts with and recognizes the 59 end of a positive-strand. The putative model structure of BTV VP1 has a strong similarity with the RdRp structures of other members of the family [26] and our data here suggest that this similarity extends to cap recognition by, and stimulation of, replicase activity in BTV. As demonstrated by the competition assays with various cap analogues and kinetic analysis, this effect was not due to direct VP1 recognition of the 59 end of the ssRNA template that is suggested for other members of the family [3,4], but it is likely due to an allosteric modulation. Precise kinetic support for the effect of cap structure for RdRps activity may come from further investigation of BTV VP1 and other members of the RdRp family. Our data also demonstrate that the cap structure is unlikely to play any role in genome packaging of BTV, although it could be important for dsRNA synthesis during primary replication. The influence of the cap structure in virus replication is worthy of future investigation using additional reverse genetics experiments. A dinucleotide, GpG, had been shown to be incorporated into the 59-end of the newly synthesized negative sense RNA, suggesting that GpG primes dsRNA synthesis by forming the replication initiation complex with template RNA and RdRp in the early step of rotavirus replication [32]. Our results also showed that dinucleotides GpG, ApG and GpU could stimulate BTV VP1 replicase activity by priming although priming by either GpG or ApG resulted in synthesis of artificial dsRNA with the 59-overhang of negative sense RNA. 
This indicated that the last nucleotide, 'G', of the dinucleotides that complemented the 3′ end of the template sequence was sufficient to prime the activity. This feature of BTV VP1 is not shared by rotavirus VP1, as ApG failed to prime in the rotavirus open core polymerase assay [32]. There are several characteristics of BTV VP1 activity that are noteworthy. For example, dsRNA synthesis from the Gp-S9 ssRNA template was more efficient than that from the uncapped S9 ssRNA template, and GpG was a better stimulator of polymerase activity than ApG. This indicates that in addition to priming, GpG could enhance the activity by an allosteric effect, possibly due to some similarity to GpppG. Although further study is necessary to clarify the effect of GpG and ApG, and in addition whether they possess an allosteric effect, our results indicated that the replicase activity of BTV VP1 could be stimulated by dinucleotide priming at the initiation of replication. In summary, in this study we showed two stimulation factors of VP1 replicase activity, the allosteric effect of the cap structure and the priming effect of dinucleotides, as well as the possible presence of a cis-acting element shared by ssRNAs in the members of family Reoviridae. Our system for the polymerase assay could be modified for an in vitro assembly assay of the replication complex to further clarify the mechanism of BTV replication in the future. Expression and Purification of his-tagged BTV VP1 The his-tagged VP1 was expressed in the Spodoptera frugiperda (Sf9) cell line (ATCC, Rockville, MD) and purified as described previously with some modifications [24]. Briefly, Sf9 cells were infected with AcBTV10.NHis1 at a multiplicity of infection of 5.0. At 2 days post-infection, cells were harvested and lysed with VP1 lysis buffer (50 mM sodium phosphate pH 8.0, 10% (v/v) glycerol, 0.5% (v/v) Nonidet P-40) containing 1× protease inhibitors (Protease Inhibitor Cocktail Set V EDTA-Free, Calbiochem). Nuclei and cell debris were removed by centrifugation at 9,400 g for 1 hour (h) at 4°C. The cell lysate was mixed with HIS-Select Nickel Affinity Gel (Sigma) for 1 h at 4°C. After washing the affinity gel with 50 mM sodium phosphate buffer (pH 8.0) containing 10% glycerol and 20 mM imidazole, his-tagged VP1 was eluted with 50 mM sodium phosphate buffer (pH 8.0) containing 10% glycerol and 300 mM imidazole. The eluted his-tagged VP1 was diluted one in five with 50 mM Tris-HCl buffer (pH 8.0) containing 10% glycerol and 1 mM DTT and further purified by the affinity column, HiTrap Heparin HP column (GE Healthcare), using the ÄKTA system (GE Healthcare) with a linear sodium chloride gradient (100 mM-1000 mM). Polymerase assay The polymerase assay was performed as described previously with some modifications [24]. Briefly, 70 ng of his-tagged VP1 and various amounts of ssRNA templates were added to 50 µl of reaction mixture (50 mM Tris-HCl pH 7.4, 6 mM magnesium acetate, 600 µM manganese chloride, 320 µM ATP, 320 µM GTP, 320 µM UTP, 2 µM CTP, 0.2 mCi/ml [α-32P]CTP (Perkin Elmer), 2% (w/v) PEG4000, and 1.6 U RNasin plus (Promega)) in the presence or absence of 20 pmol of cap analogues (New England Biolabs) or 20 pmol of 5′-hydroxyl dinucleotides (IBA). Note that ssRNA templates were not denatured. After incubation for 5 h at 37°C, synthesized dsRNAs were purified using a standard phenol/chloroform method and analyzed using native-PAGE. Note that the dsRNA synthesis in the VP1 reactions proceeded in a linear manner for at least 5 h [24].
As markers, end-labeled dsRNA genomes purified from virus-infected cells were used as described previously [24]. Gels were dried and exposed to a storage phosphor screen (GE Healthcare). The radiolabeled dsRNAs were detected using the image analyzer, Typhoon Trio (GE Healthcare), and each radiolabeled band was quantified using ImageJ software (NIH: http://rsb.info.nih.gov/ij/). The experimental data were then fitted by a nonlinear regression method using the program Prism (GraphPad Software, USA). The kinetics parameters were determined by the Michaelis-Menten equation, V = Vmax[S]/(Km + [S]), where [S] is the substrate concentration (ng/ml); Km is the apparent Michaelis-Menten constant; and Vmax is the maximal rate attained when the enzyme active sites are saturated by substrate (quantified radioactive counts/min). Transfection of cells with BTV transcripts BSR monolayers in twelve-well plates were transfected twice with BTV mRNAs using Lipofectamine 2000 Reagent (Invitrogen) as described previously [50]. BTV transcripts were mixed in Opti-MEM (Invitrogen) containing 0.5 U/ml of RNasin plus (Promega) before being mixed with Lipofectamine 2000 Reagent in Opti-MEM containing 0.5 U/ml of RNasin plus. The transfection mixture was incubated at 20°C for 20 min and then added directly to cells. The first transfection was performed with a standard amount of 50 ng of each T7 transcript (S1, S3, S4, S6, S8 and S9), and a second transfection, again with 50 ng of each of the ten T7 transcripts, at 18 h post first transfection. At 6 h post second transfection, the culture medium was replaced with a 1.5 ml overlay consisting of DMEM, 2% FBS, and 1.5% (wt/vol) agarose type VII (Sigma) and the plates were incubated at 35°C in 5% CO2 for 3 days to allow plaques to appear. Priming assay A modified method of the polymerase assay was used for measuring dinucleotide priming. Briefly, 70 ng of His-tagged VP1 and 0.5 µg of T7 S9 ssRNA were added to 50 µl of reaction mixture (50 mM Tris-HCl pH 7.4, 6 mM magnesium acetate, 600 µM manganese chloride, 320 µM ATP, 320 µM GTP, 320 µM UTP, 320 µM CTP, 2% (w/v) PEG4000, and 1.6 U RNasin plus (Promega)) in the presence of 20 pmol of biotin-labeled ApG (IBA). After separation by native-PAGE gel, samples were transferred to a positively charged nylon membrane, Hybond-N+ (GE Healthcare), and biotin-labeled dsRNA bands were detected using Streptavidin-alkaline phosphatase conjugate (Sigma) and BCIP/NBT Alkaline Phosphatase Substrate (Sigma) by the same method as with the Biotin Luminescent Detection Kit (Roche Applied Science). As markers, purified dsRNA genomes were used and stained with methylene blue solution (0.02% (w/v) methylene blue, 0.3 M sodium acetate, pH 5.0) after transferring to the nylon membrane.
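The kinetic analysis above was performed in Prism; an equivalent nonlinear Michaelis-Menten fit can be sketched as follows. The data points are synthetic placeholders rather than values from the study, and serve only to show how Km and Vmax (with standard errors) would be estimated.

```python
# Illustrative sketch of the Michaelis-Menten fit described above (the authors
# used Prism; this is an equivalent nonlinear regression). The data points
# below are made up for demonstration only.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    # V = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

# Hypothetical template concentrations [S] and quantified band intensities (rates)
s = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
v = np.array([8.1, 14.5, 23.9, 35.0, 44.8, 52.3])

(vmax, km), cov = curve_fit(michaelis_menten, s, v, p0=[60.0, 10.0])
vmax_err, km_err = np.sqrt(np.diag(cov))
print(f"Vmax = {vmax:.2f} +/- {vmax_err:.2f}, Km = {km:.2f} +/- {km_err:.2f}")
```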
7,420
2011-11-15T00:00:00.000
[ "Biology" ]
Effect of measuring distance error on working phase voltage at zero potential live working point When live working is carried out in the distribution network, the zero-potential charging operation is often used. The safety of zero-potential live working is closely related to the measurement distance error. The influence of the measurement error of the zero-potential live working point on the operating phase voltage therefore needs to be studied. Based on the Floyd algorithm, a method for calculating the shortest electrical distance from the operating point to the substation is proposed. With the substation as the center, the branch nodes of the line are used to form the distance matrix, and the Floyd algorithm is used to calculate the shortest electrical distance from the working point to the substation. The working phase injection voltage is further calculated based on the zero-potential characteristic of the operating phase. Finally, the system model is established based on PSCAD. The simulation shows that when the measurement error of the distance between the substation and the working point is less than 20 m, the operating point voltage can still be kept below the safe voltage. Introduction With the rapid development of China's power grid technology and the continuous improvement of power supply reliability requirements, live working technology has made new developments and breakthroughs [1]. For live working in the distribution network, the operating point voltage needs to be kept below the safe voltage. However, due to the existence of the measurement distance error at the zero-potential live working point, the safety of the live working operator is seriously jeopardized [2][3]. Therefore, research on the measurement distance error of the zero-potential live working point has received more and more attention. In order to study the influence of the measurement distance error on the working phase voltage of the zero-potential live working point, the research on measuring methods for the live working point has developed rapidly in recent years. In the current studies, there are mainly three methods for measuring the distance of live working points: (1) Impedance method. The impedance of the loop is calculated from the voltage and current measured at the operating point in [4]. Since the length of the line is proportional to the impedance, the distance from the working point to the exit of the transformer can be determined. The premise is to ignore the distributed capacitance and leakage conductance of the line. The advantage of the impedance method is that it is relatively simple and reliable, but most impedance methods have accuracy problems [5]. (2) Working point analysis method. The distance to the working point is obtained by analysis and calculation using the power frequency voltage and current recorded during the charging operation in [6]. According to the required measurement information, the working point analysis method can be divided into the single-ended electric quantity method and the double-ended electric quantity method. The single-ended electric quantity method calculates the working point distance according to the voltage and current of one end and the necessary system parameters [7]; the double-ended electrical quantity method is based on the voltage and current at both ends of the line and the necessary system parameters, and the distance measurement equation is obtained by simplification and solved for the working point distance [8].
(3) Traveling wave method. According to traveling wave transmission theory, working point ranging is realized in [9]. The reliability and accuracy of the traveling wave method are theoretically independent of the line type, the grounding resistance of the working point and the systems on both sides, but are subject to many engineering factors in practice. The existing ranging methods do not consider the influence of the measurement distance error on the operating phase voltage, which seriously endangers the safety of the live working operator. In view of this, a method based on the Floyd algorithm for calculating the shortest electrical distance from the operating point to the substation is proposed in this paper. The working phase injection voltage is further calculated based on the zero-potential characteristic of the operating phase. Finally, the system model is established based on the PSCAD simulation software. The simulation shows the influence of the measurement error of the zero-potential live working point on the operating phase voltage through the operating phase voltage curve. Calculation method for shortest electrical distance from live working point of distribution network to substation In the above method, it is necessary to know the voltage drop ΔU from the substation to the operating point, which is related to the line unit impedance z and the electrical distance l from the substation to the working point. The unit impedance of the line can be obtained from the line parameters and related measuring equipment. Because the distribution network contains many complicated branch lines, there is no direct and convenient way to measure the electrical distance l. Therefore, a Floyd-algorithm-based method for calculating the shortest electrical distance from the live working point of the distribution network to the substation is proposed. The method proceeds as follows: (1) The branch nodes of the line are numbered 1, 2, ..., n with the substation as the center; the total number of nodes is n, and the node set is N; (2) The distance matrix D is formed from the adjacent distances between the nodes; (3) Define the shortest path matrix as Dmin and let Dmin = D; (4) When seeking the shortest path, determine for every node pair (i, j) and every intermediate node k whether Dmin(i, j) > Dmin(i, k) + Dmin(k, j) holds; if it does, update Dmin(i, j) = Dmin(i, k) + Dmin(k, j); (5) If the elements of matrix Dmin have not changed during the search, Dmin is the final shortest path matrix; if any element has changed, return to step (4) and repeat until no element changes any more; (6) Output the matrix Dmin. An arbitrary element (i, j) of the matrix is the shortest distance from branch node i to branch node j. By inputting this value through the remote human-machine interchange module, the central computing module can calculate the electrical distance from the working point to the outlet end of the substation, and then calculate the exact value of the injection current. The current injection module can then inject the current into the neutral point of the substation so that the potential of the operating point becomes zero. After obtaining the distances between the branch nodes of the line through the pre-project construction data and post-measurement calibration, the shortest electrical distance from any branch node to the exit end of the line can be conveniently and quickly obtained and input into the active inverter device to realize precise voltage regulation, so that the operating point potential is zero.
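A minimal sketch of the Floyd-based shortest-path step described above is given below. The node numbering and the adjacency distances are assumed illustrative values; in practice the initial distance matrix D would come from the pre-project construction data and post-measurement calibration mentioned above.

```python
# Illustrative sketch of the Floyd-based shortest electrical distance
# calculation. Node 0 stands for the substation; the adjacency distances
# (in km) are made-up values for demonstration.
import math

def floyd_shortest(dist):
    """Floyd-Warshall: returns the matrix of shortest distances between all nodes."""
    n = len(dist)
    d = [row[:] for row in dist]  # copy of the initial distance matrix D
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = math.inf
# Initial distance matrix D for a small feeder: 0 = substation, 1-3 = branch nodes
D = [
    [0.0, 1.2, INF, INF],
    [1.2, 0.0, 0.9, 2.5],
    [INF, 0.9, 0.0, 0.8],
    [INF, 2.5, 0.8, 0.0],
]
D_min = floyd_shortest(D)
# Shortest electrical distance l from the working point (node 3) to the substation
print(f"l = {D_min[0][3]:.1f} km")  # 1.2 + 0.9 + 0.8 = 2.9 km
```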
The zero-potential non-blackout operation method for distribution network To regulate the voltage of any working phase, a power supply device or a power electronic device can be used to inject a current Ii into the neutral point. Taking the C phase as an example, the vector diagram is used for analysis. The current is injected into the neutral point through the grounding transformer, and a zero-sequence voltage will appear in each phase. It will cause the neutral-point-to-ground voltage UN to shift. The power supply and load are not affected by the step-down of the C phase. Therefore, the non-effectively grounded distribution system has the natural advantage of maintaining symmetric line voltage operation during single-phase step-down, and single-phase step-down symmetrical operation is feasible. In Fig. 1, the C-phase line needs to be energized and the voltage at the operating point is to be regulated to zero. For distribution lines of 35 kV and below, the impedance between the substation and the operating point in Figure 1 can be obtained by multiplying the unit impedance z by the electrical distance l. When the working phase current is Ic, the line voltage drop ΔU can be obtained by Ohm's law as ΔU = Ic·z·l. Applying the KVL theorem to the C-phase circuit expresses the operating point voltage in terms of the source voltage EC, the neutral point voltage UN and the voltage drop ΔU, and the neutral point voltage can be obtained by using the KCL theorem. Combining these relations, the voltage at the operating point becomes 0 when the change of the zero-sequence (neutral point) voltage is adjusted to the opposite of the sum of the operating phase power supply voltage and the operating phase line voltage drop. Therefore, if and only if the zero-sequence current is injected such that the change in the zero-sequence voltage is the opposite of the sum of the operating phase power supply voltage and the operating phase line voltage drop, the operating point voltage is guaranteed to be zero, creating the condition for subsequent manual safety operations. Verification by simulation In PSCAD, a schematic diagram of the zero-potential charging operation of the 10 kV distribution network shown in Figure 1 is built. The neutral point is taken from the YnY11 grounding transformer. The power supply device is connected between the neutral point and the ground, and its output is provided by a voltage source or a current source, respectively. The total length of the line is set to 15 km, live working is carried out on phase C at point D, and point D is 3 km away from the line exit. Under the same injection mode, the calculation error of the distance between the substation and the working point is assumed to be 20 m, 110 m and 205 m. After the regulation, the grounding line is set at the working point to observe the change of the operating point voltage during the whole operation. The simulation parameters are set as shown in Table 1. The calculation error is 20 m Assume that when calculating the distance between the substation and the working point, the calculation error is 20 m, that is, the actual control point is 3.02 km away from the line exit. The voltage source is used to realize the zero-potential charging operation, and the simulation parameters are substituted into formula (5). The non-working phase (A and B phase) voltage changes are shown in Figure 2 and Figure 3, and the operating phase (C phase) voltage change is shown in Figure 4. It can be seen from Fig. 2 and Fig. 3 that the non-working phase voltage is normal before 0.4 s. After 0.4 s, the voltage source starts to inject the current to regulate the voltage, and the A and B phase voltages are higher than the phase voltage under normal operation, close to the line voltage.
At 0.8 s, the grounding line is attached at the simulated working point, which is equivalent to a single-phase grounding fault; in this case the non-working phase voltages rise to the line voltage. Figure 4 shows that before 0.4 s the effective value of the operating point voltage is 6.05 kV, and the voltage source begins injecting current at 0.4 s. Since the actual regulation point is 3.02 km from the line exit, the actual operating point D is not at zero potential; at this moment the effective value of the voltage at point D is 40.13 V. At 0.8 s the grounding line is attached at the simulated working point, and after the grounding wire is set the voltage at point D stays around 0 V.

The calculation error is 110 m

Assume that, when the distance between the substation and the working point is calculated, the calculation error is 110 m, that is, the actual regulation point is 3.11 km from the line exit. The voltage source is used to realize the zero-potential live-working operation, and the simulation parameters are substituted into formula (5); the non-working phase voltages are shown in Figure 5 and Figure 6, and the operating point (phase C) voltage change is shown in Figure 7. Figures 5 and 6 show that the non-working phase voltages are normal before 0.4 s. After 0.4 s the voltage source starts to inject the regulating current, and the A- and B-phase voltages rise above the normal phase voltage, close to the line voltage. At 0.8 s the grounding line is attached at the simulated working point, which is equivalent to a single-phase grounding fault, and the non-working phase voltages rise to the line voltage. Figure 7 shows that before 0.4 s the effective value of the operating point voltage is 6.05 kV, and the voltage source begins injecting current at 0.4 s. Since the actual regulation point is 3.11 km from the line exit, the actual operating point D is not at zero potential; at this moment the effective value of the voltage at point D is 195.48 V. At 0.8 s the grounding line is attached at the simulated working point, and after the grounding wire is set the voltage at point D stays around 0 V.

The calculation error is 205 m

Assume that, when the distance between the substation and the working point is calculated, the calculation error is 205 m, that is, the actual regulation point is 3.205 km from the line exit. The voltage source is used to realize the zero-potential live-working operation, and the simulation parameters are substituted into formula (5); the non-working phase voltages are shown in Figure 8 and Figure 9, and the operating point (phase C) voltage change is shown in Figure 10. Figures 8 and 9 show that the non-working phase voltages are normal before 0.4 s. After 0.4 s the voltage source starts to inject the regulating current, and the A- and B-phase voltages rise above the normal phase voltage, close to the line voltage. At 0.8 s the grounding line is attached at the simulated working point, which is equivalent to a single-phase grounding fault, and the non-working phase voltages rise to the line voltage. Figure 10 shows that before 0.4 s the effective value of the operating point voltage is 6.05 kV, and the voltage source begins injecting current at 0.4 s. Since the actual regulation point is 3.205 km from the line exit, the actual operating point D is not at zero potential; at this moment the effective value of the voltage at point D is 378.75 V.
At 0.8 s the grounding line is attached at the simulated working point, and after the grounding wire is set the voltage at point D stays around 0 V.

Conclusion

(1) Injecting the calculated current through the voltage source achieves precise control of the operating point voltage.
(2) When the distance between the substation and the working point is calculated with only a small error (less than 20 m), the operating point voltage can still be kept below the safe voltage.
(3) If a grounding wire is set at the site during live working, the working point remains at zero potential regardless of the distance calculation error between the substation and the working point. Therefore, to ensure the safety of the operator during live working, the grounding wire should be set locally.
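To complement the simulation results above, the following is a minimal numerical sketch of the zero-potential condition derived earlier and of the residual operating-point voltage left by a distance calculation error. It uses a deliberately simplified single-feeder model; the parameter values (source voltage, unit impedance, load current) and the function names are illustrative assumptions, not the values in Table 1 or the PSCAD model.

```python
# Sketch of the zero-potential condition: the injected current must shift the
# zero-sequence (neutral-point) voltage by -(E_c + I_c * z * l).
# All parameter values below are illustrative assumptions.
import cmath

E_c = 5774.0                       # phase-C source voltage (V), roughly 10 kV / sqrt(3)
z = complex(0.27, 0.35)            # line unit impedance (ohm/km), assumed
I_c = cmath.rect(30.0, -0.3)       # working-phase current phasor (A), assumed

def required_neutral_shift(l_km):
    """Change of zero-sequence voltage that makes the operating point potential zero."""
    U_drop = I_c * z * l_km        # Ohm's law: voltage drop substation -> operating point
    return -(E_c + U_drop)

def residual_voltage(l_true_km, l_calc_km):
    """Operating-point voltage left over when the distance is calculated with an error."""
    dU_N = required_neutral_shift(l_calc_km)    # regulation based on the calculated distance
    U_point = E_c + I_c * z * l_true_km + dU_N  # actual operating-point voltage
    return abs(U_point)

print(residual_voltage(3.02, 3.00))   # residual voltage (V) left by a 20 m calculation error
```

In this simplified model the residual grows linearly with the distance error, which is the trend seen in the simulated values for 20 m, 110 m and 205 m above.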
3,495.8
2020-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Application of the method of moments for doublet and triplet analysis in the radiation spectra When processing spectra, it is usually sufficient to determine only the positions of the peaks and their areas. To use the method of moments for doublet peaks analysis, it is necessary to first construct (calibrate) the dependence of the second and third moments for single peaks on the energy of the corresponding radiation. After such calibration overlapping peaks can be separated. To separate peaks in doublet using the method of moments, it is necessary to solve a quadratic equation. To separate peaks in triplet it is necessary to have an appropriate calibration curves already up to the fifth moment inclusive and solve a cubic equation. Examples of the separation of overlapping peaks in the analysis of gamma-ray spectra are presented. Introduction In the currently used programs for spectra analysis, the shape of peaks is described either by a model function or using a table.The deviation of the experimental peak form from the model one leads to the appearance of systematic errors in the analysis of overlapping peaks.When tabulating the shape, it is difficult to take into account the dependence of the peak shape on energy.For spectra analysis, it is usually sufficient to determine only the positions of the peaks and their areas, but when using parametric methods, it is necessary to determine the full set of parameters describing the shape of the peak.Therefore, methods that do not use an explicit description of the shape of peaks are interesting.Such methods are called distribution-free or nonparametric.The moments of distribution carry sufficiently complete information about the shape of the instrument lines and they can be used both for the analytical description of the shape of the peaks when using the minimum χ 2 -method, and for the direct separation of overlapping peaks by solving simple algebraic equations without using any model assumptions about their shape [1].If the dependence of the moments on the energy obtained after determination of the moments for single peaks is known, overlapping peaks can be separated.To separate doublets using the method of moments, it is necessary to solve a quadratic equation, to separate triplets -a cubic equation.To use the method of moments for peak separation in doublet, it is necessary to first construct calibration curves of the dependence of the second and third moments for single peaks on the energy of the corresponding radiation.To separate peaks in triplet it is necessary to have appropriate calibration curves already up to including the fifth moment.The report provides examples of the separation of overlapping peaks in the analysis of gamma-ray spectra.Methods for estimating the errors of the results and the applicability of the method of moments for separating overlapping peaks are discussed.The position of the peak is usually associated with the position of the maximum of the model function f(i), but since it is necessary to find estimates that do not use parametric function for the peak shape (figure 1) in the experimental spectrum S(i), it is necessary to use another definition of this value.For the spectra we are considering S(i) as a discrete random variable has a Poisson distribution in i channal.It is natural to associate the mathematical expectation M{S(i)} with the energy of the peak, and use the sample average as a statistical estimate of the peak position.Then for the position P and the area B of a single peak (if there is no 
background) we have B = Σ_i S(i) and P = (1/B) Σ_i i·S(i). The line at the top indicates estimates of random variables. The variance of the estimate of a value x will be denoted by D{x}, and √D{x} ≡ Δx. Then ΔB = √B, since D{B} = B. The position P, according to the definition given above, is the first sample moment, hence [2,3] D{P} = M₂/B, where M₂ is the second central moment of the S(i) distribution. For the S(i) distribution it is also possible to estimate the higher central moments Mₙ and their dispersions D{Mₙ} according to the formulas in [2,3]. The higher moments are associated with the shape of the peak, and this relationship can be expressed analytically by applying decompositions used in the theory of statistics [2-4]. The method of moments is therefore quite suitable for describing the shape of an instrument line in different spectra. If the dependence of the moments on the energy, obtained after calculating the moments for single peaks, is known, overlapping peaks can be separated by varying only the positions and areas of the components. When moments are used, the finite channel width must be taken into account: the simple moments of the density S(i) grouped by channels, µₙ′, and the moments of the initial density are connected by simple relations [2-4]; for the second moments one has µ₂ = µ₂′ − h²/12, where h is the channel width.

Figure 2 shows an example of the dependence of the central moments on the energy for a γ-ray spectrum measured with a semiconductor HPGe detector. It can be seen from figure 2 that the moments µ₂ and µ₃ for single peaks change quite slowly with increasing γ-ray energy. The points marked with stars correspond to a section of the spectrum containing two overlapping peaks. Obviously, the deviations Δ₂ and Δ₃ are related to the areas and positions of these peaks, and using this relationship it is possible to separate overlapping peaks without minimizing the χ²-function [1].

Figure 2. The dependence of the moments µ₂ and µ₃ on µ₁ for single peaks, and Δᵢ = Mᵢ − µᵢ, where Mᵢ is the measured moment for the doublet [1]. The value of µ₁ is related to the γ-ray energy by a calibration procedure [5].

Determination of the position and area of peaks for a doublet

The measured, or instrumental, spectrum is the distribution density ρ(V) of the amplitudes of the pulses V caused by the measured radiation in the detector. The true spectrum, or probability density of the detected radiation with energy E, is determined by calibrating the detector with radiation of known energy, i.e. by determining the instrumental response function [1].
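Since the whole method rests on the sample-moment estimates defined above, a minimal sketch of how the peak area, position and central moments could be computed from a selected spectrum region is given here. The simulated Gaussian peak and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sample_moments(counts, max_order=3):
    """Area B, position P and central moments M_2..M_max_order of a peak region.

    counts[i] is the number of counts in channel i of the selected region
    (background assumed already subtracted).
    """
    channels = np.arange(len(counts), dtype=float)
    B = counts.sum()                        # peak area
    P = (channels * counts).sum() / B       # first sample moment = peak position
    moments = {}
    for n in range(2, max_order + 1):
        moments[n] = ((channels - P) ** n * counts).sum() / B
    return B, P, moments

# Illustrative single peak: Gaussian of width sigma = 2 channels with Poisson noise.
rng = np.random.default_rng(0)
x = np.arange(40)
expected = 5000.0 * np.exp(-0.5 * ((x - 20.0) / 2.0) ** 2) / (2.0 * np.sqrt(2 * np.pi))
spectrum = rng.poisson(expected).astype(float)

B, P, M = sample_moments(spectrum)
print(B, P, M[2], M[3])   # area, position, second and third central moments
```

Repeating this for single peaks of known energy yields the calibration curves of µ₂ and µ₃ versus energy that the separation procedure requires.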
Consider the case of two overlapping peaks. Assume for the doublet that M₁ = 0, that is, the positions of the doublet components are counted from the centre of gravity of the multiplet. In this case the measured spectrum, assuming that the shape of the peaks in the multiplet is the same, can be written as a weighted sum of two identically shaped components shifted to the positions b₁ and b₂. We determine the values Δᵢ = Mᵢ − µᵢ for the analysed doublet (figure 2), where Mᵢ is the corresponding measured moment for the doublet and µᵢ is the moment known from calibration experiments for single peaks. We introduce the symmetric polynomials σ₁ and σ₂ of the component positions. Using these expressions and the definition of the moments for the doublet, we obtain the equations for determining σ₁ and σ₂; note that to separate the peaks of the doublet it is necessary to have a calibration curve (figure 2) up to and including the third moment. Knowing σ₁ and σ₂, we can determine the areas B_k and positions P_k of the doublet components. To determine the errors, neglecting correlations, we introduce the βᵢ values and differentiate (5); after that, using standard algorithms [2-4], we obtain the error estimates. The table and figure 3 show an example of the separation of two overlapping peaks according to the above formulas. The doublet was obtained by superposition of two single lines; the shape of the line is shown in figure 1. After the superposition, the areas and positions of the single lines forming the doublet were determined by formulas (5).

Determination of the position and area of peaks for a triplet

The measured spectrum for a triplet, again assuming that the shape of the peaks in the multiplet is the same, can be written as a weighted sum of three identically shaped components. Using the expressions for calculating the moments Mᵢ of the triplet and the values µᵢ known from the calibration curve for single peaks, we obtain the corresponding relations. Introducing the symmetric polynomials and substituting (13) into (11) and (12), we obtain the equations for calculating σ₁, σ₂ and σ₃. By introducing the variables u = b₂ + b₃ and v = b₂b₃, we obtain a cubic equation for determining the value of u and the corresponding value of v. Since the original equations are symmetric, it is sufficient to find one root of the cubic equation, which can be done numerically. Once the solution of equation (16) is found, the desired parameters b₁, b₂ and b₃ can be determined, and the aᵢ parameters are found as solutions of a system of linear equations. The areas B_k and positions P_k of the triplet components are then determined using formula (6). It should be noted that, in order to analyse a triplet, it is necessary to have appropriate calibration curves (of the type shown in figure 2) up to and including the fifth moment.
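Returning to the doublet case described above, the sketch below implements one standard form of the moment relations consistent with that description (component positions counted from the doublet centroid, Δ₂ and Δ₃ taken relative to the single-peak calibration): σ₁ = b₁ + b₂ = Δ₃/Δ₂ and σ₂ = b₁b₂ = −Δ₂, so b₁ and b₂ are the roots of x² − σ₁x + σ₂ = 0, and the relative areas follow from a₁ + a₂ = 1 and a₁b₁ + a₂b₂ = 0. The exact notation of equations (5) and (6) in the paper may differ; this is an illustration, not the authors' code.

```python
import numpy as np

def separate_doublet(B, P_centroid, delta2, delta3):
    """Split a doublet into two components using second/third moment excesses.

    B           total doublet area
    P_centroid  first sample moment (centre of gravity) of the doublet
    delta2      M2 - mu2 (measured minus calibrated second central moment)
    delta3      M3 - mu3 (measured minus calibrated third central moment)
    Returns (area, position) for each component.
    """
    sigma1 = delta3 / delta2          # b1 + b2
    sigma2 = -delta2                  # b1 * b2
    b1, b2 = np.roots([1.0, -sigma1, sigma2])
    a1 = b2 / (b2 - b1)               # from a1 + a2 = 1 and a1*b1 + a2*b2 = 0
    a2 = 1.0 - a1
    return [(a1 * B, P_centroid + b1), (a2 * B, P_centroid + b2)]

# Hypothetical doublet: components with areas 600 and 400 at offsets -2.0 and
# +3.0 channels from the common centroid give delta2 = 6.0 and delta3 = 6.0.
print(separate_doublet(B=1000.0, P_centroid=50.0, delta2=6.0, delta3=6.0))
# -> approximately [(600.0, 48.0), (400.0, 53.0)], up to root ordering
```

The triplet case proceeds analogously but requires the moment excesses up to Δ₅ and a numerical root of the cubic equation mentioned above.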
The errors of the obtained values can be estimated in the standard way [1-4]. If the contributions of the fourth and fifth moments to the error can be neglected, the corresponding error-estimation formulas are simplified [1].

Discussion and Conclusions

The examples given show that sample moments can be used to separate overlapping peaks by solving simple algebraic equations, without any model assumptions about the peak shape. To use the method of moments for doublet separation, it is necessary first to construct the calibration curves for the second and third moments; for triplet separation the calibration curves must be obtained for the moments up to the fifth. The information needed to construct such curves, i.e. the dependence of the sample moments on the radiation energy, can be obtained from the analysis of single peaks of known energy, and proper standard radiation sources must be used for the calibration procedure [5]. Since the method is nonparametric, its application is very attractive for spectra in which the shape of single peaks cannot easily be described by analytical functions. Such a situation may arise, for example, when analysing the alpha-particle spectra [6] of radioactive nuclei implanted into the detector. Using the method of moments to separate doublets and triplets in this case avoids the problems associated with describing the line shape at the tails of both single peaks and multiplets. When processing spectra by the method of moments, it is important to ensure consistent marking of all intervals (the selection of the beginning and end of the multiplets). Where increased accuracy is required, the peak boundaries can be adjusted using estimates of the first two or three sample moments obtained after a preliminary rough marking performed, for example, with an automatic peak-search program. Errors associated with the marking and the background should be included in the variances of the sample-moment estimates according to the selected method of marking the spectrum.

Figure 1. The shape of the peak (background included) in the γ-ray spectrum.
Figure 3. Doublet obtained by overlapping of the two single peaks without background.
Table. Example of analysis of two overlapping peaks.
2,594.2
2024-02-01T00:00:00.000
[ "Physics" ]
Point Cloud Classification Algorithm Based on the Fusion of the Local Binary Pattern Features and Structural Features of Voxels : Point cloud classification is a key technology for point cloud applications and point cloud feature extraction is a key step towards achieving point cloud classification. Although there are many point cloud feature extraction and classification methods, and the acquisition of colored point cloud data has become easier in recent years, most point cloud processing algorithms do not consider the color information associated with the point cloud or do not make full use of the color information. Therefore, we propose a voxel-based local feature descriptor according to the voxel-based local binary pattern (VLBP) and fuses point cloud RGB information and geometric structure features using a random forest classifier to build a color point cloud classification algorithm. The proposed algorithm voxelizes the point cloud; divides the neighborhood of the center point into cubes (i.e., multiple adjacent sub-voxels); compares the gray information of the voxel center and adjacent sub-voxels; performs voxel global thresholding to convert it into a binary code; and uses a local difference sign–magnitude transform (LDSMT) to decompose the local difference of an entire voxel into two complementary components of sign and magnitude. Then, the VLBP feature of each point is extracted. To obtain more structural information about the point cloud, the proposed method extracts the normal vector of each point and the corresponding fast point feature histogram (FPFH) based on the normal vector. Finally, the geometric mechanism features (normal vector and FPFH) and color features (RGB and VLBP features) of the point cloud are fused, and a random forest classifier is used to classify the color laser point cloud. The experimental results show that the proposed algorithm can achieve effective point cloud classification for point cloud data from different indoor and outdoor scenes, and the proposed VLBP features can improve the accuracy of point cloud classification. Introduction In recent years, with the rapid development of three-dimensional (3D) sensors, point cloud data have been widely used in fields such as unmanned driving, measurement, remote sensing, smart agriculture, "new infrastructure", and virtual reality. In recent years, acquisition systems that can acquire point cloud data with color information, such as depth cameras and backpack/handheld mobile surveying and mapping systems, have attracted increasing attention and been widely used. The feature extraction and classification of point clouds are the key steps in point cloud data application. In the process of constructing 3D semantic maps and performing feature extraction based on point clouds, the classification accuracy of point clouds directly affects the application effects of a method. In point cloud segmentation, classification, registration, and surface reconstruction algorithms' processing effects mostly rely on the feature extraction ability of the method applied. The accuracy of point cloud classification is closely related to the effectiveness of features. Therefore, research on point cloud feature extraction and classification is of great significance. 
For the feature extraction of point clouds, researchers proposed a large number of feature descriptors including, for example, normal vector, elevation feature [1], spin image [2], covariance eigenvalue feature [3,4], global feature viewpoint feature histogram (view feature histogram, VFH) [5], clustered view feature histogram (CVFH) [6], and fast point feature histogram [7] (fast point feature histograms, FPFH). However, the abovementioned features are all extracted from the geometric structure information of the point cloud and lack the use of the color information of the color point cloud. Considering the point cloud data acquired in recent years usually have color information and the geometric structure characteristics of the point cloud cannot fully describe the object, it is necessary to combine the color information and the geometric structure of the point cloud for analysis. For example, for a flat point cloud area, the geometric features of the surface may be consistent. If there are important pattern marks on the plane, the geometric structure features cannot find the distinction. In contrast, the color information and texture features can capture the variation on this plane. In addition, the color point cloud is obtained by fusing the data collected by the camera and the LiDAR sensor. Considering the fusion level is low, the original data collected by the sensor are retained to the greatest extent [8]. Achanta et al. [9] used the SLIC algorithm to combine the color similarity with the spatial neighbors of the image plane to use the color information of the reasonable color point cloud. This method uses the lab color space to represent the color features of the point cloud, combined with the pixel position, to form a five-dimensional feature vector. The feature vector uses Euclidean distance to measure the similarity between three-dimensional points. The effect of this method is unstable and sensitive to noise. For the classification of point clouds, the traditional method is to determine the category of each point by defining relevant judgment rules. For example, by assuming the height of the ground point in the neighboring area is the smallest as the judgment rule, all ground points are marked. However, in many cases, it is difficult to design a robust decision rule and the effect is not ideal. To solve this problem, methods based on machine learning are widely used for point cloud classification. The basic idea of this kind of method is to perform feature extraction on the point cloud, use the point cloud features of the training set to train a classifier, and then use the classifier to classify the point cloud to be classified. Currently, commonly used classifiers are Random Forest (RF) [10], MLP [11], SVM [12], and AdaBoost [13]. Guan [14] and others applied the random forest classifier to the feature selection of point cloud data and achieved good classification results. This also shows that the use of random forest classifiers can improve the performance of data classification [15]. Mei et al. [16] used the RGB value of each point, the normal vector of each point extracted from the neighboring points in the radius neighborhood, the spin image and the elevation feature, the boundary and label constraints for feature learning, and finally a linear classifier to classify each point. Although this type of algorithm directly integrates the three-channel value of the color into the feature of the point, it does not make full use of the color information. 
To leverage point clouds' color information, this paper draws on references to the local binary pattern feature description operator (local binary pattern, LBP) [17,18] in a twodimensional image. The grayscale does not change with any single transformation. The degree scale has good robustness and no parameters (non-parametric). There is no need to pre-assume its distribution in the application process and then extend it to the feature description of point clouds. For a two-dimensional image, a fixed-position neighborhood can be used to construct LBP features. However, the point cloud data are irregular and disordered, and the fixed neighborhood position of each point cannot be directly obtained. Therefore, we propose a voxel-based local binary pattern feature, that is, the VLBP feature. In addition, to achieve the effective classification of point cloud data with color information, we propose a point cloud classification algorithm based on the fusion of voxel local binary pattern features and geometric structure features in which the random forest classifier with an excellent classification performance is selected. The process of the classification algorithm is shown in Figure 1. As shown in Figure 1, the proposed algorithm first voxelizes the input point cloud data so that the neighborhood of each point is regularized. Then, the gray-level mean and gray-level variance features of each cube for each voxel constructed by a single point are extracted, and the gray information between the center of the voxel and the neighboring sub-voxels is compared to obtain the local difference. Then, local difference sign-magnitude transform (LDSMT) is performed on the local difference. In this way, the two complementary components of the sign and magnitude of the local domain are obtained and converted into binary codes through the global thresholding of voxels. Then, the gray level at the center of the voxel is compared with the gray average of the entire voxel to obtain the global contrast. Next, the extracted VLBP features are normalized, fused with the original color RGB of the point cloud to form the color feature of the point cloud, and then fused with the geometric structure feature (normal vector and FPFH feature) of the point cloud. Finally, based on the fusion features, a random forest classifier is used to classify the point cloud. We conducted classification experiments on point clouds of different indoor and outdoor scenes to verify the effectiveness of the proposed algorithm. The main contributions of this article are as follows: (1) A point cloud color feature descriptor is proposed, that is, a local binary pattern feature based on voxels (VLBP). This feature describes the local color texture information of the point cloud and has the characteristic that the grayscale does not change with any single transformation. The expression of this grayscale texture information can effectively improve the classification effect of the point cloud. (2) A point cloud classification algorithm based on the fusion of point cloud color and geometric structure is proposed. The proposed algorithm uses the RGB information of the color and the VLBP feature proposed in this paper as the color feature of the point cloud, merges it into the geometric structure feature of the point cloud to construct a more discriminative and robust point cloud fusion feature, and then uses a random forest classifier to effectively classify the point clouds. 
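As background for the voxel extension described above, the following is a minimal sketch of the classic 2D LBP operator that VLBP generalises, using a simple 3×3 neighbourhood rather than the circular (P, R) sampling; the image array and the function name are illustrative assumptions.

```python
import numpy as np

def lbp_3x3(image, r, c):
    """Classic 8-neighbour LBP code of pixel (r, c) in a 2D grayscale image.

    Each neighbour is compared with the centre pixel and the sign bits are
    packed clockwise into an 8-bit code (the 'sign' component that VLBP
    extends to voxel neighbourhoods).
    """
    center = image[r, c]
    # 8 neighbours, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr, c + dc] >= center:
            code |= 1 << bit
    return code

img = np.array([[5, 9, 1],
                [3, 6, 7],
                [2, 6, 8]])
print(lbp_3x3(img, 1, 1))   # LBP code of the centre pixel
```

Because point clouds are irregular, the fixed pixel neighbourhood used here is replaced in the proposed method by the sub-voxels of a voxel built around each point, as described in the following sections.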
Related Work In this paper, the local feature VLBP of point cloud scene data based on voxel extraction is an extended texture descriptor based on the complete modeling of local binary patterns (CLBP) [19][20][21]. CLBP is a related complete binary mode scheme for texture classification that can well describe the local spatial structure of image textures. The local region is represented by its central pixel and local differential sign-amplitude transformation. The center pixels represent the gray level of the image. Through global thresholding, they are converted into binary codes, namely, CLBP_Center (CLBP_C). The local difference sign-amplitude transformation decomposes the local difference of the image into two complementary components: sign and amplitude. As shown in Figure 2, given that central pixel g c and its radius R s are circular and have evenly spaced P neighbors of g 0 , g 1 , g 2 , . . . , and g p−1 , we can simply calculate the d p difference between g c and g p . As shown in Equation (1), d p can be further broken down into two parts: is the sign and m p is the amplitude of d p . d p is the difference between the neighboring pixel and the center pixel. It cannot be used as a feature descriptor directly because the difference is sensitive to illumination, rotation, and noise. However, these effects can be overcome by dividing the difference into a sign component and an amplitude component. The two are multiplied and recorded as positive and negative binary mode CLBP_Sign (CLBP_S), and amplitude binary mode CLBP_Magnitude (CLBP_M) by Equations (2) and (3). where c m is the mean value of m p in the entire image. where t(x, c) = 1, x ≥ c 0, x < c , c i is the average gray level of the entire image and binary encoding is performed by comparing the size of the center pixel and the pixel value of the entire image. For a two-dimensional plane image, CLBP features can be constructed by setting the neighborhood of a fixed position. Then, for 3D irregular and disordered point cloud data, we can regularize the point cloud field by voxel and then extract the local binary pattern feature VLBP of the point cloud. Voxel-Based Shading Point Cloud Feature Descriptor (VLBP) The extraction of VLBP feature descriptors is divided into three steps: voxelization, VLBP feature descriptor construction, and voxel histogram F VLBP feature vector, as follows. Voxelization Given a point p(x, y, z, R, G, B) in the point cloud, we take point p as the center, use kdtree [22] to search for the radius neighbors, and find all points within radius r. All points obtained by the nearest-neighbor search are N and these N points form a cube V(r), that is, voxel V. Then, voxel V(r) is divided into n × n × n cubes, that is, sub-voxels. These n × n × n sub-voxels are on the sides of the x, y, and z coordinate axes. The lengths are all equal. The side lengths of voxel V(r) in the x, y, and z directions are dx, dy, and dz, respectively. As shown in Equation (5), the side lengths of each sub-voxel in the x, y, and z directions are L x , L y , and L z , respectively: Traverse all points in voxel V(r) to determine to which sub-voxel they belong. Taking a point q(x , y , z , R , G , B ) in V(r) as an example, find which sub-voxel this point belongs to. First, number n × n × n as sub-voxels. Each sub-voxel is represented by coordinates (a, b, c), where a, b, c [1, n]. 
The coordinates of the sub-voxel where point q is located (a 0 , b 0 , c 0 ) are shown in Equation (6): where x 0 , y 0 , and z 0 are the minimum values of the N points in voxel V in the x, y, and z-axes direction. The number of points contained in the i voxel is K i and the points in the i-th voxel are expressed as P i = {p 1 , p 2 ..., p K i }. If the number of points in the i voxel is K i > 0, then the R i , G i , and B i values of the i-th small block are calculated by Equation (7). where r j , g j , and b j are the R, G, and B values of the j-th point, respectively. If the number of points of the i-th small block is The gray value Vg c of the voxel center is calculated by Equation (8). The average gray value Vgi of the i-th sub-voxel in voxel V is calculated by Equation (9). The gray value of the center point p of the current voxel is Vg c , the gray values of adjacent sub-voxels in voxel n × n × n − 1 are Vg i , and i represents the i-th small block. The abovementioned voxelization on all points in the point cloud are performed and the gray values of the current voxel center point p and its adjacent voxels can be calculated. Then, the VLBP feature descriptor of voxels is constructed. VLBP Descriptor Construction The gray value of the current point (the sub-voxel where the current point is located) p is g c and the gray values of the (n × n × n − 1) neighborhood blocks of this point in the voxel are g i , representing the i-th small block, i = 0,1...(n × n × n − 1) − 1. The grayscale difference between the i-th small block and the small block where the current point is located is d i = g i − g c . Using Equation (1) Among them are In VLBP, a local area is represented by its central pixel and local differential signamplitude transformation (LDSMT). The gray level of the central voxel is simply encoded with a binary code after global thresholding. LDSMT decomposes the local structure within the voxel into two complementary components: difference sign and difference amplitude. The three-dimensional features of VLBP are VLBP_S, VLBP_M, VLBP_C, and I = (n × n × n − 1). These three-dimensional features are defined as Equation (11). i=0 m i /I; and c n is the average gray value of n × n × n sub-voxels in the entire voxel, that is, c n = (∑ I−1 i=0 m i + g c )/(n × n × n). Scale Invariance To make VLBP descriptors scale-invariant, two different scales are selected for voxelization and then the VLBP descriptor is constructed. By changing the size of r, voxels of different scales are obtained. The features of different scales obtained are directly output in order, which can make the features more robust [20]. Rotation Invariance When constructing voxels, this article numbered n × n × n sub-voxels. They were numbered based on XYZ, XZY, YXZ, YZX, ZXY, and ZYX as 6 different coordinate directions and then these numbers were placed in ascending order. Arrange and extract the values of VLBP_S and VLBP_M in the voxel, in turn, and then encode it, that is, construct a binary code structure similar to 1001111000 . . . 00. Then, convert the binary arrangement to the decimal number to find the smallest one. The 0-1 binary obtained after the rotation sorting is performed according to the finally obtained 01 binary in the definition formulas of CLBP_S I and CLBP_M I . F VLBP Feature Vector of Voxel Histogram The construction steps of the LBP feature vector of the voxel histogram are as follows: 1. 
Given search radius r2, take point p as the search center point and use kdtree as the search radius to perform a second radius near-neighbor search to find all points within its radius, r2. 2. Find the VLBP_S and VLBP_M values corresponding to all points obtained by this radius neighbor search, divide 0-2 I into the T ZN detection interval, and then calculate the VLBP_S and VLBP_M values corresponding to each point. Here, T ZN is a threshold to divide the feature value interval for the histogram statistics. The following also pertain to the test interval: 3. After traversing all points, record the number of times in the T ZN interval and create two new T ZN -dimensional features: VLBP_S(r) and VLBP_M(r). Replace VLBP_S and VLBP_M with these two new features. 4. The final VLBP characteristics of each voxel are: Among them, VRGBg is the average value of the gray value of the R, G, and B structure of the N points in voxel V. VarRGBg is the variance of the gray value constructed by R, G, and B of N points in voxel V. 5. Traverse all points in the point cloud and generate a VLBP feature for each point; change radius r of the first-radius nearest-neighbor search; and generate a set of VLBP features again and continue writing. Repeat this step until the radius set by the scale invariance is reached. Point Cloud Classification Based on Multifeature Fusion and Random Forest Point cloud classification based on multifeature fusion and random forest is divided into two steps (multifeature fusion and the use of random forest to classify point clouds) as follows: Multifeature Fusion To improve the characterization ability and robustness of point cloud features, this paper fuses the point cloud color RGB, normal vector feature, and FPFH feature with the VLBP feature constructed in this paper. The feature corresponding to each point in the point cloud after fusion is F = [F VLBP , F RGB , F Normal , F FPFH ]. Among them, the ten-dimensional VLBP feature F VLBP constructed in this paper and the 3-dimensional original point cloud color RGB constitute the color feature of the point cloud. The three-dimensional normal feature F Normal and the 33-dimensional fast feature histogram feature F FPFH constitute the geometric structure feature of the point cloud. Among them, the normal vector feature F Normal estimates the neighborhood plane of each point in the original point cloud and the feature vector corresponding to the smallest feature value obtained by PCA is regarded as the normal feature of the point. F FPFH is used to describe the relationship between the adjacent points of the point cloud, has the advantages of low computational complexity and strong robustness, and has better results when used for point cloud classification. After multifeature fusion, feature F of each point in the original point cloud has 49 dimensions (the ten-dimensional features are VLBP features obtained at two different scales). Point Cloud Classification The proposed method adopts a classification strategy based on a single point of the point cloud; that is, after the fusion feature construction of each point of the point cloud is completed, each point in the point cloud is classified by a machine learning classifier. The random forest classifier is suitable for multi-classification problems, can handle high-dimensional input features, has good classification performance in the point cloud classification of indoor and outdoor scenes, and can achieve high classification accuracy. 
Therefore, we choose random forest classifiers to perform point cloud classification. Among them, the random forest constructed in this paper has 250 trees and the maximum tree depth is 20. Analysis of Experimental Results To evaluate the effectiveness of the proposed algorithm, we conduct experiments on three mobile laser scanning (MLS) urban point cloud scenes and two indoor point cloud scenes, and perform qualitative and quantitative analyses. Experimental Data In this paper, five different point cloud scenarios are selected to verify the proposed algorithm and the point cloud data all contain x, y, z, R, G, and B information, i.e., point coordinates and color information. Collection equipment of a point cloud scene is shown in Figure 3. Scene 1, Scene 2, and Scene 3 are outdoor scene colored point cloud data collected by advanced backpack mobile surveying and mapping robots provided in CSPC-Dataset [23]. The robot collects the data of these scenes by laser sensors and panoramic cameras, and the average point cloud density is about 50~60/m 2 . After the refined modeling and coloring of point clouds, the complete colored point cloud of the scene can be produced. The dataset contains both larger objects (e.g., buildings, ground, and trees) and smaller objects (e.g., cars). Scene 4 and Scene 5 are indoor scene point clouds, which are chosen from the S3DIS dataset [24]. The S3DIS dataset is an indoor point cloud dataset produced and developed by Stanford University et al. with a Matterport camera (combined with three structured light sensors with different spacings). The dense point cloud obtained in this dataset has high precision and uniform color distribution. The Scene 4 and Scene 5 produced in this paper include four types of objects, i.e., chair, table, ground, and potted plants. The training set and test set of the five scenarios are shown in Figures 4-8. Table 1 shows the distribution of the exact number of points for training and the test sets for each scene. The proposed algorithm is implemented on PCL1.8.1 (C++), python3.7.6, and Cloud-Compare2.11.3. All experiments in this article are run on a computer with an AMD Ryzen 5 3600 6-core processor at 3.59 GHz with 16 GB of RAM. The average training time of five scenes is about 6.48 min, and the average testing time of five scenes is 3.96 min. To evaluate the performance of the different algorithms more comprehensively and effectively, we uses Precision/Recall/F1-scores to evaluate the classification effect of each category, and uses Overall Accuracy (OA) and Kappa to evaluate the overall classification result of each scene. These evaluation indicators reflect the classification effects of different attributes. The higher the values of these classification indicators, the better the classification effect, as shown in Table 2 from which T p , F n , F p , and T n represent the number of true positives, false negatives, false positives, and true negatives. Precision measures the ability of a classifier to not mistakenly divide real negative samples into positive ones. The calculation method is Equation (13). Equation (14) is the calculation method of Recall, Recall measures the ability of a classifier to find all positive samples. In order to comprehensively evaluate the classification ability of a classifier for each category, the F1 score is usually used to measure the whole classifier. The calculation method is Equation (15). The experiment point cloud dataset has multiple category labels. 
Therefore, to comprehensively evaluate the classification effect of the algorithms on all categories of the whole point cloud, we use OA and Kappa to evaluate the overall classification performance of different algorithms. Each evaluation metric is calculated according to Equations (16)- (18). where C is a L × L classification confusion matrix; L is the number of an object category; C ij is the true label of the i-th class classified to the j-th class; and Q is the number of all points. Point Cloud Classification Effect To evaluate the effect of the proposed algorithm and verify the influence of different classifiers and point cloud features on point cloud classification, this paper compares the proposed algorithm with other classification algorithms composed of point cloud features and classifiers. The features, classifiers, and classification accuracy of the experimental comparison methods are listed in Table 2. The classifiers include the random forest classifier (RF main parameters: 250 forest trees; the maximum tree depth is 20), multilayer perceptron classifier (MLP main parameters: a hidden layer with 100 neurons; the activation function is relu; regular term parameter alpha = 20; etc.), and support vector machine classifier (SVM main parameters: error term penalty coefficient C = 1; kernel = 'rbf'; etc.). We also compared with PointNet [25], which is a deep learning method based on a multilayer perceptron. The features include the following: based on point cloud color extraction feature F VLBP and point cloud geometric structure feature F N_F (normal vector and fast feature histogram feature), the F N_F_RGB feature refers to the RGB color information integrated on the basis of feature F N_F . Feature F All is a fusion of the color features of the point cloud (RGB and VLBP features) and point cloud geometric structure feature F N_F . From the results listed in Table 3, we can make the following observations: 1. From the data in the table, we can see that the classification accuracy (Kappa/OA) of the proposed algorithm is different in different scenarios, but the point cloud classification accuracy is basically the highest in all feature fusion situations and the results of the five scenarios are in different features. The results' trends of the classifier is consistent. 2. By comparing the results of five scenes of point cloud classifications using different types of classifiers for the same feature, it can be seen that the features used in the proposed method can achieve the best classification results by using random forest classifiers; that is, the classification algorithm designed in this article is better than that based on other classifications. 3. A comparison of the effects of classifying different types of features by the same classifier shows that, based only on the VLBP features proposed in this paper, they cannot achieve better classification results because the feature descriptors proposed in this paper only represent the point cloud color information and lack the structural information of the point cloud. The fusion of RGB color information on the basis of geometric features will significantly improve the classification effect of point clouds. On this basis, continuing to integrate the VLBP features extracted in this paper based on color will improve the classification accuracy of point clouds. Regardless of which classifier is used, the classification effect of the four types of feature fusion is better than the classification effect based on a single feature. 
This shows that the fusion of color information based on the geometric structure characteristics of the point cloud can improve the classification accuracy of the point cloud. 4. By comparing the improvement of the classification accuracy of each scene, we can see that the RGB and VLBP color features are combined on the basis of the geometric structure characteristics of the point cloud, and the classification accuracy of indoor Scene 4 and 5 is more obvious than that of outdoor Scenes 1-3. This is because the coloring of the point cloud is not only related to the point cloud collection equipment but it is also affected by the illumination to a certain extent. This makes the coloring of the point cloud collected indoors more uniform than the point cloud collected outdoors. Thus, compared with outdoor scenes, the classification accuracy of indoor scenes is better. In order to highlight the advantages of the random forest classifier selected in this paper, classification comparison experiments are carried out under the conditions of different features and classifiers. As shown in Table 4 To show the classification effect of the proposed algorithm more prominently, this paper compares the classification algorithms of different feature constructions using the random forest classifier. As shown in Table 5, Method 1 is a prediction classification method based on F VLBP features and Method 2 is a prediction classification based on geometric features (i.e., normal vector and FPFH features). Method 3 is a type of predictive classification based on F N_F_RGB features. Our method is a combination of point cloud color features (RGB and VLBP features) and point cloud geometric structure features (normal vector and FPFH features) for predictive classification. As shown in Table 6, the comparison of five point cloud scenes classification results are given. From the results listed in Table 6, it is easy to draw the following conclusions below: Table 3, it can be seen that Method 3 shows significant improvements for most indicators compared to Method 1 and Method 2, especially in Scene 4 and 5, and the classification effect is significantly improved. This shows that the point cloud color feature has an improved effect on point cloud classification. From a comparison between Method 3 and the algorithm in this paper, it can be seen that in the outdoor Scenes 1-3, the algorithm in this paper performs better than Method 3 in most cases. It can be seen that in the indoor Scene 4 and 5, the algorithm in this paper shows a significant improvement for all indicators compared to Method 3. This also shows that in the case of less noise in the color information of the point cloud coloring, the VLBP feature descriptor proposed in this paper can significantly improve the point cloud classification effect. (4) By observing Scene 4 and the other four scenes, we can see that the proposed method has the best classification performance on the point cloud scene containing only man-made objects. When there are irregular objects such as plants in the scene, it will increase the complexity of the scene, thereby reducing the classification performance. It can be seen from the precision/recall/F 1 -scores in Table 6 that the performance of the proposed method for the vast majority objects has a certain improvement. Especially for the indoor point clouds, the improvement is more significant. 
This is because the color information of the indoor point cloud is more accurate than the outdoor point clouds, which is caused by the colored point cloud collection device. In this paper, the proposed method has been compared with other methods by multiple evaluation metrics at the same time. Considering the time efficiency, it can be seen from Table 4 that the proposed method outperforms PointNet. For the Kappa and OA, the proposed method can achieve better performance than the other methods with different features and classifiers. Therefore, we can make the satisfactory conclusion that the overall performance of the proposed classification method is a promising method by considering different evaluation metrics and ablation studies. To show the point cloud classification effect of the algorithm in this paper more intuitively, Figures 9-13 show the classification effect of different algorithms on five point cloud scenes. It can be seen from the figure that the classification result of the algorithm in this paper is closest to the true value effect and the classification effect of trees in Scenes 1-3 is better than that of other algorithms. According to a comparison of (b) and (c) in Figures 9-13, the classification based on VLBP features and the classification based on the normal vector and FPFH features will have a certain complementary effect. After the fusion of RGB and VLBP features, for Scene 2 and Scene 3, which are medium and small scenes, some building points are misclassified as tree points, some table points and chair points in Scene 4 are misclassified, and potted plants and chairs in Scene 5 are misclassified. This confusion is caused by the similar colors in the point cloud but the overall classification effect is generally good. In this article, Scenes 1-3 are outdoor point cloud scenes. As shown in Figures 8-10 (the black circles/boxes in the figure), the proposed algorithm has obvious advantages but the difference between the proposed algorithm and Method 3 is relatively small. This is because in outdoor scenes, as shown in Figures 3-5, the color information corresponding to the color point cloud has certain noise and errors, making the effect of the proposed VLBP feature descriptor not obvious. However, Scene 4 and Scene 5 are indoor scenes based on the color point cloud data collected by a Kinect, as shown in Figures 6 and 7. The color information is relatively stable and there is less noise. As shown in Figures 11 and 12 (the black circle/box), the algorithm proposed in this paper has obvious advantages over other algorithms. Although Method 3 also uses color features, the effect is still not as good as the algorithm in this paper. This also shows the effectiveness of the VLBP feature descriptor proposed in this paper. Conclusions This paper proposes a novel voxel-based color point cloud local feature VLBP and three defined descriptors (VLBP_C, VLBP_S, and VLBP_M) to extract the local grayscale and local difference sign and magnitude of each voxel corresponding to each point in the point cloud. In addition, this paper proposes a point cloud classification algorithm based on multifeature fusion and a random forest classifier. The proposed algorithm uses the color information of the colored point cloud to obtain the color features of each point of the point cloud. To represent the point cloud features more robustly, the geometric structure information of the point cloud is characterized by the introduction of normal vector features and FPFH features. 
In addition, the color features and geometric structure features are merged to construct the feature of each point of the point cloud, and each point is then classified with a random forest classifier. The proposed algorithm was tested on point clouds of different scenes. The experimental results showed that the proposed VLBP feature is effective in improving the classification accuracy of point clouds and that the proposed classification algorithm can effectively classify point clouds in different scenes. Although the proposed algorithm achieves good classification results in the five point cloud scenes, these scenes contain many objects, including trees and shrubs, that may reduce the classification performance, so there is still room for improvement. Future work is summarized as follows. The features selected in this paper are classical point cloud feature descriptors; more efficient geometric features could be designed and fused with the VLBP feature to further improve the classification accuracy. In the feature fusion process, this paper uses direct concatenation; better fusion methods could be used in the future to construct the aggregated features of the point cloud. Although the proposed classification algorithm achieves good results, some misclassification remains in the details; the classification results could be optimized by post-processing with neighborhood information. Finally, the point cloud scenes selected in this paper do not involve intensity information; introducing the intensity information of the point cloud on top of the fused features will be explored for point cloud classification.
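As a concluding illustration of the classification stage described in this paper, the sketch below trains a random forest with the configuration reported above (250 trees, maximum depth 20) on pre-computed per-point fusion features and evaluates Overall Accuracy and the Kappa coefficient. The feature matrix, the labels and the function name are placeholders, since the actual 49-dimensional fused features depend on the VLBP/FPFH extraction pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

def classify_points(train_feats, train_labels, test_feats, test_labels):
    """Per-point classification with the random forest configuration used in the paper."""
    rf = RandomForestClassifier(n_estimators=250, max_depth=20, n_jobs=-1, random_state=0)
    rf.fit(train_feats, train_labels)
    pred = rf.predict(test_feats)
    oa = accuracy_score(test_labels, pred)          # Overall Accuracy (OA)
    kappa = cohen_kappa_score(test_labels, pred)    # Kappa coefficient
    return pred, oa, kappa

# Placeholder data standing in for the 49-dimensional fused features
# (VLBP at two scales + RGB + normal vector + FPFH) of each point.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(1000, 49)), rng.integers(0, 4, 1000)
X_test, y_test = rng.normal(size=(200, 49)), rng.integers(0, 4, 200)

_, oa, kappa = classify_points(X_train, y_train, X_test, y_test)
print(f"OA = {oa:.3f}, Kappa = {kappa:.3f}")
```

With random placeholder features the scores are near chance level; with the actual fused features the paper reports that this configuration gives the best results among the compared classifiers.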
8,355.2
2021-08-10T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Simultaneous Structural Identification of Natural Products in Fractions of Crude Extract of the Rare Endangered Plant Anoectochilus roxburghii Using 1H NMR/RRLC-MS Parallel Dynamic Spectroscopy Nuclear magnetic resonance/liquid chromatography-mass spectroscopy parallel dynamic spectroscopy (NMR/LC-MS PDS) is a method aimed at the simultaneous structural identification of natural products in complex mixtures. In this study, the method is illustrated with respect to 1H NMR and rapid resolution liquid chromatography-mass spectroscopy (RRLC-MS) data, acquired from the crude extract of Anoectochilus roxburghii, which was separated into a series of fractions with the concentration of constituent dynamic variation using reversed-phase preparative chromatography. Through fraction ranges and intensity changing profiles in 1H NMR/RRLC–MS PDS spectrum, 1H NMR and the extracted ion chromatogram (XIC) signals deriving from the same individual constituent, were correlated due to the signal amplitude co-variation resulting from the concentration variation of constituents in a series of incompletely separated fractions. 1H NMR/RRLC-MS PDS was then successfully used to identify three types of natural products, including eight flavonoids, four organic acids and p-hydroxybenzaldehyde, five of which have not previously been reported in Anoectochilus roxburghii. In addition, two groups of co-eluted compounds were successfully identified. The results prove that this approach should be of benefit in the unequivocal structural determination of a variety of classes of compounds from extremely complex mixtures, such as herbs and biological samples, which will lead to improved efficiency in the identification of new potential lead compounds. fractions. 1 H NMR/RRLC-MS PDS was then successfully used to identify three Introduction The discovery and identification of new chemical entities in complex mixtures means a ubiquitous challenge in drug discovery and development regimes. To accomplish the time-consuming task of screening pharmaceutical libraries, often consisting of (multi)millions of molecules, a large variety of methodologies is currently available [1]. During the last few years, we have reported a series of methods for the rapid structural identification of compounds in crude extracts from herbal medicines and Traditional Chinese Medicine by MS/MS and liquid chromatography-MS/MS (LC-MS/MS) [2][3][4][5][6][7][8][9][10]. However, sometimes nuclear magnetic resonance (NMR) data (mostly 1 H NMR) was needed to provide complementary information for the purpose of validating the chemical structures adequately, especially in complex mixture analysis. In fact, parallel use of NMR and MS methods as structural tools have efficiently provided complementary data in structural elucidation studies for natural product research, drug metabolite analysis, and other complex mixture analysis problems [11][12][13]. Recently, directly coupled LC-NMR-MS has been used in pharmaceutical laboratories worldwide to avoid traditional isolation of analytes [14][15][16]. Successful studies have also been conducted using HPLC-NMR-MS, allowing a superior level of peak discrimination and structure elucidation [17][18][19][20]. However, some technical drawbacks still exist in direct hyphenated methods involving the use of NMR [21,22], such as high cost, a narrow range of deuterated solvents as LC eluents, lower detection sensitivity, and the compatibility of the chromatographic peak volume with that of the NMR flow cell. 
Alternatively, the combination of chemometric and mathematical methods relying on inherent multivariate profiling capabilities have been successfully used to recover latent active compound information, such as potential biomarkers in metabonomics. Recently, a number of statistical techniques have aided in peak resolution and identification, such as statistical total correlation spectroscopy (STOCSY) [23], statistical heterospectroscopy (SHY) [24] and NMR/LC-MS parallel dynamic spectroscopy (NMR/LC-MS PDS). In particular, NMR/LC-MS PDS, based on the off-line analysis of a series of incompletely separated chromatographic fractions with different concentration changing profiles of the constituents, can provide the intrinsic correlation between retention time (Rt), mass/charge (m/z) and chemical shift (δ) data of the same individual constituent in the LC fractions through the co-analysis of visualized MS and NMR data with signal amplitude covariation in the NMR/LC-MS PDS spectra. As a consequence, the complementary spectral information is obtained from mixture spectra for unambiguous structural identification of individual constituents in crude extracts [25]. Using NMR/LC-MS PDS, the complementary strengths of the two methods can be combined, and the covisualization of NMR and MS data can yield not only the simplification of the separation analysis procedure for complex mixtures, but also simultaneous and unambiguous structural information than can be either used alone or applied pairwise between individual samples. The genera Anoectochilus and Goodyera (Orchidaceae) are perennial herbs which comprise more than 35 species and are widespread in the tropical regions, from India through the Himalayas and Southeast Asia to Hawaii [26]. Of those species, Anoectochilus roburghii, an indigenous and valuable Chinese folk medicine, has been used as a popular nutraceutical herbal tea in China and other Asian countries. This herbal plant is also called "king medicine" because of its diverse pharmacological effects. The whole dried plants have been widely used to treat diabetes [27], cancers [28], underdeveloped children [29], liver diseases [30], cardiovascular diseases [31], nephritis and venomous snake bite [32], etc., in China, further proving that natural products should be considered important resources for future medicines which Koop [33] advocated in his article in Science. Reports are available on the constituents of the herb, which include flavonoids, organic acids and aliphatic compounds, and both flavonoids and glucosides were found to be the predominant components [34][35][36]. Because of a low budding and growth rate in natural surroundings, predatory mass collection, and damages to the ecological environment, the natural resources of Anoectochilus roburghii are becoming exhausted. Thus, artificial breeding ones of this species by tissue culture techniques are gradually commercialized as substitutes used for the same purpose in the recent marketplace [37]. Therefore, the rapid and simultaneous structural identification of natural products in the wild plant has become very important to assess the quality of the cultivars. It is known that precious and endangered materials such as Anoectochilus roxburghii and Taxus madia are very difficult to obtain or can be obtained in small quantities. 
Therefore, considering that the major advantage of NMR/LC-MS PDS over routine separation and structural elucidation methodologies is that less sample and less analysis time are needed, we used this method as a structural identification tool for natural products in Anoectochilus roxburghii. Previous work has shown that NMR/LC-MS PDS can successfully identify a single class of natural products at a time, such as 12 flavonol glycosides in an active herbal extract from flowers of Gossypium herbaceum L. [25] and 7 phenylethanoid glycosides in the crude extract of Forsythia suspensa [38]. In this work, we placed particular emphasis on the simultaneous structural identification of a variety of classes of compounds, and present the results obtained from a crude extract of the rare endangered plant Anoectochilus roxburghii. Here, reversed-phase preparative column chromatography was employed to simplify the separation procedure and acquire a series of fractions with different constituent concentrations from the complex mixture, whereas flash column chromatography, which is less widely used and offers only moderate separation, was used in previous work. Moreover, RRLC and a microcoil probe were employed in this method for the first time to improve signal resolution and sensitivity. Finally, the study involved a 1H NMR/RRLC-MS PDS analysis of the 1H NMR spectra and the available negative-mode RRLC-MS spectra of the fractions, together with fragmentation behavior analysis of MS/MS spectra acquired in the same instrumental run, to develop a relatively rapid, precise and accurate method for the structural identification of different types of compounds in complex samples. We chose RRLC as the separation tool for the analysis of the crude extract to achieve higher separation efficiency, shorter run times than HPLC, and better peak resolution [39]. Moreover, because a large number of flavonoid glycosides had been reported in Anoectochilus roxburghii, 345 nm was chosen as the UV detection wavelength. Figure 1(a) shows the RRLC-UV chromatogram of the crude extract. We found that the retention times of the main compounds in the crude extract were between 6 and 24 min. Considering that some compounds without chromophore groups have low UV absorption intensity, the total ion chromatogram (TIC) of the crude extract is also shown in Figure 1, in which the main compounds shown in Figure 1(a) are numbered in order of their retention times in RRLC-MS. It was apparent that the relative contents of these compounds varied widely and that some compounds, including a group of isomers, co-eluted with almost no separation, which indicated that the crude extract was very complex. Figure 2 displays the composition profiles of the 13 constituents, reconstructed by plotting the XIC areas of all the constituents in the series of fractions. This clearly showed that the crude extract was incompletely separated, and that the 13 constituents eluted into largely different fractions and possessed different concentration-changing profiles. Most importantly, compounds 4 and 9 were eluted into different fractions from their co-eluted partners, even though they were very difficult to separate in the analysis of the crude extract when an excellent separation tool (RRLC) was used (Figure 1(c)). These results suggested that preparative column chromatography can be applied to separate extremely complex herbal extracts into an incompletely separated series of fractions.
1H NMR/RRLC-MS PDS Spectrum A series of fractions was taken for 1H NMR and RRLC-MS analysis, and after data processing the signal amplitude co-variation between the 1H NMR and XIC signals was visualized to produce the 1H NMR/RRLC-MS PDS spectrum of the ethanol extract of Anoectochilus roxburghii (Figure 3). For our applications, the fraction axis (vertical axis) resembles the retention-time axis of a chromatogram, and each line represents the XIC and 1H NMR spectra of one fraction. It can be seen that the XIC signals of the 13 constituents, including two groups of isomers, were laid out on the 1H NMR/RRLC-MS PDS spectrum with suitable separation, eluting in different fractions. Figure 3 shows that constituents 9 and 10, which strongly overlapped in the XIC spectrum of the ethanol extract, were eluted into different fractions and could be distinguished clearly in the 1H NMR/RRLC-MS PDS spectrum by their dark blue and yellow profiles, respectively. In addition, one of the three isomers with the [M-H]− ion at m/z 163, constituent 4, could also be clearly distinguished from the other two isomers (constituents 2 and 3) owing to its distribution in a different fraction range, which indicated that this approach can play a prominent role in the structural identification of co-eluted constituents. Based on this visualization, the intrinsic correlation between the 1H NMR and RRLC-MS data of the same constituent could be discovered easily, as for constituent 4, whose mass/charge (m/z) and chemical shift (δ) data are highlighted with orange arrows. In order to illustrate the significant role of the 1H NMR/RRLC-MS PDS spectrum in the structural identification of individual constituents in a complex mixture, we magnified the spectrum of the crude extract from fractions 1 to 5 (Figure 4), in which the 1H NMR signals of four constituents are highlighted with the same colored squares or arrows as the corresponding XICs, and in which two co-eluted isomers with critically overlapped peaks appear (cf. Figure 2). Thus, the 1H NMR signals of the correct compound could be picked out and assigned. Based on the co-variation among the fraction ranges and signal-intensity-changing profiles, six columns of 1H NMR signals (highlighted with blue arrows) varied over almost the same fraction range as the XIC at m/z 623 and were therefore correlated and assigned to constituent 5. Then, using the correlated XIC and 1H NMR signals as an index and comparing their fraction ranges, a doublet of doublets at δ 7.40 ppm was recognized as seriously overlapped with a signal of another constituent from fractions 2 to 4 along the fraction axis. In fraction 1, however, it was a pure signal of constituent 5 with coupling constants of 2.0 and 8.0 Hz, displaying correlation with two recovered doublets at δ 7.50 and 7.06 ppm with coupling constants of 2.0 and 8.0 Hz, respectively, which indicated a typical ABX coupling system of the 3",4"-disubstituted ring B of a flavonoid skeleton. Two doublets at δ 6.42 and 6.11 ppm with the same coupling constant of 2.0 Hz, which were partly overlapped by other signals in fractions 1 and 2, were attributed to two protons at the meta positions of the disubstituted ring A. A singlet at δ 3.83 ppm was recognized as the signal of a methoxyl group. Therefore, the skeleton of constituent 5 was presumed to be isorhamnetin.
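The signal-linking step described above amounts to comparing intensity profiles along the fraction axis. A minimal sketch of this co-variation analysis is given below; the arrays, the candidate-signal layout and the simple Pearson-correlation criterion are illustrative assumptions standing in for the in-house MATLAB routines described later in the paper, not a reproduction of them.

import numpy as np

def link_nmr_to_xic(xic_profile, nmr_profiles, threshold=0.9):
    """Correlate one XIC intensity profile (one value per fraction) with a
    set of candidate 1H NMR signal profiles measured over the same fractions.

    xic_profile  : (n_fractions,) XIC peak areas of one [M-H]- ion.
    nmr_profiles : (n_signals, n_fractions) integrated intensities of
                   candidate 1H NMR signals (e.g., one row per multiplet).
    Returns the indices of NMR signals whose fraction-wise profile co-varies
    with the XIC above the given Pearson correlation threshold.
    """
    x = (xic_profile - xic_profile.mean()) / xic_profile.std()
    linked = []
    for i, profile in enumerate(nmr_profiles):
        y = (profile - profile.mean()) / profile.std()
        r = np.mean(x * y)  # Pearson correlation coefficient
        if r >= threshold:
            linked.append(i)
    return linked

# Toy example: 5 fractions, 3 candidate NMR signals; signals 0 and 2 follow
# the XIC of a hypothetical constituent, signal 1 belongs to something else.
xic = np.array([8.0, 5.0, 2.5, 1.0, 0.3])
nmr = np.array([[7.5, 4.8, 2.6, 1.1, 0.2],
                [0.5, 1.0, 3.0, 6.0, 8.0],
                [8.2, 5.1, 2.4, 0.9, 0.4]])
print(link_nmr_to_xic(xic, nmr))  # -> [0, 2]

In practice, as the text above illustrates for constituent 5, the automatically linked NMR columns would still be inspected manually for overlap with signals of other constituents in individual fractions.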
The supplementary RRLC-MS/MS spectrum corresponding to m/z 623 is shown in Figure 5. Not only did the 1H NMR/RRLC-MS PDS spectrum with the incomplete-separation strategy play a significant role in the assignment of overlapping signals, it was also important in the structural identification of co-eluted isomers, even when the chromatographic separation conditions had been carefully optimized as in hyphenated NMR techniques. For example, three peaks were observed in the "blue" XIC of the [M-H]− ion at m/z 163, named constituents 2, 3 and 4, respectively, in Figure 3. Of these, the first two constituents were incompletely separated, with almost the same retention time and the same XIC fraction range from fraction 2 to 5 (shown in Figure 2), which made it very difficult to distinguish and identify their chemical structures. However, detailed inspection of Figure 4 revealed that, owing to the lower sensitivity of 1H NMR relative to MS, most of the 1H NMR signals of constituent 3, present at lower concentration, appeared from fraction 3 to 5, unlike those of constituent 2. Therefore, the 1H NMR signals could easily be attributed to the correct compound. Interestingly, constituent 4, which nearly co-eluted with the other isomers in the RRLC-MS analysis of the crude extract as shown in Figure 1(c), was eluted by preparative column chromatography into different fractions from constituents 2 and 3, which also facilitated the elucidation of the three compounds. In Figure 4, four sets of 1H NMR signals (highlighted with red squares), that is, δ 7.44 (2H, J = 8.6 Hz), 6.80 (2H, J = 8.6 Hz), 7.59 (J = 16.0 Hz), and 6.27 ppm (J = 16.0 Hz), varied over the same fraction range as the XIC at m/z 163 and were correlated and assigned to constituent 2. Although the doublet at δ 7.44 ppm was seriously overlapped with a doublet of doublets from constituent 3 in fractions 2 and 3 along the fraction axis, it was a pure signal of constituent 2 in fractions 4 and 5. The first two doublets were then attributed to two pairs of protons, each pair sharing a chemical shift, at adjacent positions of a benzene ring, while the latter two doublets were attributed to two protons across a double bond from each other. The RRLC-MS/MS data corresponding to m/z 163 (constituent 2) listed in Table 1 indicated the presence of a carboxyl group through the characteristic product ion at m/z 119, the putative decarboxylated species, a very reasonable neutral loss (44 Da) from the parent ion at m/z 163. Comparing the 1H NMR signals with those of trans-4-hydroxycinnamic acid, previously reported as an Anoectochilus roxburghii constituent in the literature [34], we found a high degree of agreement. Therefore, constituent 2 was identified as trans-4-hydroxycinnamic acid. Again, the 1H NMR/RRLC-MS PDS method deconvolved the overlapping NMR peaks, in this case revealing an organic acid. As for constituent 3, three columns of 1H NMR signals (highlighted with green squares) at δ 7.23, 7.10, and 6.85 ppm, covering almost the same fraction range as the XIC peak corresponding to constituent 3, were picked out first. Subsequently, two doublets at δ 7.52 and 6.43 ppm were recognized and assigned to constituent 3 by detailed inspection of the 1H NMR/RRLC-MS PDS spectrum. Moreover, the relevant RRLC-MS/MS signals corresponding to m/z 163 (constituent 3) listed in Table 1 were highly similar to those of compound 2, giving significant product ions at m/z 119 and 93.
The above data indicated that constituents 3 and 2 share the same groups but with different substitution sites. Thus, despite the incomplete deconvolution of one seriously overlapping aromatic-proton NMR peak, the correlated data were sufficient to identify constituent 3 as trans-3-hydroxycinnamic acid, which was discovered in Anoectochilus roxburghii for the first time. Signal assignments of constituent 3 are presented in Table 1. In Figure 3, the fraction range and intensity-changing profile of each XIC and 1H NMR signal from constituent 4 can be observed clearly and are highlighted with orange arrows. In Figure 3, three peaks in the XICs at m/z 301, 285 and 315 covered almost the same fractions 9-12, which resulted from the co-elution of constituents 10, 11 and 12. The assignment of their 1H NMR peaks was difficult because parts of the signals overlapped. However, using the NMR/LC-MS PDS spectrum, the problem can be approached as follows. Four 1H NMR peaks in the high-frequency region were clearly present at δ 8.15 (2H, J = 8.8 Hz), 7.07 (2H, J = 8.8 Hz), 6.53 (J = 2.0 Hz) and 6.27 ppm (J = 2.0 Hz), respectively, as observed in the 1H NMR spectrum of fraction 11 shown in Figure 6. Subsequently, the signal amplitude dynamic co-variation between these four peaks and the XICs at m/z 301, 285 and 315 was co-analyzed. We found that the intensity variation of the above 1H NMR signals from fractions 9 to 11 and that of the XIC peak at m/z 285 showed the same tendency. Consequently, the columns of the six protons were assigned to constituent 11. In the same way, the other mixed and overlapped proton signals from fractions 9 to 12 were correlated and deconvolved simultaneously and were assigned to constituents 10 and 12, respectively. Figure 6 shows the complete NMR signal assignment of constituents 10, 11 and 12 in fraction 11. Finally, three flavone aglycones with similar structures, quercetin, isorhamnetin and kaempferol, were identified unambiguously from the 1H NMR/RRLC-MS PDS spectrum by co-analyzing their 1H NMR peaks and XIC signals. To our surprise, kaempferol, a very common constituent, had not previously been reported in Anoectochilus roxburghii. In addition, the -OH signals of quercetin were assigned in Table 1 following the results reported in the literature [42,43]. Taking advantage of the 1H NMR/RRLC-MS PDS spectrum, the complementary RRLC-MS and 1H NMR data for the 13 constituents in the crude extract were correlated and recovered successfully for unambiguous structural identification, further reinforced with corresponding supplementary information from RRLC-MS/MS spectra. Eight flavonoids (constituents 5 and 7-13), four organic acids (constituents 2-4 and 6), and p-hydroxybenzaldehyde (constituent 1) were identified. The recovered RRLC-MS data, 1H NMR data, product ions, retention times and molecular formulae are listed in Table 1, and 1H NMR spectra with primary signal assignments are presented in the supplementary information. Reagents Anoectochilus roxburghii was collected from Fujian, China, and was identified by Prof. GUO Shun-xing, Chinese Academy of Medical Sciences and Peking Union Medical College. The dried and powdered whole herbs were extracted with 95% v/v ethanol in water and concentrated in vacuo to yield a crude extract, which was supplied by Prof. GUO Shun-xing. HPLC-grade acetonitrile and formic acid were obtained from Merck (Darmstadt, Germany).
Dimethyl sulfoxide-d6 (DMSO-d6) containing 0.03% (v/v) tetramethylsilane (TMS) was obtained from Cambridge Isotope Laboratories Inc. Fractions Preparation The crude extract (0.55 g) was dissolved in 5 mL of a mixed solution of acetonitrile and water (4:1 by volume). Preparative column chromatography separations were performed on a 15 cm × 19 mm reversed-phase C18 (Waters SunFire™) 5-µm column at room temperature, using an elution of water (eluent A) and acetonitrile (eluent B) at a flow rate of 10 mL/min for 90 min. The composition started at 5% B and was then ramped linearly to 100% B at 90 min. The crude extract was separated into a series of fractions collected at a fixed time interval of 1 min. A 0.2 mL aliquot of each fraction was taken for RRLC-MS analysis and the remaining fractions were dried by rotary evaporation. The column was re-equilibrated over the final 10 min prior to injection of the next sample. UV spectra were recorded from 190 to 400 nm and the detection wavelength was set at 345 nm. The mass spectrometer was operated in negative ion mode with an ionspray voltage of −4.5 kV, a declustering potential of −70 V, curtain gas of 25 (arbitrary units), nebulizer gas (gas 1) flow of 70 (arbitrary units), and heater gas (gas 2) of 60 (arbitrary units). The source temperature was set at 350 °C. Spectra were collected in the enhanced full mass scan mode from m/z 50-1000. RRLC-MS/MS analysis was performed using the same LC conditions as above. For MS/MS, the collision gas was N2 set at high, and the collision energy was −35 eV in the enhanced product ion (EPI) scan. In the linear ion trap (LIT) mode, the scan speed was 1000 Da/s and the LIT fill time was set at 80 ms; only quadrupole (Q0) trapping was activated while EPI data were acquired, and the r.f./DC analyzing quadrupole (Q1) was set at unit mass resolution. Spectra were collected in the EPI scan mode from m/z 50-800. NMR Samples and Analysis Nineteen of the incompletely separated fractions were selected for 1H NMR analysis according to the LC-MS results. Before NMR analysis, the dried residues were transferred to 3-mm NMR tubes with acetonitrile/water 4:1 (v/v), dried completely in a nitrogen stream and redissolved in 0.2 mL DMSO-d6. All NMR spectra were acquired at 599.7 MHz on a Varian NMR System-600 spectrometer using a 3-mm SW probe controlled by Varian Vnmr 6.0 C software. 3-mm NMR tubes (No. S-3-600-7) were purchased from NORELL Inc. to obtain high-resolution spectra. For each sample, 32 free induction decays (FIDs) were collected into 34,374 data points over a spectral width of 8400 Hz, with an acquisition time of 13 min per sample and a 1 s relaxation delay; all scans were acquired at 298 K. Data Processing Total ion chromatogram (TIC) data from RRLC-MS were converted into a MATLAB format file (ms.mat) in Analyst 1.5, and the RRLC-UV data were converted into a text format file. The potential [M−H]− ions were then extracted to produce extracted ion chromatogram (XIC) data by in-house routines written in MATLAB 7.0.1 (Mathworks, Natick, MA, USA). 1H NMR spectra from Varian format data files were phased, baseline corrected, smoothed and referenced to TMS (δ 0.0) using MestReC 4.9.9.6, after an exponential line-broadening factor of 0.3 Hz was applied to the FIDs prior to Fourier transformation, and were then exported as ASCII format files (nmr.txt). The ASCII format files were read into MATLAB for threshold setting. Sections of the 1H NMR spectra containing the aromatic and aliphatic signals were considered.
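The XIC extraction step can be sketched as follows: for each MS scan, intensities within a narrow m/z window around a target [M−H]− value are summed to build that ion's chromatogram. The data layout, function name and tolerance below are illustrative assumptions; the original processing used the in-house MATLAB routines cited above, which are not reproduced here.

import numpy as np

def extract_xic(scan_mz, scan_intensity, target_mz, tol=0.3):
    """Build an extracted ion chromatogram for one target m/z.

    scan_mz        : list of 1-D arrays, m/z values of each MS scan.
    scan_intensity : list of 1-D arrays, matching intensities of each scan.
    target_mz      : the [M-H]- value of interest, e.g. 163 or 623.
    tol            : half-width of the m/z window (illustrative value).
    Returns a 1-D array with one summed intensity per scan.
    """
    xic = np.zeros(len(scan_mz))
    for k, (mz, inten) in enumerate(zip(scan_mz, scan_intensity)):
        window = np.abs(mz - target_mz) <= tol
        xic[k] = inten[window].sum()
    return xic

# Example with two tiny synthetic scans (placeholders for real spectra):
scans_mz = [np.array([162.8, 163.1, 301.2]), np.array([163.0, 285.1])]
scans_int = [np.array([120.0, 340.0, 50.0]), np.array([410.0, 75.0])]
print(extract_xic(scans_mz, scans_int, target_mz=163.0))  # [460. 410.]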
Conclusions Following its original application to an active extract obtained from Gossypium herbaceum L., the results presented here demonstrate the general usefulness of NMR/LC-MS PDS for extracting structural information on components present in a crude extract based on 1H NMR and RRLC-MS, using a hybrid mass spectrometer (QLIT) with a suitable IDA protocol. Applied to the rare endangered plant Anoectochilus roxburghii, this approach, with some technological modifications, enables the integration of 1H NMR spectra, XIC signals and MS/MS data and successfully permits the identification of different types of compounds in the extract. Furthermore, the results of this method can be expected to significantly aid comparisons of the constituents of wild Anoectochilus roxburghii with those of plants cultivated by biological techniques, which will be addressed in future publications. Overall, 1H NMR/RRLC-MS PDS combined with an incomplete-separation strategy has an important future role in expediting the structural identification of constituents in crude extracts, and indeed in research covering an even greater variety of target molecules in complex samples.
A Cascadia Slab Model From Receiver Functions We map the characteristic signature of the subducting Juan de Fuca and Gorda plates along the entire Cascadia forearc from northern Vancouver Island, Canada, to Cape Mendocino in northern California, USA, using teleseismic receiver functions. The subducting oceanic crustal complex, possibly including subcreted material, is parameterized by three horizons capable of generating mode-converted waves: a negative velocity contrast at the top of a low velocity zone underlain by two horizons representing positive contrasts. The amplitude of the conversions varies, likely due to differences in composition and/or fluid content. We analyzed the slab signature for 298 long-running land seismic stations, estimated the depth of the three interfaces through inverse modeling and fitted regularized spline surfaces through the station control points to construct a margin-wide, double-layered slab model. Crystalline terranes that act as the static backstop appear to form the major structural barrier that controls the slab morphology. Where the backstop recedes landward beneath the Olympic Peninsula and Cape Mendocino, the slab subducts sub-horizontally, while the seaward-protruding and thickened Siletz terrane beneath central Oregon causes steepening of the slab. A tight bend in slab morphology south of the Olympic Peninsula coincides with the location of recurring large intermediate depth earthquakes. The top-to-Moho thickness of the slab generally exceeds the thickness of the oceanic crust by 2-12 km, suggesting thickening of the slab or underplating of slab material to the overriding North American plate. To the north, the subduction system transforms into the right-lateral Queen Charlotte and Revere-Dellwood Faults and to the south into the San Andreas Fault (Figure 1). Farther downdip, the JdF has been identified below the Salish Sea on marine seismic sounding transects through the Juan de Fuca Strait and Georgia Strait. At about 20 km depth, the sharp <2 km thick reflector that marks the top of the slab widens into an up to 10 km wide reflection band, the so-called E-layer (e.g., Clowes et al., 1987a; Nedimović et al., 2003), that extends to depths of at least ∼50 km (Calvert et al., 2006). A similarly thick reflective zone has been identified atop the subducting JdF at 35-40 km depth beneath central Oregon (Keach et al., 1989; Tréhu et al., 1994). It has been argued that the E-layer represents the transition into a wider shear zone that creeps aseismically and hosts episodic tremor and slip (ETS, see e.g., Calvert et al., 2020; Nedimović et al., 2009).
In Cascadia, the subduction zone stratigraphy has also been characterized previously using teleseismic P-wave receiver function data (e.g., Abers et al., 2009; Bostock et al., 2002; Cassidy & Ellis, 1993; Langston & Blum, 1977; Mann et al., 2019; McGary et al., 2014; Nabelek et al., 1993; Nicholson et al., 2005). A recent study employing receiver functions, local tomography and seismic reflection data in southern Vancouver Island suggests that the oceanic crust may reside below the E-layer (Calvert et al., 2020) and that at least part of the E-layer comprises an ultralow S-wave velocity zone (ULVZ), with VP/VS of the order of 2-3 (Audet et al., 2009; Cassidy & Ellis, 1993). In local seismic tomograms, the slab stratigraphy oftentimes appears smeared into a single layer with moderately elevated VP/VS of the order of 1.8-2.0, consistent with basaltic or gabbroic lithologies with some contribution of fluid-filled pores. Interpretation of the oceanic Moho in tomographic models is less ambiguous, where it appears as a strong negative VP/VS gradient to values lower than 1.7 that mark the oceanic mantle below (Guo et al., 2021; Merrill et al., 2020, 2022; Savard et al., 2018). An initial margin-wide map of the top of the JdF was constructed from a mixed data set of earthquake hypocenters, active source seismic profiles, receiver functions and local earthquake tomograms with the aim of modeling interseismic strain accumulation in the overriding plate (Flück et al., 1997). With increasing data availability over time and a better understanding of subduction processes, the initial model has been updated and extended in space using additional constraints from seafloor magnetic anomalies, deeper seismicity and diffraction of strong earthquake first arrivals (McCrory et al., 2004) and later from relocated earthquake hypocenters and electrical conductivity profiles (Hayes et al., 2018; McCrory et al., 2012). Other slab models are based purely on receiver functions (Audet et al., 2010; Hansen et al., 2012). Despite a broad agreement in recovered slab depths to within ∼10 km, considerable differences exist across these models. These differences are associated with data uncertainties, with the fact that the slab models are based on different data types, and with ambiguities in the interpretation of proxies for what constitutes the "slab top" (McCrory et al., 2012).
Here, we construct a margin-wide slab model that honors an oceanic crustal stratigraphy that may consist of up to two layers, including the possibility of subcreted material. Our model is based on the observation that receiver function images of the slab exhibit characteristic successions of positively and negatively polarized conversions that can be explained by interfering forward- and back-scattered seismic wave modes originating at three interfaces. We map these interfaces continuously along dip from the coast to the forearc lowlands (Salish Sea, Willamette Valley) and along strike from Brooks Peninsula on northern Vancouver Island, Canada, to Cape Mendocino, USA (Figure 1). Our results demonstrate how the overall slab morphology is controlled by the location of the static backstop. A subduction stratigraphy that is generally thicker than the incoming oceanic crust is testament to complex deformation processes affecting slab morphology along the subduction trajectory. Figure 1. The tectonic setting of the Cascadia subduction zone and station distribution employed to determine the slab geometry under the forearc. The convergence of the Juan de Fuca and Gorda Plates relative to stable North America is shown as arrows (DeMets et al., 1994). Terrane boundaries modified after Watt and Brothers (2020). Top inset: Location of the study area on the North American continent. Bottom inset: Earthquake source distribution from 30° to 100° epicentral distance used to compute receiver functions. Data and Methods A total of 45,601 individual receiver functions recorded at 298 seismic stations distributed across the Cascadia forearc contributed to the slab model. For each station, 100 s recordings symmetric about the P-wave arrival (i.e., 50 s noise and 50 s signal, for convenience) of earthquakes with magnitudes between 5.5 and 8, in the distance range between 30° and 100°, were downloaded (Figure 1). Waveforms with a signal-to-noise ratio smaller than 5 dB on the vertical component or 0 dB on the radial component were excluded. The instrument responses were removed and the seismograms were transformed into the upgoing P-SH-SV modes (Kennett, 1991). The P-component was trimmed to the time window beyond which the envelope fell below 2% of the maximum amplitude and a cosine taper was applied. Receiver Function Processing The three-component P-wave spectra were scaled by their signal-to-noise ratio and binned according to their incidence angle in back azimuth bins of 7.5° and horizontal slowness bins of 0.002 s km−1. Within each bin, radial and transverse receiver functions were computed through frequency domain simultaneous deconvolution (Gurrola et al., 1995), with an optimal damping factor found through generalized cross validation (Bostock, 1998). This approach mitigates the instabilities inherent in spectral division by stacking spectra prior to deconvolution.
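To make the simultaneous deconvolution step concrete, the sketch below implements a simplified version for one back-azimuth/slowness bin: cross- and auto-spectra of all events in the bin are stacked before division, and a fixed, water-level-style damping factor is used in place of the generalized cross-validation criterion cited above. Array names and the damping choice are illustrative assumptions, not the authors' code.

import numpy as np

def bin_receiver_function(radial_traces, vertical_traces, dt, damping=0.01):
    """Simultaneous frequency-domain deconvolution for one bin.

    radial_traces, vertical_traces : (n_events, n_samples) arrays of the
        radial (or transverse) and vertical/P-component seismograms.
    dt : sample interval in seconds.
    damping : fraction of the peak stacked auto-spectrum added to the
        denominator (stand-in for the GCV-optimal damping in the paper).
    """
    R = np.fft.rfft(radial_traces, axis=1)
    Z = np.fft.rfft(vertical_traces, axis=1)

    # Stack cross- and auto-spectra over all events in the bin first;
    # this is what stabilizes the division relative to single-event RFs.
    numerator = np.sum(R * np.conj(Z), axis=0)
    denominator = np.sum(Z * np.conj(Z), axis=0).real

    eps = damping * denominator.max()
    rf_spectrum = numerator / (denominator + eps)

    rf = np.fft.irfft(rf_spectrum, n=radial_traces.shape[1])
    return rf  # receiver function sampled at dt

# Example with synthetic white-noise traces (placeholders for real data):
rng = np.random.default_rng(0)
radial = rng.standard_normal((12, 2048))
vertical = rng.standard_normal((12, 2048))
rf = bin_receiver_function(radial, vertical, dt=0.05)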
Parameter Search The continental forearc and subducting slab were parameterized as three layers over a mantle half-space, with the interfaces bounding the subduction stratigraphy labeled t (top), c (central), and m (Moho) (Figure 2). Synthetic receiver functions were calculated through ray-theoretical modeling of plane-wave scattering at the model interfaces (Bloch & Audet, 2023; Frederiksen & Bostock, 2000; Figure 2b). The thickness, S-wave velocity (VS) and P-to-S-wave velocity ratio (VP/VS) of each layer, as well as the common strike and dip of the bottom two layers and the top of the half-space (11 parameters in total), were optimized simultaneously through a simulated annealing global parameter search scheme (Xiang et al., 1997), as implemented in the SciPy package (Virtanen et al., 2020). In analogy to the annealing process in metallurgy, the scheme samples the misfit function stochastically under a decreasing "temperature" that gradually favors low-misfit parameter combinations. In this way, the algorithm can escape local minima in the misfit function. It has proven efficient in converging toward the global minimum in problems with many independent variables (Kirkpatrick et al., 1983). The misfit was defined as the anti-correlation (1 minus the cross-correlation coefficient) between the observed and predicted receiver functions, bandpass filtered between 2 and 20 s period. Initial thickness bounds for the continental crust (Figure 2c) were based on the slab model of Audet et al. (2010) (±10 km). The maximum Layer 1 thickness was constrained by the maximum E-layer thickness of 10 km (Nedimović et al., 2003) and the maximum Layer 2 thickness by the thickness of the incoming oceanic crust of 6.5 km (Han et al., 2016). Layer 1 could attain zero thickness if the E-layer were absent. Because the igneous oceanic crust may be part of the E-layer, Layers 1 and 2 were constrained to have a combined minimum thickness of 6 km. Velocity bounds (Figure 2c) for the continental crust and Layer 2 were based on the 2σ interval of the expected lithologies for continental and oceanic crust, respectively, from the seismic velocity database of Christensen (1996), and for Layer 1 on an analytic poro-elastic model (Bloch et al., 2018) constrained to match the VP/VS observations of the ULVZ (Audet et al., 2009). To verify convergence toward a global minimum, the global parameter search was initialized with at least three different random number seeds, which affect the distribution from which trial parameter estimates are drawn. The resulting data predictions and models were checked for consistency with neighboring stations, previous tomographic profiles (Guo et al., 2021; Kan et al., 2023; Merrill et al., 2020; Savard et al., 2018), hypocentral locations of low-frequency earthquakes (LFEs) within tremor (Armbruster et al., 2014; Plourde et al., 2015; Royer & Bostock, 2014; Savard et al., 2018, 2020) and offshore marine seismic profiles (Suzanne Carbotte, pers. comm.; Carbotte et al., 2023).
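The global search described above can be sketched with SciPy's dual_annealing optimizer, which implements a simulated-annealing-type scheme. The forward modeler predict_rf below is a hypothetical placeholder for the ray-theoretical plane-wave scattering code cited above, and the parameter bounds are illustrative rather than the exact values used in the study.

import numpy as np
from scipy.optimize import dual_annealing

def misfit(params, observed_rf, predict_rf):
    """1 minus the normalized cross-correlation coefficient between the
    observed and predicted receiver functions (anti-correlation misfit)."""
    predicted = predict_rf(params)  # hypothetical forward modeler
    o = observed_rf - observed_rf.mean()
    p = predicted - predicted.mean()
    cc = np.dot(o, p) / (np.linalg.norm(o) * np.linalg.norm(p))
    return 1.0 - cc

# Illustrative bounds for the 11 parameters: three layer thicknesses (km),
# three VS values (km/s), three VP/VS ratios, plus common strike and dip.
bounds = [(20, 50), (0, 10), (2, 6.5),        # thicknesses
          (3.0, 4.0), (1.5, 3.5), (3.2, 4.0), # VS
          (1.6, 2.0), (1.8, 3.0), (1.6, 1.9), # VP/VS
          (0, 360), (0, 30)]                  # strike, dip (degrees)

def run_search(observed_rf, predict_rf, seed):
    result = dual_annealing(misfit, bounds,
                            args=(observed_rf, predict_rf),
                            seed=seed, maxiter=500)
    return result.x, result.fun

# As in the paper, the search would be repeated with several random seeds
# (e.g., for seed in (1, 2, 3)) and the solutions compared for consistency.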
If none of the minimum-misfit models of an individual station were consistent with the above constraints, the global search was repeated within narrower bounds around a preferred solution from a neighboring reliable station. Such a model was used only if it converged toward values away from the thickness bounds (Figure 3). For each of the three horizons, a quality and a nominal depth uncertainty were assigned. Quality A denotes a horizon where at least one back-scattered phase in the predicted data correlates with the observed data (Figures 3a and 3b), the predicted data are consistent among neighboring stations, and the modeled horizon depth is consistent with the available external constraints. A quality B horizon shows a good phase correlation, but the predicted data are inconsistent with neighboring stations and/or the modeled depth is inconsistent with external constraints. Quality C was assigned to horizons that do not show a convincing correlation between observed and predicted data, usually because of data with low signal-to-noise levels. Stations above the forearc lowlands for which the characteristic slab signature (Figure 2b) is decisively absent and where the onset of eclogitization is expected were marked with quality X. The nominal depth uncertainty was estimated from the scatter of the local minima in the vicinity of the preferred minimum, as determined in the global search (Figure 3c). Fitting of Interfaces In total, 171, 143 and 137 quality A nodes were determined to constrain the t, c and m interfaces, respectively. At the trench, 105 nodes at 3 km below the local bathymetry were inserted to constrain the t and c interfaces, and 6.5 km deeper to constrain the m interface, representing typical sediment and igneous crustal thicknesses (Han et al., 2016). A spline surface (Sandwell, 1987) was fitted to these nodes to yield margin-wide depth models. The spline coefficients were found using singular value decomposition (Aster et al., 2018; Wessel & Becker, 2008), with the nominal depth uncertainties supplied as weights. The solution was damped by retaining the 116, 117, and 116 largest singular values for the t, c, and m interfaces, respectively, based on the analysis of L-curves and the Akaike information criterion (Figure S1 in Supporting Information S1).
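A minimal sketch of the weighted, truncated-SVD fit is given below. It assumes a generic design matrix G whose columns are spline basis functions evaluated at the node locations; the actual basis (Sandwell, 1987) and the L-curve/AIC choice of how many singular values to retain are not reproduced here.

import numpy as np

def fit_surface(G, depths, sigma, n_keep):
    """Weighted least-squares fit of spline coefficients, damped by
    truncating the singular value spectrum.

    G      : (n_nodes, n_coeffs) design matrix of basis functions.
    depths : (n_nodes,) horizon depths at the control points.
    sigma  : (n_nodes,) nominal depth uncertainties used as weights.
    n_keep : number of largest singular values to retain.
    """
    w = 1.0 / sigma
    Gw = G * w[:, None]   # row-weighted design matrix
    dw = depths * w       # weighted data

    U, s, Vt = np.linalg.svd(Gw, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:n_keep] = 1.0 / s[:n_keep]  # truncated pseudo-inverse

    coeffs = Vt.T @ (s_inv * (U.T @ dw))
    return coeffs

# The fitted coefficients would then be used to evaluate the spline surface
# on a regular margin-wide grid, once for each of the t, c and m horizons.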
Margin-Scale Slab Morphology The signature of the subduction stratigraphy can be traced along the forearc from Brooks Peninsula on northern Vancouver Island, across Vancouver Island, the Olympic Peninsula, the Willamette Valley in Washington and Oregon, into the Klamath Mountains and to Cape Mendocino in northern California (Figures S2-S50 in Supporting Information S1). The recovered velocities of the three model layers are consistent for neighboring stations (Figure S51 in Supporting Information S1). Slab morphology suggests a division into four segments: the Klamath, Central, Olympic and Vancouver Island segments (Figure 4). The Central segment, between 44° and 47°N, reveals the steepest dip, between 10° and 20°, and the overall deepest slab, with the t horizon located between 15 and 25 km depth along the coast and dipping to 35-45 km depth before losing expression in advance of the volcanic arc. The Central segment is flanked to the north and south by flatter segments. In the south, the Klamath segment, located between ∼40° and 44°N, displays a more shallowly dipping slab, a contorted t horizon beneath Cape Mendocino and a contorted m horizon along the landward projection of the Blanco Fracture Zone. The Olympic segment, located between 47° and 49°N, exhibits a shallowly dipping (0-5°) slab beneath the coastal region, and is delimited to the south by a steep downward bend in the t and m horizons near Grays Harbor and by a bend in the slab strike just north of the Juan de Fuca Strait. Along dip, the slab steepens as it approaches Puget Sound, where it begins to lose expression (Abers et al., 2009). The northernmost Vancouver Island segment is characterized by a moderately dipping slab. Near the northern terminus of subduction, north of Nootka Island, the t and c conversions appear disturbed. In summary, from north to south, the slab (a) dips gently and steepens downdip under Vancouver Island, (b) dips shallowly beneath the Olympic Peninsula, (c) steepens significantly beneath the Oregon Coastal Mountains, (d) subducts in a step-like fashion in front of the Klamath Mountains, and (e) becomes contorted in the Cape Mendocino area. A comparison with previous slab models is shown in Figure S52 in Supporting Information S1. Central Segment Across the Central segment, the slab has been imaged with seismological methods using data from the CASC'93 experiment, which comprised a temporary broadband array of ∼30 stations deployed across the Oregon forearc (Nabelek et al., 1993). It yielded the first dense receiver function studies targeting the subduction zone structure that clearly revealed subducting oceanic crust (Bostock et al., 2002; Rondenay et al., 2001; Tauzin et al., 2016). The comparison of our model with the teleseismic full-waveform tomogram of Kan et al. (2023) yields a consistent picture of the subduction stratigraphy (Figure 5). As in previous studies, Kan et al. (2023) image the subducting Juan de Fuca plate as a distinctive low-VS zone, which attains velocities as low as 3.3 km s−1. All three horizons parallel this structure, with t marking the top of the LVZ and c and m marking two steps in the gradual increase toward the high VS, of the order of 4.3 km s−1, characteristic of oceanic mantle. This structure has a very clear and characteristic expression in the receiver functions, which weakens near station XZ.A18, beneath the Willamette Valley, as in the tomogram. The entire stratigraphic sequence (t, c, m) brackets weak slab-related seismicity in the offshore area (Morton et al., 2023). It has a thickness of about 7 km near the coast and thickens arc-ward to about 13 km, with the two layers possessing comparable thickness.
Klamath Segment Beneath the Mendocino region, the subduction stratigraphy has been imaged as a moderately high-VP/VS zone (1.8-1.9, Guo et al., 2021) complemented by relatively abundant intraslab seismicity defining a tightly confined Wadati-Benioff zone (e.g., Waldhauser, 2009; Wang & Rogers, 1994; Figure 6). The t and m horizons encapsulate the seismically active, moderately high-VP/VS zone, with the m horizon in good agreement with the VP/VS = 1.7 contour. Where it projects beneath the Franciscan terrane, the high-VP/VS zone loses expression and the density of earthquakes diminishes (60 km from the coast in Figure 6a). Our slab model here indicates a generally shallower dip that steepens again under the Klamath terrane (100 km from the coast), where it indicates that a low-VP/VS anomaly is located within the subduction stratigraphy. Layer 1 is absent between the coast and the Franciscan terrane and attains a thickness of a few kilometers farther landward. Notably, no seismicity is located within Layer 1. The c horizon defining the base of Layer 1 approximately aligns with the location of LFEs (Plourde et al., 2015). The entire subduction stratigraphy has a fairly uniform thickness of 10 km. The receiver function slab signature is difficult to correlate laterally, presumably due to some combination of variation in overburden and slab properties (Figure 6b). Olympic Segment A profile along dip from the western end of the Olympic Peninsula, across the Juan de Fuca Strait, southern Vancouver Island, and into the Strait of Georgia reveals a flat-lying slab beneath the Olympic Peninsula that continues under the Juan de Fuca Strait and gradually steepens under southern Vancouver Island (Figure 7a). The t and m horizons encompass the moderately high-VP/VS zones previously interpreted as the subducting crust in local seismic tomograms (Merrill et al., 2020; Savard et al., 2018). Under the Olympic Peninsula, this zone is seismically active and m agrees well with the VP/VS = 1.7 contour. Beneath southern Vancouver Island, m bounds the top of seismic activity previously interpreted to occur within the subducting mantle (Savard et al., 2018). Layer 1 is absent or very thin beneath the Olympic Peninsula and attains a thickness of about 5 km beneath southern Vancouver Island, where it is aseismic. The c horizon is located 2-3 km above a prominent band of LFE locations (Armbruster et al., 2014; Savard et al., 2018). Tremor hypocenters (from Bombardier et al., 2023; see also Kao et al., 2005) scatter within and above the subduction stratigraphy. The complex overburden structure of the Olympic Peninsula hampers clear identification of c and m; however, correlations of seismic phases along strike and along dip yield a laterally coherent picture. Beneath southern Vancouver Island, the slab reveals a clear and simple receiver function signature that can be traced beneath the Gulf Islands in the Strait of Georgia and loses expression toward the British Columbia Lower Mainland (Figure 7b).
Vancouver Island Segment The Vancouver Island segment exhibits t and m horizons that bracket NE-dipping regions of elevated VP/VS evident in local seismic tomograms. The m horizon coincides with the VP/VS = 1.7 contour, which also bounds the top of seismicity that has been inferred to reside in the oceanic mantle (Figure 8a and Figures S3-S16 in Supporting Information S1; Merrill et al., 2022; Savard et al., 2018). c can best be seen as a pronounced and distinct horizon in southern and south-central Vancouver Island, where it lies 2-4 km underneath t and decisively above LFE locations (Savard et al., 2018). Toward north-central Vancouver Island, the subduction stratigraphy appears to thicken substantially downdip, from ∼8 km near the coast to ∼16 km inland. Layer 1 and Layer 2 contribute in equal part to the combined thickness. The c horizon generally follows the LFE locations (Savard et al., 2020). Substantial scatter in the station measurements and difficulties in reconciling phase correlations across closely spaced stations attest to the complex subsurface structures that are also evident in the local seismic tomography and may be related to the subduction of the Nootka Fault Zone at the northern terminus of JdF subduction (Figure 8b; Merrill et al., 2022; Savard et al., 2018). Interpretation of the Subduction Stratigraphy The combined thickness of the stratigraphic package comprising the t, c, and m horizons exceeds the nominal thickness of the incoming oceanic crust of ∼6.5 km almost everywhere, by 2-12 km (Figure 9a). A thickness of ∼7 km is resolved only along the southern Central segment, between ∼43° and 44°N. Model regularization may dampen slab complexity and smooth over interface steps on a ∼20 km scale (e.g., m in Figure 6a), but the excess thickness of the slab stratigraphy is a robust feature of the model and is almost always underpinned by individual point station measurements. Additional material, other than actively subducting igneous oceanic crust, must therefore make up the subduction stratigraphy. Layer 2 and the underlying mantle half-space, separated at m, were designed to correspond to igneous oceanic crust and pristine mantle. Where seismic velocities and seismicity images are available, the model appears to have captured this contrast appropriately, so that we confidently interpret m as the oceanic Moho. We cannot exclude the possibility that, where the plate is hydrated, m is biased into the oceanic mantle, lying deeper than the Moho. Signs of mantle hydration may be present under the Cape Mendocino coast and offshore northern Vancouver Island, suggested by a diffuse tomographic Moho, abundant mantle seismicity and the subduction of major fracture zones (Figures 6 and 8; e.g., Chaytor et al., 2004; Merrill et al., 2022; Rohr et al., 2018; Wilson, 1989). Such signatures are, however, not universally present. The excess thickness is more likely to develop above the plane of active subduction, that is, in Layer 1. Where the thickness of Layer 1 is substantial (i.e., from t to c; Figure 9b), the E-layer (or a reflective zone above the slab) has been detected in reflection seismic surveys (Figure 9b; Clowes et al., 1987a; Keach et al., 1989; Nedimović et al., 2003; Tréhu et al., 1994).
Nedimović et al. (2003) suggest that the emergence of the E-layer is related to the occurrence of ETS. The E-layer is typically thicker than Layer 1, which suggests that Layer 1 is part of the E-layer (Calvert et al., 2020). Within the tremor zone, defined by the 0.1 tremor yr−1 km−2 contour (Figures 9b-9d; downloaded from https://pnsn.org; Wech, 2010), the mean and median VP/VS in Layer 1 are 2.49 ± 0.14 (2σ) and 2.44. Outside the tremor zone VP/VS is lower, with a mean value of 2.28 ± 0.14 and a median value of 1.95 (Figures 9b, 10a, and 10b). A two-sample Kolmogorov-Smirnov test yields a p-value of 5 × 10−5, indicating that the distributions of Layer 1 VP/VS values from inside and outside the tremor zone are statistically different with >99% confidence. This suggests that the development of Layer 1 as a high-VP/VS ULVZ is related to tremor, in agreement with previous findings (Audet et al., 2009; Song et al., 2009). We interpret t in the tremor zone as the top of this ULVZ. Projecting the tremor epicenters (Wech, 2010) onto the t and c horizons yields tremor depths of 32 ± 10.8 and 38 ± 10.2 km (2σ), respectively (Figures 10c and 10d). Tremor depths are concentrated more tightly when projected onto the c horizon, suggesting that tremor occurs closer to the base of Layer 1 (Figure 9c). Inside the tremor zone, where Layer 1 corresponds to the ULVZ, c marks a stark material contrast against the underlying oceanic crust, and we interpret c as the base of the ULVZ. Between the coast and the tremor zone, except between 44° and 45°N, Layer 1 is typically thinner (Figure 9b) and its VP/VS is lower (Figures 9d and 10b), attaining normal values for basaltic material (∼1.8). Layers 1 and 2 still exhibit a combined thickness in excess of the incoming oceanic crust, with Layer 1 displaying properties that are nevertheless similar to oceanic crust. The t horizon is here the top of this excess volume. The c horizon here usually marks a less prominent material contrast than inside the tremor zone. It may seem natural to interpret c as the base of a possible sedimentary blanket above an underlying igneous oceanic crust (e.g., Delph et al., 2018), but we note that Layer 2 is frequently thicker than oceanic crust, hence the interpretation of c as the base of sediments is possible but not universal. Horizon c may alternatively represent a velocity gradient within a sedimentary layer or the base of altered material belonging to the overriding continental crust.
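The inside/outside-tremor-zone comparison of Layer 1 VP/VS reported above can be reproduced in outline with SciPy's two-sample Kolmogorov-Smirnov test. The arrays below are hypothetical placeholders for the per-station Layer 1 VP/VS values, drawn here from assumed normal distributions consistent with the quoted means; they are not the study's data.

import numpy as np
from scipy.stats import ks_2samp

# Hypothetical per-station Layer 1 VP/VS values, grouped by whether the
# station lies inside the 0.1 tremor yr^-1 km^-2 contour.
rng = np.random.default_rng(42)
vpvs_inside = rng.normal(loc=2.49, scale=0.07, size=60)
vpvs_outside = rng.normal(loc=2.28, scale=0.07, size=40)

stat, p_value = ks_2samp(vpvs_inside, vpvs_outside)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.1e}")
# A p-value well below 0.01 would indicate that the two distributions
# differ with >99% confidence, as reported in the text.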
Possible Controls on Slab Morphology The overall slab morphology exhibits a first-order correlation with the location of the static backstops in the Cascadia subduction system (Figure 11; Watt & Brothers, 2020). These are kinematic discontinuities related to distinct strength contrasts within the continental crust formed by accreted crystalline terranes. The most important terrane is Siletzia, a basaltic large igneous province that formed offshore as an oceanic plateau, possibly related to magmatism of the Yellowstone hotspot. It can be mapped along coastal Oregon, Washington and British Columbia (Wells et al., 2014). An associated aeromagnetic anomaly indicates that Siletzia is most voluminous in central and northern Oregon (Wells et al., 1998). Reflection seismic together with wide-angle seismic data and geomorphologic markers reveal that the base of Siletzia is up to 35 km deep and possibly extends down to the plate interface (Tréhu et al., 1994). This inference is substantiated by magnetotelluric data that image a voluminous resistive body, interpreted as representing Siletzia, that meets the plate interface in coastal Oregon (Egbert et al., 2022). Kinematically, the thickened Siletz terrane forms a distinct block that rotates clockwise with respect to stable North America (Wells et al., 1998) and displays the lowest interseismic vertical uplift rates along the entire forearc (Mitchell et al., 1994). Where the Siletz terrane recedes far inland on the eastern side of the Olympic Peninsula, giving way to the Olympic complex formed by underthrust marine sediments (e.g., Brandon & Calderwood, 1990), the slab lies shallower and flatter than anywhere else along the entire onshore forearc. Conversely, where the western boundary of the Siletz terrane is located offshore and Siletzia is thickest, the slab is deepest and has its steepest dip (Figure 11). This suggests that the competence and rigidity of the Siletz block force the descent of the Juan de Fuca slab. It has been suggested that the Kumano pluton influences the subducting Philippine Sea Plate in a similar manner below southwest Japan (Arnulf et al., 2022).
In between the shallowly dipping Olympic and steeply dipping Central segments, a pronounced southward downward bend in the slab is evident along a line extending between Grays Harbor and the southern end of Puget Sound. The bend can be seen in the raw receiver functions, where the timing of the PmS conversion increases, for example, from ∼3 to ∼4 s for rays arriving from NNW relative to those arriving from SSE azimuths at station US.NLWA, and again from 4 to 4.5 s just south of that at station UW.WISH (Figure 12). Perhaps significantly, the three largest intermediate-depth earthquakes in Cascadia, the 1949 M6.7 Olympia (Nuttli, 1952), 1965 M6.7 Puget Sound (Langston & Blum, 1977), and 2001 M6.8 Nisqually (Ichinose et al., 2004; Kao et al., 2008) earthquakes, occurred near the down-dip continuation of this bend, at depths at or immediately below those projected for the oceanic Moho. Along the Klamath segment to the south (south of 44°N), the slab structure is complex. The Gorda Plate, a relatively young and highly deformed plate (Chaytor et al., 2004; Wilson, 1989), encounters two static backstops, namely the western boundaries of the Franciscan complex and the Klamath terrane (Figure 11; Clarke, 1992; Watt & Brothers, 2020), and is bounded to the south by the Mendocino Fracture Zone. The southern, eastward-trending seaward boundary of the thickened Siletz terrane has a reduced impact on slab morphology, resulting in the southward transition to a more gently dipping slab (Figure 11). This geometry is interrupted by the emergence of the Klamath terrane, where the steepest dip of the slab is located near the coast, the slab bending behind the first (seaward) backstop and unbending beneath the second (landward) backstop. In the Cape Mendocino area, the slab top is contorted in a fashion that yields a flat-lying segment just behind the coast. Because of the generally lower dip in advance of the volcanic arc and its unbending beneath the southern Siletz and Klamath terranes, it appears as if the Gorda plate does not subduct as readily as the Juan de Fuca plate. A possible cause for this behavior is the increased buoyancy of the youngest subducting lithosphere (5-6 Ma at the trench, e.g., Wilson, 1993). Excess Thickness of Subduction Stratigraphy The nature and origin of the E-layer as a prominent element of the subduction zone stratigraphy that emerges abruptly along the dip trajectory in the vicinity of southern Vancouver Island is a long-standing conundrum in the understanding of the Cascadia subduction zone (e.g., Calvert, 1996; Calvert et al., 2011, 2020; Clowes et al., 1987a; Nedimović et al., 2003). Our data show a qualitative correlation between a thick Layer 1 and a thick (>4 km) E-layer where the latter has been imaged (Figure 9b). We also suggest that the reflective zone mapped by COCORP in central Oregon (Figure 9b; Keach et al., 1989) may manifest the presence of a structure of similar origin, since it also coincides with a thick Layer 1.
Assuming this association holds true along the entire margin, our data suggest that the E-layer is ubiquitous. Its abrupt emergence along dip is likewise reflected in our data: coastal stations tend to exhibit a thin or absent Layer 1, whereas inland stations generally possess a thick one (Figure 9b), consistent with previous inferences of Layer 1 thickening near the coastline from an amphibious receiver function study (Audet & Schaeffer, 2018). Interestingly, the combined (Layer 1 + Layer 2) thickness of the subduction stratigraphy does not obey the same trend. Places with a thin or absent Layer 1 may have an overall thick subduction stratigraphy (e.g., the coastal Olympic Peninsula and Cape Mendocino), and a significantly thick Layer 1 may correspond to a subduction stratigraphy that does not much exceed the thickness of the incoming oceanic crust (e.g., ∼7 km thickness between 43° and 44°N; Figure 13a). Sediments entering the subduction system may contribute to the subduction stratigraphy (e.g., Delph et al., 2018), but information about the amount of subducting sediment is scarce at the time of writing. Tréhu et al. (2012) interpret sediments subducting beneath Siletzia on two seismic lines near 45°N (circles in Figure 9a), but not on a third line closer to 44°N (cross in Figure 9a). Within the same latitude interval, the characteristic transition from thickened to normal subduction stratigraphy occurs, suggesting that these subducting sediments account for the extra thickness (Figure 13b). In contrast, Han et al. (2016) document no sediment subduction at the latitude of the Juan de Fuca Strait, where we image a thick (∼11 km) subduction stratigraphy. However, it is possible that sediment subduction occurred at the trench in the latter region 3 Ma ago and subsequently ceased. More data are required to conclusively define where sediment subduction contributes to subduction stratigraphy thickness. We note that Layer 1 emerges at around 30 km depth and gains thickness along the subduction trajectory, and that this thickness is unrelated to the thickness of the subduction stratigraphy updip of this depth (Figures 9a and 9b). This observation suggests that Layer 1 thickens in situ and develops a ULVZ through some depth-activated process. Elevated VP/VS > 2.0 (Figures 9b and 9d) suggests that the medium is fractured and saturated with pressurized fluids (Christensen, 1984), implying that it has lost structural integrity and strength. As a weak zone, the ULVZ is likely to host slip (e.g., Luo & Liu, 2021; Wech & Creager, 2011). LFE hypocenters are located near the base of the ULVZ (Figure 14; Calvert et al., 2020), suggesting that the plate boundary is located near c. Excess thickness may be due to underplating of subducting material, either of sediments atop the oceanic crust (e.g., Delph et al., 2018) or of the upper basaltic crust, which may lose structural integrity through wear (Figure 13c; see also, e.g., Calvert et al., 2020; Clowes et al., 1987a). Moderately high seismic velocities (VS > 3.2 and VP/VS < 1.9) indicated by our inverse modeling results for Layer 2 (Figure 2 and Figure S51 in Supporting Information S1) preclude the presence of pervasive fracturing and pressurized fluids (Christensen, 1984). Instead, the presence of slivers of oceanic crust, large enough not to reduce seismic velocities significantly, would be consistent with LFE occurrence inside Layer 2.
The subordinate slip represented by the LFEs during ETS episodes is consistent with the process of initiating detachment of the subducting oceanic crust at the LFE horizon (Figure 13c). Slow slip, which makes up the main share of the slip budget at depth (Bostock et al., 2015; Dragert et al., 2001, 2004; Kao et al., 2010), may well be located at or above c, that is, at the base of or inside the 4-10 km thick ULVZ. Subcretion and underplating are consistent with earlier inferences made for the onshore Cascadia forearc from a wealth of geophysical data. Calvert et al. (2011) interpret underplating of sediments as taking place south of Puget Sound. Calvert (2004) and Clowes et al. (1987a) inferred, based on high seismic velocities, that the E-layer constitutes underplated metabasaltic material beneath southern Vancouver Island. The correspondence of these inferred sites of underplating with the thick ULVZ detected here, and the widespread distribution of the ULVZ, suggest that underplating is occurring through the majority of the Cascadia forearc (e.g., Delph et al., 2021). Conclusion Receiver functions provide valuable insights into the subduction of the Juan de Fuca and Gorda plates in the Cascadia region. Based on previous studies of receiver-side forward- and back-scattered mode conversions, we parameterize the subduction stratigraphy with three horizons, t, c, and m. Mapping these horizons across the forearc reveals flatter slab segments beneath the Olympic Peninsula and Cape Mendocino, whereas central Oregon exhibits a steeply dipping slab. Below most of Vancouver Island, the slab is marked by modest dips (∼7°-12°). This slab morphology appears to be influenced by the mechanical strength and density of accreted crystalline terranes. A notable Moho step south of the Olympic Peninsula may relate to recurrent, large, intermediate-depth earthquakes beneath Puget Sound. In addition, the presence of a thick topmost layer in the subduction stratigraphy may indicate the widespread occurrence of the E-layer. Previous interpretations suggest that the E-layer represents underplated slab material, including both sediments and metabasalt, implying that underplating occurs through most of the Cascadia forearc. Figure 2. (a) Forearc stratigraphy with the previously identified interfaces. (b) Schematic radial receiver function with the forward- and back-scattered mode conversions used to constrain the model. Phases may interfere and cancel out in some cases. The absence of specific phase combinations may therefore be meaningful. Upper-case letters indicate upgoing rays, lower-case letters down-going rays, and subscripts the scattering interface. (c) Parameterization of the subsurface model. The possible presence of additional interfaces complicates the phase associations. Figure 3. Global search for subsurface parameters. (a) Receiver function data for station C8.TWBB. (b) Predicted data from the best fitting model with phase labels as in Figure 2b. (c) Local minima encountered in the global search for the 11 subsurface parameters (thickness against VP/VS in the left column and against VS in the right column) using a simulated annealing scheme, with the preferred solution marked with a green circle and nominal depth uncertainties with a gray bar. Note the presence of a local minimum. If such a minimum proved more consistent with external constraints and neighboring stations, the global search was repeated within bounds around that minimum. Figure 4.
Depth to the t, c, and m horizons.Top row: Data points by quality (black frames: A; white frames: B; not used for fitting the interface; and grayed out: C).Stations marked X do not show the respective interface and are interpreted as the location of the eclogitization front.Bottom row: modeled interfaces (Section 2.3) and profile locations (Figures 5-8). Figure 5 . Figure 5. (a) Profile A (Figure 4) with slab model and control points superimposed on the V S model of Kan et al. (2023) with seismicity from Morton et al. (2023).Comparison with the V P /V S image is shown in Figure S53 in Supporting Information S1.(b) Receiver function sections of individual stations sorted along the profile, with the receiver functions within each section sorted by the angular distance of the ray back azimuth from the profile azimuth (90°).1.5-20 s bandpass filter was applied.Phase labels correspond as in Figure 2. Figure 6 . Figure 6.Red octagon marks the intersection of the profile with the terrane backstop. Figure 8 . Figure 8.As Figure 5, but for profile D across Northern Vancouver Island.Tomogram and seismicity from Merrill et al. (2022).Low-frequency earthquake locations from Savard et al. (2020).Comparison with V S is shown in Figure S56 in Supporting Information S1.Receiver functions filtered between 2 and 20 s. Figure 9 . Figure 9. Select properties of slab stratigraphy.(a) Combined Layers 1 and 2 (t-to-m) thickness."O" mark places where sediment subduction has been detected on marine seismic surveys, "X" where sediment subduction is absent(Han et al., 2016;Tréhu et al., 2012).The thickness of the subduction stratigraphy exceeds the thickness of the igneous oceanic crust.(b) Layer 1 (t-to-c) thickness and tremor zone(Wech, 2010).Downdip thickening of Layer 1 correlates with tremor locations.(c) Depth to c horizon correlates closely with tremor occurrence (Figures10c and 10d).(d) V P /V S of Layer 1. Figure 10 . Figure 10.Properties of Layer 1 in relation to tremor (a, b) V P /V S of Layer 1 at stations (a) inside and (b) outside the tremor zone (0.1 tremor km −2 yr −1 contour; Figure 9) (c, d).Depth distribution of tremor epicenters projected onto the (c) t and (d) c horizons.Numbers at the base indicate the 5%, 50%, and 95% quantiles of the depth distribution. Figure 11 . Figure 11.(a) Dip and (b) depth of the t horizon.Static backstop (line with red octagons in panel (a)) and terrane boundaries (thick lines in panel (b)) modified after Watt and Brothers (2020).Shaded area enclosed by white dashed line represents thickened Siletz terrane detected in aeromagnetic data (after Wells et al. (1998)).The location of the terrane backstop correlates with and may exert a first order control on slab morphology. Figure 12 . Figure 12.Downwarped Moho from the Olympic Peninsula to Grays Harbor.(a) Map view with Moho depth contours as well as locations and receiver function ray back azimuths of stations shown on the right.Earthquake locations and focal mechanisms from International Seismological Centre (2023).(b) Radial receiver functions sorted by back azimuth.Rays arriving from NNW colored blue and from SSW colored gold.Note the southward down Moho-steps (P m S) at stations coincident with a thickening low velocity zone above at stations NLWA, WISH, WHGC, and RADR. Figure 13 . 
Figure 13.Possible subduction stratigraphies present in the Cascadia subduction zone.(a) Subduction of undisturbed oceanic crust (e.g., Central-South Oregon).(b) Sediment subduction, c may represent the base of the sedimentary later or a horizon within the sediments (e.g., Olympic Peninsula, Northern Oregon).(c) E-layer on top of the subducting crust.Low-frequency earthquake locations may indicate a detachment horizon at or below the base of the ultralow S-wave velocity zone.Low seismic velocities and in situ thickening above suggest ongoing underplating (e.g., Southern Vancouver Island). Figure 14 . Figure 14.Histograms of the depth of t, c, and m horizons relative to low-frequency earthquake (LFE) locations for different regions.Bin width is 2 km.LFEs are most closely located to the c horizon.For Vancouver Island, the data indicate that LFEs occur in Layer 2 between c and m.
9,200.8
2023-10-01T00:00:00.000
[ "Geology" ]
Rankclust: An R package for clustering multivariate partial rankings Rankcluster is the first R package proposing both modelling and clustering tools for ranking data, potentially multivariate and partial. Ranking data are modelled by the Insertion Sorting Rank (isr) model, which is a meaningful model parametrized by a central ranking and a dispersion parameter. A conditional independence assumption makes it possible to take multivariate rankings into account, and clustering is performed by means of mixtures of multivariate isr models. The clusters' parameters (central rankings and dispersion parameters) help practitioners interpret the clustering. Moreover, the Rankcluster package provides an estimation of the missing ranking positions when rankings are partial. After an overview of the mixture of multivariate isr models, the Rankcluster package is described and its use is illustrated through the analysis of two real datasets. Introduction Ranking data occur when a number of subjects are asked to rank a list of objects O 1 , . . ., O m according to their personal order of preference. The resulting ranking can be described by its ordering representation x = (x 1 , . . ., x m ) ∈ P m , which signifies that object O x h is ranked hth (h = 1, . . ., m), where P m is the set of the permutations of the first m integers. These data are of great interest in human activities involving preferences, attitudes or choices like Politics, Economics, Biology, Psychology, Marketing, etc. For instance, the single transferable vote system used in Ireland, Australia and New Zealand is based on preferential voting. Mixture of multivariate ISR model Starting from the assumption that a rank datum is the result of a sorting algorithm based on paired comparisons, and that the judge who ranks the objects uses the insertion sort because of its optimality properties, [1] state the following isr model: where µ ∈ P m is a location parameter and π ∈ [ 1 2 , 1] is a scale parameter. The numbers G(x, y, µ) and A(x, y) are respectively the number of good paired comparisons and the total number of paired comparisons of objects during the sorting process (see [1] for more details). Recently, [2] proposed a model-based clustering algorithm for multivariate rankings, i.e. when a datum is composed of several rankings, potentially partial (when some objects have not been ranked). For this, they extend the isr model by assuming that, given a group k, the components of a multivariate ranking are independent: where the model parameters θ = (π j k , µ j k , p k ) k=1,...,K, j=1,...,p are estimated by means of a SEM-Gibbs algorithm. The resulting algorithm is able to cluster ranking data sets with full and/or partial rankings, univariate or multivariate. To the best of our knowledge, this is the only clustering algorithm for ranking data with such a wide application scope. The Rankclust package This algorithm has been implemented in C++ and is available through the Rankclust package for R, available on the author webpage 1 and soon on the CRAN website 2 .
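For reference, a hedged sketch of the isr probability and of its mixture extension in the notation just introduced (the exact displayed forms, omitted above, should be checked against [1, 2]; the uniform 1/m! weighting of the presentation orders y is recalled from those references rather than stated in this text):

% isr probability of a ranking x, summing over the equiprobable presentation orders y
p(x; \mu, \pi) = \frac{1}{m!} \sum_{y \in P_m} \pi^{G(x,y,\mu)} \, (1-\pi)^{A(x,y)-G(x,y,\mu)}

% mixture of multivariate isr densities used for clustering, with mixing proportions p_k
p(x^1, \dots, x^p; \theta) = \sum_{k=1}^{K} p_k \prod_{j=1}^{p} p(x^j; \mu_k^j, \pi_k^j), \qquad \theta = (p_k, \mu_k^j, \pi_k^j)_{k=1,\dots,K;\; j=1,\dots,p}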
The main function rankclust() performs cluster analysis for multivariate rankings and is able to take partial rankings into account. This function has only one mandatory argument: data, which is a matrix composed of the observed ranks in their ordering representation. The user can specify the number of clusters (1 by default) they want to estimate, or provide a list of cluster numbers. In that case, the user can choose either the BIC or ICL criterion to select the best number of clusters from this list. The outputs of rankclust() are of several kinds: • the estimation of the model parameters as well as the 'distances' between the final estimation and the current value at each iteration of the SEM-Gibbs algorithm. These distances can be used as indicators of the estimation variability. • the estimated partition. Additionally, for each cluster, the probability and the entropy for all the cluster's members are given. This information helps the user in the interpretation of the clusters. • for each partial ranking, an estimation of the missing positions. Application The use of the Rankcluster package will be illustrated by the analysis of the European countries' votes at the Eurovision song contest from 2007 to 2012; a schematic call is sketched below.
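A minimal sketch of such a call, assuming the interface described above; only the data argument is documented as mandatory in this text, so the argument names K and criterion (for the list of cluster numbers and the BIC/ICL choice) and the ranks object are illustrative assumptions rather than verbatim package documentation:

library(Rankcluster)
# ranks: an n x m matrix of observed rankings in their ordering representation
# (each row gives, in position h, the object ranked hth by one subject)
res <- rankclust(data = ranks,      # the only mandatory argument described above
                 K = 1:4,           # assumed name for the list of cluster numbers to compare
                 criterion = "bic") # assumed name for the BIC/ICL model-selection choice
# res is expected to contain the estimated isr parameters, the estimated partition
# (with per-member probabilities and entropies), and estimates of the missing
# positions of partial rankings, as listed above.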
982.8
2013-01-01T00:00:00.000
[ "Computer Science", "Mathematics" ]
Microstructure and Nanoindentation Behavior of Ti 40 Zr 40 Ni 20 Quasicrystal Alloy by Casting and Rapid Solidification : A Ti 40 Zr 40 Ni 20 quasicrystal (QCs) rod and ribbons were prepared by conventional casting and rapid solidification. The X-ray diffractometry (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM) and differential scanning calorimeter (DSC) techniques were used to investigate the microtissue, phase composition, and solidification features of the samples; the nano-indentation test was carried out at room temperature. The results show that a mixture of the α -Ti(Zr) phase and the icosahedral quasicrystal (I-phase) was formed in the Ti 40 Zr 40 Ni 20 rod; the microstructure of Ti 40 Zr 40 Ni 20 ribbons mainly consisted of the I-phase. The solidification mechanism of the I-phase was different in the two alloys. The I-phase in the quasicrystalline rod was formed by packet reaction while in the ribbons it was generated directly from the liquid. At room temperature, both samples had relatively high hardness and elastic modulus; the elastic modulus of the ribbons is 76 GPa, higher than the 45 GPa of the rod. The hardness of the ribbons was more than twice that of the rod. Introduction Quasicrystals (QCs), a particular solid-state ordered phase, have a quasi-periodic translational order and amorphous symmetry [1][2][3][4]. Due to their distinctive atomic structure, QCs possess many desirable properties [2,3] such as low thermal conductivity [5,6], low surface energy [7], low friction coefficients [8,9], high hardness and high wear resistance [10,11], etc. These critical properties promote the application of QCs as thin films [10,12], coatings [13,14], and reinforced particles in many fields such as solar energy [13][14][15][16][17][18], automobile manufacturing, aerospace technologies, etc. Together with superconductors, quasicrystals are regarded as the two most significant advances in condensed matter physics in the 1980s, and are still regarded as a frontier discipline in this field [19]. Ti-based QCs containing a thermodynamically stable icosahedral phase (I-phase) have attractive properties such as high strength at elevated temperatures and low friction coefficients [20,21]. Furthermore, Ti-based QCs not only show the original characteristics of QCs but also show superior hydrogen storage capacity [21][22][23][24]. Therefore, as one of the hydrogen storage and negative electrode materials for nickel/metal hydride (Ni/MH) secondary batteries, Ti-based QCs have potential applications in both power generation and in environmental protection. However, there is little research on Ti-based QCs as the reinforcing phase of structural materials. Among Ti-based QCs, the TiZrNiQCs, first reported by Molokanov et al. [25], are strong candidates for practical applications due to their characteristics of easy formation, relatively large grain size and excellent performance, as mentioned in [23]. While the I-phase is thermodynamically stable [26], the I-phase material in the Ti-Zr-Ni system is hard to fabricate through conventional casting. Nevertheless, Qiang successfully prepared high-quality block quasicrystalline alloy with self-made split water-cooled cooper mold suction casting equipment [27]. Certainly, rapid solidification is the most common method to prepare a single I-phase [28]. Due to size problems with the prepared sample, it is difficult to characterize the hardness and elastic modulus of TiZrNiQCs. 
In this paper, Ti 40 Zr 40 Ni 20 QCs were prepared by the Cu-mold cooled blow casting and rapid solidification technique. In addition to thermal stability, the effect of structure on elastic modulus as well as on the hardness indexes of Ti 40 Zr 40 Ni 20 QCs alloys was investigated. Arc Melting The main raw materials (Ti 99.9, Zr 99.5 and Ni 99.9 mass% purity) were obtained from the following commercial sources: Beijing Yanbang New Material Technology Co. LTD (Bejing, China). Alloy ingots of Ti 40 Zr 40 Ni 20 (at%) were prepared by arc-melting under an argon atmosphere (Purity > 89.9%). The working voltage and current of the arc melting furnace were 70 V and 600 Amps. To ensure the uniformity of the ingots, each ingot was refined at least three times. Copper Blow Casting From these ingots, the Ti 40 Zr 40 Ni 20 alloy rod with Φ4 mm was prepared using copper mold blow-casting equipment (Shenyang Haosiduo New Material Preparation Co., LTD, Shenyang, China). The small pieces (10 g) of the chopped alloy ingot were put into a quartz tube with a nozzle diameter of 0.5 mm, and the heating power was gradually increased to 28 kW, which was maintained for 1 min and then reduced to 20 kW for the blow casting experiment under an argon atmosphere (Purity > 89.9%). Then, the pressure regulator valve was opened and the pressure difference used to spray the molten alloy liquid into the cylindrical cavity of the water-cooled copper mold to solidify. The cooling rate near the bottom of the copper mold was higher than that at the top, with a maximum over 10 3 K/s. The rod samples were subjected to annealing experiments in vacuum annealing furnaces. Rapid Quenching Metallic ribbons with 40-110 µm thickness and 2-4 cm in length were prepared by rapid solidification of the melt on a single copper roller with 2500 rotations per minute. The alloy block (10 g) was placed into a quartz pipe with the nozzle shape of a flat port. In an environment of high pure argon protection (Purity > 89.9%), an alloy block was melted into liquid by magnetic induction smelting heating. The heating power was gradually increased to 33 kW, which was maintained for 1 min and then reduced to 30 kW for the rapid quenching experiment. The molten liquid was sprayed onto the surface of a rapidly rolling copper roller with a diameter of 200 mm using gas pressure differences. Because the cooling speed can reach an order of 10 4 -10 6 K/s, this process required precise timing. Analysis and Detection Methods The microstructure and rotational symmetry of the rod and ribbon samples were examined by X-ray diffractometry (XRD-6000, Shimadzu, Japan) with monochromatic Cu Kα radiation (λ = 0.1542 nm) and a JEOL JEM-2010 transmission electron microscopy (TEM, JEOL, Tokyo Metropolitan, Japan). Specimens for TEM observation were prepared by the ion polishing technique. The morphology and composition of the samples were characterized using scanning electron microscopy (SEM, TESCAN, ORSAY HOLDING, Brno, Czech Republic). To analyze the components of the specimen, an energy dispersive spectrometer (EDS, TESCAN, ORSAY HOLDING, Brno, Czech Republic) was used. Hydrofluoric acid and nitric acid aqueous solution (10 mL water + 8 mL HNO 3 + 3 mL HF) was used as an etchant on the polished samples. The phase transformation was investigated by a high-temperature differential scanning calorimeter (TGA/DSC1, METTLER, Toledo, Switzerland). 
The DSC curves were obtained by placing about 60 mg of sample in an open ceramic crucible at a heating rate of 10 K/min in an argon flow of 50 mL/min. The nanoindentation test was conducted using the Bruker Hysitron TI Premier system for samples with load levels ranging from 1 mN to 10 mN. The measured hardness and modulus were the average values of eight points. Structure Characterization and Morphology The X-ray diffraction patterns of Ti-40at.%Zr-20at.%Ni alloy prepared by blow-casting and melt-spinning tools are shown in Figure 1. A hump-like region appeared in all the XRD patterns, which is a result of the XRD sample holder. It can be seen from Figure 1a that the XRD pattern reveals the presence of both icosahedral quasicrystals (I-phase, indexed according to the Cahn scheme [29]) and the α-solid solution phase. The sharp peaks of the icosahedral quasicrystalline phase are around 35-39°, and the strongest diffraction peaks of the α-solid solution phase are identified at 34°, 40° and 41°. The weak peaks of the α-solid solution phase in the XRD measurements (Figure 1a) might be attributed to small grain size or its low content in the alloys. The XRD pattern of the ribbons (as shown in Figure 1b) is indexed with only the I-phase. The lattice constant of the I-phase of both samples was found to be 0.508 nm, conforming to the quasilattice value of this alloy [27], and the lattice constants of the α-solid solution phase were calculated as a = 0.311 nm and c = 0.493 nm, respectively. To further confirm the existence of the I-phase, the ribbons were investigated by TEM, as shown by a bright-field electron micrograph from the central zone of the Ti40Zr40Ni20 rod in Figure 3a.
The corresponding SAED patterns show the characteristic two-fold and five-fold symmetry of the I-phase at different tilts (shown in Figure 3c,d), corresponding to the X-ray diffraction peaks as indexed (shown in Figure 1a); this demonstrates that the samples maintained a quasicrystal structure. The TEM micrograph of the Ti40Zr40Ni20 ribbons, as shown in Figure 3b, reveals the formation of I-phase grains with a chemical composition of Ti41±0.1Zr26±0.2Ni32±0.3 and sizes ranging from 64 to 184 nm. The representative SAED pattern of the I-phase reveals more distorted two-fold and five-fold symmetry in Figure 3e,f. Significantly, the grain size of the ribbons is smaller than that of the rod. This is due to the faster cooling of the single-roll ribbon, which gives a greater nucleation rate and smaller grains. No O element was observed in the EDS patterns, indicating that bulk oxidation of the rod sample and ribbons was negligible, and further confirming that the vacuum used to produce the samples was sufficient. The alloy composition was found to be close to the nominal stoichiometry. The EDS analysis results of the I-phase (C) of the Ti40Zr40Ni20 ribbons in Figure 3b are shown in Figure 3g. Via image processing using Image-Pro Plus, we were able to quantitatively determine the relative volume fractions of the phases. Although the rod alloy was not a single IQC phase, the IQC formed more than 84.5% of the volume, whereas the other phases formed nearly 15.5%, confirming that the two alloys fabricated by blow-casting and rapid solidification were both quasicrystal alloys. DSC Analysis Subsequently, thermal analysis tests were carried out on the Ti40Zr40Ni20 rod and ribbons.
Figure 4 exhibits the DSC curves of the two specimens at a heating rate of 10 K/min. It can be seen from the DSC curve that there are three endothermic events (867 K, 950 K and 1109 K) for the rod specimen shown in Figure 4a. According to the phase diagram [30] of Ti-Zr-Ni and Qiang et al. [31], during the continuous heating process of the Ti40Zr40Ni20 rod an isomorphic transition between the α-solid solution and the β-solid solution occurs first near 867 K, then the I-phase begins to decompose into the C14 phase and the β-solid solution at 950 K. Eventually, near 1109 K, the whole structure melts to produce a liquid phase. Based on the XRD data, the rod is composed of the I-phase and the α-solid solution phase at room temperature. Two endothermic peaks occurred (867 K, 950 K) during the DSC testing process, which is consistent with the two phases identified in the XRD pattern. In order to further confirm the formation of the C14 phase, the Ti40Zr40Ni20 rod was vacuum annealed at 1020 K for 2 h, followed by furnace cooling. The X-ray diffraction spectrum of the annealed sample is shown in Figure 5a; the quasicrystalline phase disappears, the peak intensity of the α-solid solution phase increases, and the C14 phase appears. The results of the backscattered electron image and energy spectrum analysis of the annealed sample are shown in Figure 5b. The gray phase (D) is the C14 phase (Ti39±0.2Zr33±0.4Ni27±0.3), with the MgZn2 structure type [30,32]. The phase transitions near each temperature are shown in Table 1. For Ti40Zr40Ni20 ribbons, a sharp endothermic event is observed between 1073-1123 K on the DSC curve of the specimens in Figure 4b. The I-phase is generated directly from the liquid at around 1103 K, bypassing the C14 phase and β-solid solution. Therefore, the formation mechanism of the quasicrystal phase is different in the two solidification
modes, with rapid solidification able to obtain a more thermodynamically stable quasicrystalline alloy than blow casting. Table 1. The phase transition process corresponding to each heat absorption peak. Rod: I → β + C14 phase (≈950 K); β + C14 phase → Liquid (≈1109 K). Ribbons: I → Liquid (≈1103 K). Nanoindentation Study The representative displacement (h)-load (P) curves for the two materials were studied. The results are given in Figure 6a; the curves of the rod alloy at different peak loads (1, 5 and 10 mN) are curves 1, 2 and 3, respectively. The hardness values measured by nanoindentation are 2.3, 2.26 and 2.0 GPa, respectively. The residual indentation depth from 96 to 348 nm can be clearly seen, and the hardness decreases from 2.3 GPa to 2 GPa, which is due to a size effect in the sample during the nanoindentation process, such that the hardness drops as the indentation depth increases [33]. It can also be observed that there is a creep plateau during the holding time, and that the elastic recovery phenomenon occurs during the unloading process (shown in the unloading curve) on each P-h curve. This phenomenon indicates that the alloy can undergo creep deformation at room temperature. The creep displacement curves of the rod specimen were obtained at different peak loads during the holding-load period (Figure 6b). It can be seen that the nanoindentation creep process shows transient creep in the early stage of the holding time, as well as steady-state creep. Furthermore, creep displacement increases with the rise of peak load, and ranges from 8 nm (at 1 mN) to 43 nm (at 10 mN).
The nanoindentation creep behaviour of the test alloys corresponds to the results of Li and Zheng et al., indicating that the creep displacement of the alloy is closely related to the peak load [33,34]. The P-h curves for the two materials with a load limit of 10 mN are also shown in Figure 7. The residual indentation depth is around 350 nm for the rod alloy and 176 nm for the ribbons, respectively. The hardness value measured by nanoindentation was 2.0 GPa for the rod, much less than the 6.2 GPa of the ribbons. The hardness was similar to that of Ti(Zr)-based amorphous alloys [35,36], which is 1.5 times the hardness value of common Ti alloys [37]. In comparing the hardness value with micron-sized quasicrystals in other systems, we find that the average hardness value is less than the reported hardness (9 GPa) of Al-based QCs [38]. In addition, the experimental value of the elastic modulus of the rod sample was 45 GPa, which is slightly higher than the 43 GPa and the 21.4 GPa reported by Qiang and Zhao [39,40]. Moreover, there are few studies on the elastic modulus of such ribbons; the value measured in the nanoindentation experiment was 76 GPa. Compared with the Ti40Zr40Ni20 rod, the elastic modulus of the ribbons increased by 68.9%. The elastic modulus of a material is affected by the bond strength between atoms, ions or molecules, the crystal structure and the microstructure, etc. [41]. The microstructures of the rod and ribbons were obviously different; in the ribbons there was no solid solution, and the content of the quasicrystalline phase increased. This not only affects the bonding strength of the overall material but also leads to changes in the local crystal structure and free volume in the alloy [42][43][44][45]. The above factors may be responsible for the differences in elastic modulus. This indicates that the increase in the I-phase is not detrimental to improving the elastic modulus of the alloy.
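As a quick arithmetic check of the comparisons above, using only the values quoted in the text (an illustrative R sketch, not part of the original analysis):

# elastic moduli (GPa) and hardness values (GPa) at the 10 mN load limit, as quoted above
E_rod <- 45; E_ribbon <- 76
H_rod <- 2.0; H_ribbon <- 6.2
(E_ribbon - E_rod) / E_rod * 100  # ~68.9, the quoted percentage increase in elastic modulus
H_ribbon / H_rod                  # ~3.1, i.e., the ribbons are more than twice as hard as the rod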
Conclusions In summary, we successfully synthesized quasicrystalline Ti40Zr40Ni20 alloy via the common blow casting and rapid solidification techniques, with the I-phase as the dominant phase. It was found that the product in the Ti40Zr40Ni20 quasicrystal rod mainly contained 84.5% icosahedral quasicrystal, as well as 15.5% α-Ti(Zr) phase. The Ti40Zr40Ni20 quasicrystal ribbons were mainly composed of a single I-phase, with grain sizes reaching 64-184 nm. Additionally, the solidification mechanisms and decomposition temperatures of the quasicrystals were different during the two preparation processes. The quasicrystal phase of the Ti40Zr40Ni20 rod was generated by the peritectoid reaction of the C14 phase and β-solid solution, while in the Ti40Zr40Ni20 ribbons the I-phase was generated directly from the liquid; the decomposition temperatures of the rod and ribbon quasicrystals were 950 K and 1109 K, respectively. This shows that the rapid solidification method can obtain a more thermodynamically stable quasicrystal alloy. The mean room-temperature hardness of the quasicrystalline ribbons was around 6.2 GPa, considerably greater than that of the rod (2 GPa) at 10 mN. The mean elastic modulus of the ribbons was about 76 GPa, whereas that of the rod was about 45 GPa. Compared with the rod alloy, the hardness of the ribbons was more than twice as high, and the elastic modulus increased by 68.9%. Acknowledgments: The authors thank all those who contributed to this article and the teachers who provided the test analysis. Conflicts of Interest: The authors declare no conflict of interest.
6,493.2
2021-09-30T00:00:00.000
[ "Materials Science" ]
New Family of Centers of Planar Polynomial Differential Systems of Arbitrary Even Degree The problem of distinguishing between a focus and a center is one of the classical problems in the qualitative theory of planar differential systems. In this paper, we provide a new family of centers of polynomial differential systems of arbitrary even degree. Moreover, we classify the global phase portraits in the Poincaré disc of the centers of this family having degree 2, 4, and 6. Introduction and Statement of the Main Results Let P (x, y) and Q(x, y) be two real polynomials. In this work, we deal with polynomial differential systems in R 2 of the forṁ x = P (x, y),ẏ = Q(x, y), (1) where the dot denotes derivative with respect to an independent real variable t, usually called the time. The degree of the polynomial differential system (1) is the maximum of the degrees of the polynomials P (x, y) and Q(x, y). The origin O = (0, 0) of R 2 is a singular point for system (1) if P (0, 0) = Q(0, 0) = 0. When all the orbits of system (1) in a neighborhood U \ {O} of the singular point O are periodic, we say that the origin O is a center. If all the orbits of system (1) in a neighborhood U \ {O} of the singular point O spiral to O when t → +∞ or when t → −∞, we say that the origin is a focus. The center-focus problem consists in distinguishing when the singular point O is either a center or a focus. The center-focus problem started with Poincaré [13] and Dulac [1], and in the present days many questions about them remain open. More recent results on the center-focus problem can be found in [3][4][5][6][7]9] and in their references. In this paper, we consider the planar polynomial differential systems of the forṁ of degree 2k depending of k parameters r i for i = 1, 2, . . . , k such that 0 < r 1 < r 2 < · · · < r k . We denote the vector field of this system by X. It is easy to show that the function satisfies the equality ∂V ∂xẋ Therefore V is an inverse integrating factor of system (for more details see for instance [2].) By multiplying the vector field X by the integrating factor, 1/V system (2) becomes a Hamiltonian system. If we compute the Hamiltonian H of that system for k = 1 we obtain H (x, y) = e −2x (x 2 + y 2 − r 2 1 ), for k = 2 we have and for k = 3 we get An important property of systems (2), which will help for characterizing their phase portraits, is that all the circles f i (x, y) = x 2 + y 2 − r 2 i = 0 for i = 1, 2, . . . , k are invariant algebraic curves of system (2), i.e., they are formed by orbits of systems (2), because they satisfy that where K i is the polynomial 2y k j =1,j =i (x 2 + y 2 − r 2 j ) (see Chapter 8 of [2] for additional information on the invariant algebraic curves.) In this paper, we prove that polynomial differential systems (2) provide a new family of centers of degree 2k for all k = 1, 2, . . .. Moreover, we classify the global phase portraits of systems (2) in the Poincaré disc for k = 1, 2, 3. Theorem 1 For k = 1, 2, ... the polynomial differential systems (2) have a unique singular point in the interior of the circle x 2 + y 2 = r 2 1 and this singular point is a center. Theorem 1 is proved in Section 3. Theorem 2 For k = 1 the polynomial differential systems (2) have a phase portrait in the Poincaré disc topologically equivalent to the phase portrait of Fig. 1. Theorem 3 For k = 2 the polynomial differential systems (2) have a phase portrait in the Poincaré disc topologically equivalent to one of the three phase portraits of Fig. 3. 
Theorem 4 For k = 3 the polynomial differential systems (2) have a phase portrait in the Poincaré disc topologically equivalent to one of the seven phase portraits of Fig. 5. In Section 2, we recall basic definitions and results for proving our theorems. Poincaré Compactification In this section, we summarize some basic results about the Poincaré compactification, which was done by Poincaré in [13]. He provided a tool for studying the behavior of a planar polynomial differential system near the infinity. (For more details on the Poincaré compactification, see Chapter 5 of [2].) be a polynomial vector field of degree d. We consider the Poincaré sphere S 2 = {y = (y 1 , y 2 , y 3 ) ∈ R 3 : y 2 1 + y 2 2 + y 2 3 = 1}; its tangent plane to the point (0, 0, 1) is identified with R 2 . Now we consider the central projection f : R 2 → S 2 of the vector field X, which sends every point x ∈ R 2 to the two intersection points of the straight line passing through the point x and the origin of coordinates with the sphere S 2 . We note that the equator S 1 = {y ∈ S 2 : y 3 = 0} of the sphere is in bijection with the infinity of R 2 . The differential Df sends the vector field X on R 2 into a vector field X defined on S 2 \ S 1 , which is formed by two symmetric copies of X with respect to the origin of coordinates. We can extend the vector field X analytically to a vector field on S 2 multiplying X by y d 3 . This new vector field is denoted by p(X) and it is called the Poincaré compactification of the polynomial vector field X on R 2 . The dynamics of p(X) near S 1 corresponds with the dynamics of X in the neighborhood of the infinity. Since S 2 is a curved surface, for working with the vector field p(X) on S 2 , we need the expressions of this vector field in the local charts (U i , φ i ) and (V i , ψ i ), where U i = {y ∈ S 2 : y i > 0}, V i = {y ∈ S 2 : y i < 0}, φ i : U i −→ R 2 and ψ i : V i −→ R 2 for i = 1, 2, 3, with φ i (y) = −ψ i (y) = (y m /y i , y n /y i ) for m < n and m, n = i. In the local chart (U 1 , φ 1 ), the expression of p(X) iṡ In (U 2 , φ 2 ), the expression of p(X) iṡ The expressions for p(X) in the local chart The points of S 1 in any local chart have its v coordinate equal to zero. We note that the equator S 1 is invariant by the vector field p(X). The infinite singular points of X are the singular points of p(X) which lie in S 1 . Note that if y ∈ S 1 is an infinite singular point, then −y is also an infinite singular point and these two points have the same stability if the degree of vector field is odd. Such stability change to the opposite if the degree of the vector field is even. The image of the northern hemisphere of S 2 onto the plane y 3 = 0 under the projection π(y 1 , y 2 , y 3 ) = (y 1 , y 2 ) is called the Poincaré disc which is denoted by D. The integral curves of S 2 are symmetric with respect to the origin, therefore it is sufficient to investigate the flow of p(X) only in the closed northern hemisphere. In order to draw the phase portrait on the Poincaré disc, it is needed to project by π the phase portrait of p(X) on the northern hemisphere of S 2 . We note that the points (u, 0) are the points at infinity in the local charts U i and V i with i = 1, 2. Moreover, we remark that for studying the infinite singularities it is sufficient to study them on the local chart U 1 , and to check if the origin of the local chart U 2 is or not a singularity. Topological Equivalence of Two Polynomial Vector Fields Let X 1 and X 2 be two polynomial vector fields on R 2 . 
We say that they are topologically equivalent if there exists a homeomorphism on the Poincaré disc D which preserves the infinity S 1 and sends the orbits of π(p(X 1 )) to orbits of π(p(X 2 )), preserving or reversing the orientation of all the orbits. A separatrix of the Poincaré compactification π(p(X)) is one of following orbits: all the orbits at the infinity S 1 , the finite singular points, the limit cycles, and the two orbits at the boundary of a hyperbolic sector at a finite or an infinite singular point (see for more details on the separatrices [8,10]). The set of all separatrices of π(p(X)), which we denote by X , is a closed set (see [10]). A canonical region of π(p(X)) is an open connected component of D \ X . The union of the set X with an orbit of each canonical region form the separatrix configuration of π(p(X)) and is denoted by X . We denote the number of separatrices of a phase portrait in the Poincaré disc by S, and its number of canonical regions by R. Two separatrix configurations X 1 and X 2 are topologically equivalent if there is a homeomorphism h : According to the following theorem which was proved by Markus [8], Neumann [10] and Peixoto [11], it is sufficient to investigate the separatrix configuration of a polynomial differential system, for determining its global phase portrait. Theorem 5 Two Poincaré compactified polynomial vector fields π(p(X 1 )) and π(p(X 2 )) with finitely many separatrices are topologically equivalent if and only if their separatrix configurations X 1 and X 2 are topologically equivalent. Proof of Theorem 1 It is easy to see that all singular points of system (2) and these imply that the function f (x) for all k has at least one zero in the interval (−r 1 , r 1 ). If k is even and x ∈ (0, r 1 ), then f (x) is strictly decreasing and, if x ∈ (−r 1 , 0) then f (x) is positive. Hence for k even, the equation f (x) = 0 has only one root in the interval (−r 1 , r 1 ). By similar argument, we can easily show that if k is odd, then the equation f (x) = 0 has exactly one root in the interval (−r 1 , r 1 ). Thus, system (2) has a unique singular point inside the circle x 2 + y 2 = r 2 1 . The Jacobian matrix of the system at any singular point (x, 0) is as follows tr and det represent the trace and determinant of a matrix, respectively. Let (x, 0) be the singular point inside the disc of radius r 1 . If k is even, then x ∈ (0, r 1 ) and det(M) > 0. If k is odd, then x ∈ (−r 1 , 0) and det(M) > 0. Hence, the singular point (x, 0) is either a focus or a center, because its eigenvalues are purely imaginary. Since system (2) Infinite Singular Points Here, we study the infinite singular points of system (2) for all k. The Poincaré compactifi- It is obvious that there is no singular point in this local chart. The expression for p(X) in the local chart (U 2 , φ 2 ) has the forṁ So the unique infinite singular point in U 2 is the origin which is a hyperbolic stable node. Since the degree of system (2) is even, the origin of the chart V 2 is a hyperbolic unstable node. Proofs of Theorems 2, 3, and 4 In general system (2) has two important properties that we use for drawing its phase portrait. These properties are: (i) Since system (2) has the inverse integrating factor (3), its corresponding first integral is defined in the whole plane except perhaps on the circles x 2 + y 2 = r 2 i . Therefore system (2) cannot have any focus as a singular point. (ii) System (2) is invariant by the change (x, y, t) → (x, −y, −t). 
Thus, the phase portrait of this system is symmetric with respect to the x-axis. Proof of Theorem 2 System (2) with k = 1 has the two finite singular points P ± = ⎛ ⎜ ⎝ The Jacobian matrix at the point P ± is Therefore P + is a hyperbolic saddle, and P − is a center. By using the symmetry (x, y, t) → (x, −y, −t), the first integral (4), and the result of Section 4.1, it follows that the global phase portrait of system (2) for k = 1 in the Poincaré disc is topologically equivalent to the phase portrait of Fig. 1. Proof of Theorem 3 For finding the finite singular points (x, y) of system (2) with k = 2, we must take y = 0, and find the real zeros of the equation f (x) = −x + (x 2 − r 2 1 )(x 2 − Fig. 1 The phase portrait in the Poincaré disc of system (2) for k = 1 r 2 2 ) = 0. In other words, it is enough to find the fixed points of the polynomial g(x) = (x 2 − r 2 1 )(x 2 − r 2 2 ). Since g(0) = r 2 1 r 2 2 > 0 and function g has four real roots ±r 1 , ±r 2 , we have exactly one of the following three cases: (i) f has two simple positive roots (see Fig. 2a). (ii) f has one double negative and two simple positive roots (see Fig. 2b). (iii) f has two simple negative and two simple positive roots (see Fig. 2c). The Jacobian matrix at every singular point (x, 0) of system (2) with k = 2 is It is easy to see that tr M = 0 and det M = −f (x). In case (i) the polynomial f (x) has only two simple positive roots x = a and x = b satisfying 0 < a < r 1 < r 2 < b, −f (a) > 0 and −f (b) < 0 (see Fig. 2a). Therefore in this case system (2) with k = 2 has two singular points (a, 0) and (b, 0), which are a center and a hyperbolic saddle, respectively. For in case (ii) the polynomial f (x) has one negative double root x = a, and two simple positive roots Fig. 2b). Thus in this case, system (2) with k = 2 has three singular points (a, 0), (b, 0), and (c, 0), where (a, 0) is a nilpotent singular point, and (b, 0) and (c, 0) are a center and a hyperbolic saddle, respectively. Here, for determining the local phase portrait of the nilpotent singular point (a, 0), we use the index theory. Based on the Poincaré-Hopf theorem, for every vector field on S 2 with finitely many singular points, the sum of their (topological) indices is two (see for instance [2]). By applying this theorem to the Poincaré sphere with the Poincaré compactification of our Fig. 2 The graphics for all different cases of fixed points of g(x) when k = 2 system, it is easy to see that the index of the singular point (a, 0) is zero. Since the flow of a Hamiltonian system preserve the area, and the unique nilpotent singular points with index zero are the saddle-nodes and the cusps (see Theorem 3.5 of [2]), it follows that the singular point (a, 0) is a cusp. Hence, by using the symmetry (x, y, t) → (x, −y, −t), the first integral (5), and that at infinity, we have a pair of nodes at the origins of the local charts U 2 and V 2 , the first stable and the second unstable (see Section 4.1), it follows that the global phase portrait of system (2) for k = 2 for each of three cases (i), (ii), and (iii) in the Poincaré disc is topologically equivalent to the one of the phase portrait (a), (b), and (c) of Fig. 3, respectively. Proof of Theorem 4 Similar to the proof of Theorem 3, for finding the finite singular points (x, y) of system (2) with k = 3, we must have y = 0 and x must be a real zero of the equation Hence, it is enough to find Fig. 
3 The phase portraits in the Poincaré disc of system (2) for k = 2 the fixed points of the polynomial function g(x) = (x 2 − r 2 1 )(x 2 − r 2 2 )(x 2 − r 2 3 ). Since g(0) = −r 2 1 r 2 2 r 2 3 < 0 and the polynomial g(x) has six real roots ±r 1 , ±r 2 and ±r 3 , we have exactly one of the following nine cases for the roots of the polynomial f (x). (i) One simple negative and one simple positive roots (see Fig. 4a). (ii) One simple negative, one double positive, and one simple positive roots (see Fig. 4b). (iii) One simple negative and three simple positive roots (see Fig. 4c). (iv) Three simple negative and three simple positive roots (see Fig. 4d). (v) Three simple negative and one simple positive roots (see Fig. 4e). (vi) One double negative, one simple negative, and one simple positive roots (see Fig. 4f). (vii) One double negative, one simple negative, one double positive, and one simple positive roots (see Fig. 4g). (viii) One double negative, one simple negative, and three simple positive roots (see Fig. 4h). (ix) Three simple negative, one double positive, and one simple positive roots (see Fig. 4i). The three invariant algebraic curves x 2 + y 2 = r 2 i for i = 1, 2, 3, play an important role in drawing the phase portraits for system (2) with k = 3. Actually, if there is one singular point inside and one singular point outside of an invariant algebraic curve, then these two singular points do not have any connection, i.e., there are no orbits going from one to the other. Case (i): In this case, we have two singular points (a, 0) and (b, 0) where −r 3 < −r 2 < −r 1 < a < 0 < r 1 < r 2 < r 3 < b. By computing the Jacobian matrix in each singular point, we can conclude that (a, 0) is a center and (b, 0) is a hyperbolic saddle. The symmetry (x, y, t) → (x, −y, −t) and the first integral (6) together with the result of Section 4.1 force to the system to have a phase portrait topologically equivalent to the phase portrait of Fig. 5a. Case (ii): Then system (2) has three singular points (a, 0), (b, 0), and (c, 0), where −r 3 < −r 2 < −r 1 < a < 0 < r 1 < b < r 2 < r 3 < c. Again, by computing the Jacobian matrix in each singular point we have (a, 0) is a center and (c, 0) is a hyperbolic saddle. Using the index theory as it is done in case (ii) for k = 2, we can conclude that (b, 0) is a nilpotent cusp. Since the cusp (b, 0) is the only singular point between the two invariant algebraic curves x 2 + y 2 = r 2 1 and x 2 + y 2 = r 2 2 , it implies the existence of a cuspidal loop which surrounds the center (a, 0) and the invariant algebraic curve x 2 + y 2 = r 2 1 . By using the symmetry (x, y, t) → (x, −y, −t) and the first integral (6), we also obtain a homoclinic loop passing through (c, 0) and surrounding all the finite singular points and all the three invariant algebraic curves. By taking into account the result of Section 4.1, the phase portrait of system (2) in this case is topologically equivalent to the phase portrait of Fig. 5b. Case (iii): Then the system has four singular points (a, 0), (b, 0), (c, 0), and (d, 0), where −r 3 < −r 2 < −r 1 < a < 0 < r 1 < b < c < r 2 < r 3 < d. By computing the Jacobian matrix in each of these singular points, we have that (a, 0) and (c, 0) are centers, and (b, 0) and (d, 0) are hyperbolic saddles. Due to the fact that the singular point (b, 0) is located between the two centers (a, 0) and (c, 0), and inside the invariant algebraic curve x 2 + y 2 = r 2 2 , the point (b, 0) is a saddle having two homoclinic loops. 
The left homoclinic loop surrounds the center (a, 0) and the invariant algebraic (d, 0) is a hyperbolic saddle on the right side of r 3 . Two of its separatrics form a homoclinic loop which surrounds the other three singular points and the three algebraic invariant circles. The symmetry (x, y, t) → (x, −y, −t), the first integral (6) and the result of Section 4.1 show that the phase portrait of system (2) in this case is topologically equivalent to the phase portrait of Fig. 5c. Case (iv): We have six singular points (a, 0), (b, 0), (c, 0), (d, 0), (e, 0), and (f, 0), where −r 3 < a < b < −r 2 < −r 1 < c < 0 < r 1 < d < e < r 2 < r 3 < f . The singular points (a, 0), (c, 0), and (e, 0) are centers and (b, 0), (d, 0), and (f, 0) are hyperbolic saddles. With a similar discussion as to the one of the previous case, we obtain that the phase portrait of the system is topologically equivalent to the phase portrait of Fig. 5d. Case (v): Doing a similar discussion to the case (iii), we obtain that the phase portrait of the system is topologically equivalent to the phase portrait of Fig. 5c. Case (vi): Working in a similar way to the case (ii), we obtain that the phase portrait of the system is topologically equivalent to the phase portrait of Fig. 5b. Case (vii): We have four singular points (a, 0), (b, 0), (c, 0), and (d, 0), where −r 3 < a < −r 2 < −r 1 < b < 0 < r 1 < c < r 2 < r 3 < d. By obtaining the Jacobian matrix in each singular point, we get that (b, 0) is a center and (d, 0) is a hyperbolic saddle. By using the index theory and Corollary 2 in chapter 3 of [12], it follows that the singular point (b, 0) inside the invariant algebraic curve x 2 +y 2 = r 2 1 , and the singular point (a, 0) inside the invariant algebraic curve x 2 +y 2 = r 2 3 , are cusps. Using the invariant algebraic curves together with the symmetry (x, y, t) → (x, −y, −t) and the first integral (6), we obtain that the phase portrait of the system is topologically equivalent to the phase portrait of Fig. 5(e). Case (viii): Again in this case using similar arguments to previous cases, we conclude that there are five singular points (a, 0), (b, 0), (c, 0), (d, 0), and (e, 0), where −r 3 < a < −r 2 < −r 1 < b < 0 < r 1 < c < d < r 2 < r 3 < e. The singular points (b, 0) and (d, 0) are centers, (e, 0) and (c, 0) are hyperbolic saddles and (a, 0) is a cusp. In this case the phase portrait is topologically equivalent to the one of Fig. 5f. Case (ix): The phase portrait of system (2) in this case is topologically equivalent to the phase portrait that it is shown in Fig. 5g. Similar to the case (viii), we have five singular points (a, 0), (b, 0), (c, 0), (d, 0), and (e, 0), where −r 3 < a < b < −r 2 < −r 1 < c < 0 < r 1 < d < r 2 < r 3 Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
5,968.2
2019-02-05T00:00:00.000
[ "Mathematics" ]
miRNA expression patterns in blood leukocytes and milk somatic cells of goats infected with small ruminant lentivirus (SRLV)

The study aims to determine the expression of selected miRNAs in milk somatic cells (MSC) and blood leukocytes (BL) of SRLV-seronegative (SRLV-SN) and SRLV-seropositive (SRLV-SP) goats. A functional in silico analysis of their target genes was also conducted. MiR-93-5p and miR-30e-5p were expressed only in BL, while miR-141 was expressed only in MSC, regardless of SRLV infection. In the SRLV-SP goats, higher miR-214-3p and miR-221-5p levels were found in the MSC than in the BL. Only miR-30e-5p was influenced by the lactation stage in BL in both groups, while only miR-93-5p was altered in BL of SRLV-SN goats. The target gene protein products exhibited contradictory functions, protecting the host from the virus on the one hand and assisting viruses in their life cycle on the other. The differential expression of the miRNAs observed between the MSC and BL of SRLV-SP goats may suggest that the local immune response to the infection in the udder differs from the systemic response, and acts independently. Some miRNAs demonstrated different expression between lactation stages; this may be influenced by the metabolic burden occurring in early lactation and at its peak. Some of the studied miRNAs may influence viral infection by regulating the expression of their target genes.

SRLV itself is not regarded as an immunodeficiency virus 2,3. SRLV is capable of spreading imperceptibly through herds because it has a long incubation period, and clinical symptoms occur a long time after infection 4. Thus, the control of transmission is very difficult 5,6. CAE is caused by a systemic infection, with the main targets of the virus being monocytes, macrophages and dendritic cells, but not lymphocytes. Infection affects the mammary gland and respiratory and musculoskeletal systems in adult goats and the central nervous system in kids, though clinical symptoms are extremely rare in offspring 1,7. Our previous findings suggest that SRLV can evade the immune system, or that it activates the immunity only to a slight extent. Differences have been found between SRLV-seronegative (SRLV-SN) and SRLV-seropositive (SRLV-SP) goats regarding the concentrations of some cytokines and acute phase proteins (APPs), with SRLV-SP goats demonstrating lower concentrations of interleukin 1α (IL-1α) and interleukin 1β (IL-1β) and a higher concentration of serum amyloid A (SAA) in blood serum. In addition, higher concentrations of Il-1α, interleukin 6 (Il-6), and interferon β (IFN-β), and lower concentrations of SAA and ceruloplasmin (Cp), have been observed in milk 8,9. Elevated concentrations of SAA and haptoglobin (Hp) in the blood serum of goats with clinical CAE compared to healthy or asymptomatically infected goats were also found 10. It should be stressed that SAA may foster virus multiplication. Jarczak et al. 8 and Reczyńska et al. 9 reported that the studied genes demonstrated a slightly different pattern of expression at the transcription level. Some differences at the mRNA level were found in the expression of the Il-1α, Il-1β, Il-6, interferon γ (IFN-γ), tumour necrosis factor α (TNF-α), SAA, and Hp genes in blood leukocytes (BL) and of the IFN-β, IFN-γ, TNF-α, and Hp genes in milk somatic cells (MSC).
These differences between mRNA and protein expression indicate that some differential post-transcriptional regulation exists between SRLV-SN vs. SRLV-SP goats. However, it is not clear if they are connected with the effect of the virus but for sure they concern post-transcriptional or post-translational modifications in the MSC and BL. Epigenetic modification studies have recently advanced to a new level of development. Epigenetic mechanisms such as DNA demethylation, RNA methylation, cytidine acetylation in mRNA, and chromatin modifications may soon become the primary focus of researchers. However, microRNAs (miRNAs) activity is still being investigated as a key regulator of target gene expression. The targeted analysis must focus on a specific pathway and explain the details of the selected process. There is still a lack of knowledge about the molecular processes involved in SRLV infection, including the basis of the pathophysiological processes. MiRNAs are short sections of non-coding, single-stranded RNA around 21 to 23 nucleotides in length 11 . It is estimated that between 30% 12 and 60% 13 of human genes may undergo miRNA regulation, and that approximately 3% of genes encode miRNAs 12 . One miRNA type can be complementary to one or more genes, and one mRNA strand may have binding sites for several miRNAs. MiRNA paired to the 3′ untranslated region of the target gene mRNA can induce mRNA deadenylation, degrade the mRNA, or inhibit the translation process without mRNA degradation. In humans, miRNAs very rarely degrade or cleave mRNA; they are more likely to inhibit the translation process [12][13][14] . MiRNAs are known to influence a range of biological processes, such as cell proliferation, differentiation, growth, development, and apoptosis 13,15 . In mammals, many non-coding RNAs participate in the immune response to viral infections and have been found to control virus multiplication. The effect of viral infection of the human respiratory system have been examined 16 , while lungs are one of the main target organs of SRLV in sheep and goats 1,3 . For example, in an in vitro study on A549 human lung epithelial cells (ATCC) and human epithelial type 2 (HEp-2), miR-24 was found to act against human respiratory syncytial virus (RSV) and influenza virus (IAV) 16 . Moreover, IAV infection was found to be associated with increased miR-29 expression, which is known to activate the host antiviral response by modulating the pro-inflammatory signalling network 17 , but with reduced miR-30 expression, which also influences cell apoptosis and proliferation. MiR-30 also regulates the host cellular response to human metapneumovirus infection 13 . In turn, RSV silences miR-221 expression, thus inhibiting the apoptosis of infected cells and increasing viral replication and infectivity 18 . Increased expression of miR-214 is observed in bronchoalveolar stem cells during severe acute respiratory syndrome-associated coronavirus (SARS-CoV) infection; this is believed to stimulate the production of E1A protein and inhibit virus replication 13,19,20 . MiR-214 is also thought to help the SARS virus particles evade removal by the immune system by inhibiting its replication until the virus transmission is successful 19 . In addition, miR-24 and miR-93 have been found to inhibit the replication of vesicular stomatitis virus (VSV) in mice by targeting viral genes 21 . Elevated levels of miR-24 and miR-30a-5p were found in the blood serum of patients with rheumatoid arthritis (RA) 22 . 
It should be stressed that, although the diseases have different aetiologies, CAE is considered the animal model of this disorder 23 . Moreover, miR-214, which plays contradictory roles in SARS-CoV infection in humans, also downregulates the expression of the lactoferrin gene in human mammary epithelial cells 24 . Elevated miR-141 expression has been observed during enterovirus infection of rhabdomyosarcoma cells 25 . Summing up, elevated miRNA expression does not always work in favour of the host immune system: it can also facilitate the replication or spread of pathogens in the host organism. In the studies of caprine and ovine biology, the subject of research in relation to several forms of RNA e.g.: miRNAs, circle RNAs and lncRNAs (long noncoding RNAs) are mainly those about animals pregnancy [26][27][28][29] , functioning of the tissues and organs [30][31][32][33] , mammary gland health state 34,35 , or the function of the milk extracellular vesicles 36 . However, currently, only very limited information exists on the participation of miRNAs in SRLV infection. In the most recent study 37 , the analysis of miRNA expression in the lung of sheep infected with VM virus was presented. It revealed several miRNAs that were differentially expressed between VM seropositive groups and uninfected groups. Results describing the expression of miRNAs in the mammary gland and blood of goats with SRLV infection are still not available. Therefore, the present study is restricted to selected miRNAs known to take part in the host response to other viral infections in mammals and for which the sequences for goats were known. The findings are supplemented by an in silico analysis to identify their target genes, also currently known to be involved in the response to viral infections. Furthermore, following on from our previous studies, the present work contrasts the systemic immune response to SRLV infection and the response demonstrated in the udder based on miRNA expression profile. The aim of this study was to determine the expression of seven miRNAs (chi-mir-214-3p, chi-mir-221-5p, chimir-24-5p, chi-mir-29b-3p, chi-mir-93-5p, chi-mir-141-3p, chi-mir-30e-5p) in the BL and MSC of goats infected with SRLV. We also performed an in silico analysis of their probable influence on the genes of the immune system. As no information regarding the miRNA target genes is currently available for goats or other related species in TarBase 38 , the present analysis is based on information taken from human databases. Animals. The study was conducted on the same animal material as described by Reczyńska et al. 9 . It included 12 Polish White Improved (PWI) and 12 Polish Fawn Improved (PFI) dairy goats. The animals were kept in a loose barn under constant veterinary supervision. Moreover, the animals were examined by two specialists who 39 . Their basic diet consisted of maize silage, wilted grass silage, and concentrates. Goats in this herd have been routinely serologically tested for SRLV for more than twenty years (each December and June) using commercial ELISA (enzyme-linked immunosorbent assay) 40 , as part of the SRLV eradication program initiated in 1997 following the discovery of seropositive goats. The presence of the virus in the herd was confirmed by its isolation 41,42 . 
All kids were isolated from their mothers, irrespective of the maternal serological status, and fed colostrum followed by cow milk or milk replacement (Sprayfo Primo Goat Kid, Trouw Nutrition, Grodzisk Mazowiecki, Poland) depending on the year of rearing. Confirmation of infection was based on at least two consecutive positive serological tests (ID Screen MVV/CAEV Indirect-screening test, IDvet, Grabels, France) conducted at six-month intervals. The tests were also performed twice a year during the study to identify new potential infections and to eliminate infected animals from the control group; however, the goats in the SRLV-SN group had registered no positive tests during their entire lives. Irrespective of their serological status, all goats enrolled in the study were asymptomatic probably because of the low virus load: RT-qPCR analysis, performed according to Brinkhof et al. 43 found the number of the virus copies to be below detection level (data not shown). The goats were divided into two groups: SRLV-SN (N = 12) and SRLV-SP (naturally infected with SRLV; N = 12). Both groups were identical in terms of breed (6 PWI and 6 PFI) and parity: all were second parity, i.e. young goats, but not primiparous, or more than second parity, i.e. animals who had finished their somatic growth. None of the studied goats were euthanized because of the study and remained in the herd for further commercial use. All goats were kept under the same environmental conditions, but the groups of SRLV-SN and SRLV-SP goats were separated to avoid possible cross-infection. All were machine milked twice a day, with the SRLV-SN goats being milked first to eliminate the risk of SRLV transfer via milking equipment. Sample collection. The milk and blood samples were collected five times during lactation: just after kidding, and on day 30 (early lactation), day 60 (peak of lactation), day 140 (mid-lactation), and day 200 (late lactation) of lactation. Just before the morning milking, a small amount of foremilk was also collected by hand in a sterile manner. To identify the bacterial pathogens, Columbia agar supplemented with 5% sheep blood and MacConkey agar (bioMérieux, France) were used. Both media were inoculated with 100 μL of milk, and the plates were incubated at 37 °C for 48 h. The bacterial species was identified using VITEK 2 equipment (bioMérieux, France). One litre of udder milk from each goat was collected in a plastic RNase-free bottle during morning machine milking and centrifuged to obtain a pellet from MSC for RNA isolation. However, before that, 20 ml of representative milk samples from the whole udder milk was collected to the tube with preservative (Microtabs, Bentley, Chaska, Minnesota, USA) to establish the somatic cell count (SCC) in milk using IBCm device (Bentley, Chaska, Minnesota, USA). The whole blood samples were collected by a veterinarian in 9-ml EDTA tubes for RNA isolation one day after milk sampling. Milk somatic cell (MSC) isolation. Just after sampling, one litre of milk was centrifuged at 1500 rpm for 30 min, and the lipid layer and skim milk were discarded. The MSC pellet was transferred to 50-mL Falcon™ Conical Centrifuge Tubes (Fisher Scientific, Warsaw, Poland) and washed with phosphate buffered saline (PBS). Next, the tubes were centrifuged at 1,100 rpm for 15 min; the procedure was repeated twice. The obtained MSC were suspended in 1 ml of TRIzol reagent (Invitrogen, USA) and stored at -80 °C for further analysis. RNA isolation from milk somatic cells and blood leukocytes. 
The MSC pellet, obtained from the centrifugation of 1 L of milk and washed twice in PBS, was homogenized in a tube with silica beads using a Fast-Prep®-24 Instrument (MP Biomedicals, USA). Total RNA from MSC and whole blood was isolated using a NucleoSpin miRNA Kit (Macherey-Nagel, Düren, Germany), enabling the isolation of small and large RNAs, according to the manufacturer's protocol. Qualitative and quantitative analyses of RNA were performed using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Wilmington, DE, USA) and a Bioanalyser 2100 (Agilent, Santa Clara, USA) according to the attached protocols. Only RNA samples with an A260/A280 ratio between 1.9 and 2.2 and RIN ≥ 7.0 were selected for further analysis. The RNA samples were stored at −80 °C.

miRNA gene expression. Seven miRNAs (chi-mir-214-3p, chi-mir-221-5p, chi-mir-24-5p, chi-mir-29b-3p, chi-mir-93-5p, chi-mir-141-3p, and chi-mir-30e-5p) were selected based on their known involvement in viral diseases; in addition, of the miRNAs given above, only these seven sequences were available in miRBase at that time 44 and could serve as templates for primers. The expression of the genes was measured according to the manufacturer's instructions using EPIK™ miRNA Select Assays (Bioline, Germany). A total of 100 ng of RNA from each sample was converted into cDNA using the EPIK™ miRNA RT kit (Bioline, Germany). The cDNA was diluted 1:10. The total reaction volume was 20 µl and contained 4 µl of cDNA, 10 µl of PCR Master Mix and 2 µl of primer. The sequences of the mature miRNAs used in the analysis are shown in Table 1. Gene annotation was then used to select the genes whose functions are limited to immunity-related and antiviral responses or viral processes 45. Following this, a functional in silico analysis of the human miRNA genes was conducted based on DIANA-Tools TarBase v8.0 38, The Database for Annotation, Visualization and Integrated Discovery (DAVID) v6.8, UniProtKB/Swiss-Prot (UniProt release 2020_03, 22 April 2020) 46 and the GeneCards database 47 (accessed May–June 2020 and April 2021). The target genes were selected on the basis of a prediction score higher than 0.80, based on information available in TarBase 38. In addition, miRNet software 48 (accessed April 2021) was used to visualize the networks between the studied miRNAs and their target genes and the networks between the miRNAs and diseases. Using the miRNet software and then the KEGG pathway module, a functional analysis was performed separately for all target genes of all miRNAs expressed in MSC and BL.

Ethical approval. The study was carried out in compliance with the ARRIVE guidelines. All applicable international, national and/or institutional guidelines for the Care and Use of Animals were followed. The experimental protocols were approved by the 3rd Local Ethics Committee for Animal Experimentation in Warsaw (permission No. 31/2013), following all relevant guidelines and regulations. All goats belonged to the Experimental Farm at the Institute of Genetics and Animal Breeding in Jastrzębiec, near Warsaw, Poland. The Experimental Farm is an integral part of the Institute and maintains a flock of 60 dairy goats of the Polish White Improved and Polish Fawn Improved breeds. The animals remained under the constant care of a veterinarian (an employee of the Institute). Furthermore, the study did not employ anesthesia or euthanasia. The owner of the herd gave written permission for the use of the goats in this study.
This article does not contain any experiments on human subjects performed by any of the co-authors. Results Comparison of miRNA transcript levels between the MSC and BL of SRLV-seropositive vs. SRLV-seronegative goats. Table S1 shows the average SCC (SD) in goat milk collected at each sampling time for the study. The SCC in all points of sampling was low and range between 313 × 10 3 (SD = 256 × 10 3 ) at the beginning of lactation till 620 × 10 3 (SD = 606 × 10 3 ) at the end of lactation. All studied milk samples were free from bacterial pathogens. While environmental bacteria were present at similar levels in the milk of both groups of animals included in that study, they did not influence miRNA gene expression (data not shown). Low SCC and lack of environmental bacterial influence on miRNA expression both imply that animals had no subclinical mastitis. The first part of the present study compared the differences in miRNA levels between the BL and MSC of both SRLV-SN or SRLV-SP goats in separate analyses. In both groups of material, as well as in both groups of goats, four of the seven analyzed miRNAs were found: miR-214-3p, miR221-5p, miR-24-5p, and miR-29b-5p. However, in both the SRLV-SN and SRLV-SP goats, miR-141-3p transcripts were found only in MSC, while miR-93-5p and miR-30e-5p were found only in BL; these miRNAs were found at similar levels in both groups of animals, (Fig. 1). In the SRLV-SN goats, four miRNAs did not demonstrate any differences in expression between the MSC and BL; however, in the SRLV-SP goats, miR-214-3p (p ≤ 0.05) and miR-221-5p (0.05 < p < 0.1) were lower in the BL than in the MSC. Comparison of miRNA transcript levels in MSC or BL between SRLV-seropositive and SRLV-seronegative goats. The second part of the study compared the miRNA expression patterns between SRLV-SP and SRLV-SN goats in their BL or MSC, separately. No differences in the expressions of miR-214-3p, miR-221-5p, miR-24-5p, or miR-29b-3p were found between SRLV-SP and SRLV-SN goats in either MSC or BL (Fig. 2). However, as was previously mentioned, in both the SRLV-SN and SRLV-SP goats, miR-141-3p expression was noted in the MSC and not in the BL; conversely, miR-93-5p and miR-30e-5p expression was observed only in the BL, and not the MSC (Fig. 1). Nevertheless, the expression of each miRNA, in both MSC and BL, was similar between both SRLV-SN and SRLV-SP. Table S2 (supplementary files). Human databases were selected due to the paucity of information available in the goat and bovine databases. All biological processes involving the studied miRNAs, or rather the protein products of their target genes (Table S2), are listed and described in Table S3. MiR-24-5p, and miR-141-3p are not included due to the lack of information available on their target genes in TarBase v.8 38 . The full lists of the target genes for miR-29b-3p, miR-30e-5p, miR-93-5p, mir-214-3p, and mir-221-5p are given in Tables S4, S5, S6, S7, and S8 (supplementary files) respectively. Moreover, the relationship between these miRNAs and their target genes identified using miRNet software are visualised in Fig. S1 for MSC, and in Figs. S2 and S3 for BL. Three miRNAs, namely miR-29b-3p, miR-141-3p and mir-214-3p, expressed in MSC, have two common genes: hepatoma-derived growth factor (HDGF) and phosphatidylinositol 3,4,5-trisphosphate 3-phosphatase and dual-specificity protein phosphatase PTEN (PTEN). In contrast, miR-221-5p has only several common genes with miR-141-3p, and mir-214-3p, but not with miR-29b-3p (Fig. S1). 
In BL, nuclear receptor coactivator 3 (NCOA3) was found as a common gene for four miRNAs: miR-29b-3p, miR-93-5p, miR-214-3p, and miR-30e-5p (Figs. S2 and S3).

Functional analysis of studied miRNAs. A functional analysis of the studied miRNAs and their target genes involved in viral infections, based on human databases, is shown in Table S2. Despite the lack of information on the target genes for miR-141-3p in TarBase 38, miRNet identified 145 targets, including insulin-like growth factor 1 receptor (IGFR1), stearoyl-CoA desaturase 5 (SCD5), peroxisome proliferator-activated receptor alpha (PPARA), two isoforms of CDC25 genes, viz. M-phase inducer phosphatase 1 (CDC25A) and M-phase inducer phosphatase 3 (CDC25C), as well as genes from the STAT or MAPK families (Fig. S4). According to miRNet, this miRNA is involved in hepatitis B virus (HBV) infection (Fig. S5). The Gene Ontology (GO) details regarding the Biological Processes, Cellular Components and Molecular Functions of the target genes for all the above-mentioned miRNAs are shown in Tables S9, S10, S11, S12, S13, and S14a–c, while the results of the KEGG pathway analysis for the target genes of all miRNAs expressed in MSC and BL are gathered in Tables S15 and S16, respectively. While no GO category was directly associated with any processes concerning SRLV infection, several other pathways were identified, e.g. the HTLV-I infection, Influenza A and Hepatitis C pathways in both types of biological material, as well as the Herpes simplex infection pathway in MSC and the Epstein-Barr virus infection pathway in BL.

miRNA expression pattern during lactation in the MSC or BL of SRLV-seropositive or SRLV-seronegative goats. The third part of the study examined the expression of five miRNAs in the MSC of SRLV-SN and SRLV-SP goats (Fig. 3) over the course of lactation. No differences were found between the stages of lactation. The expression profiles of the six miRNAs identified in the BL of SRLV-SN and SRLV-SP goats during lactation are shown in Fig. 4. The expression of miR-93-5p (p ≤ 0.05) and miR-30e-5p (p ≤ 0.01) in the BL of SRLV-SN goats and miR-30e-5p (p ≤ 0.05) in the BL of SRLV-SP goats differed between some stages of lactation. In the SRLV-SN goats, miR-93-5p expression was higher on day 1 of lactation, i.e. immediately after kidding, and on day 60 at the peak of lactation, compared to day 30, i.e. early in lactation; however, no expression was observed on day 200 (late lactation). In contrast, miR-30e-5p expression peaked on day 1 of lactation compared to day 60 in both groups of animals; however, no transcripts were found during mid-lactation or late lactation. A functional analysis of miR-93-5p and miR-30e-5p is presented in Table S2.

A comparison of miRNA expression with functional analysis between MSC and BL for SRLV-SP or SRLV-SN goats. The differences in immune-related gene expression patterns observed previously between mRNA and protein levels 8,9 are most likely explained by virus-related post-transcriptional regulation. Gene expression is believed to be regulated by miRNA through degradation or translational silencing of mRNA 14. Although studies on goat miRNAs are still limited, a number of miRNAs have been identified in other animals that are also responsible for regulating the expression of genes involved in viral infections.
For example, a number of miRNAs that are differentially regulated between SRLV-SN and -SP sheep, such as oar-miR-21, oar-miR-148a and oar-let-7f as well as has-miR-28a/b, hsa-miR-146a, hsa-miR148a, hsa-miR-155, and hsa-let-7c, may have potential implications for the host-virus interaction 37 . Unfortunately, this information was published too late to influence the design of the present study, and the goat homologs of those miRNAs were not included, except for chi-miR-29b-3p which was also investigated in cited research. In the present study, miR-214-3p and miR-221-5p were more abundant in the MSC of the SRLV-SP goats than in the BL. In humans, miR-214-3p appears to target the signal transducer and activator of transcription 3 (STAT3), La-related protein 1 (LARP1), and protein cornichon homolog 1 (CN1H1) genes while miR-221-5 targets heterogeneous nuclear ribonucleoprotein A1 (HNRNPA1). These genes may play a role in hepatitis C virus (HCV), human herpesvirus 8 (HHV-8), VACV, influenza A virus (H1HN1), Dengue (DENV) or Zika virus infections (Table S2). The STAT3 gene expression inhibition due to the elevated level of miRNA-214-3p found in the MSC of SRLV-SP goats might inhibit the infection of further cells by the virus, since the STAT3 exhibits proviral activity (i.a. HBV, herpes simplex virus 1 (HSV-1), varicella zoster virus, measles virus, and cytomegalovirus (CMV)); however, it also demonstrates antiviral activity against enterovirus 71, SARS-CoV and human metapneumovirus infection 48 . Nevertheless, this protein is involved in a range of biological processes including viral processes (GO:0016032) (Table S3). This may mean that STAT3 assists the entry of the virus into the cell, its transport to replication sites or its exit from the cell through specific interactions with the virus. STAT3 is also activated by the Nef protein of HIV 48 as well as in response to various cytokines including INFs and Il-6 (Table S2). Interestingly, elevated concentrations of INF-β and Il-6 have previously been observed in the MSC of SRLV-SP compared to SRLV-SN goats 8 . This may mean that although the expression of cytokines which activate STAT3 may be elevated, the elevated expression of miR-214-3p may inhibit STAT3 gene expression at the transcription stage. However, pairing a miRNA with the mRNA of the target gene could cause either its degradation or elevate its copy number 14 . Therefore, to identify the processes occurring in MSC infected with SRLV, further studies examining the expression of miRNA-214-3p, STAT3 and a broad panel of cytokines are needed. LARP1 is the next target gene of miR-214-3p. The protein supports the replication of DENV and increases VACV infection in parallel with CN1H1 and HNRNPA1, which also increases VCAV infection and may play a role in HCV RNA replication; it is also believed to be involved in viral processes (GO:0016032) (Tables S2 and S3). Therefore, their inhibition by miRNA may protect host cells from infection. To summarise, elevated expression of miR-214-3p and miR-221-5p in the MSC may inhibit the activity of the proteins that directly support invasion by viruses, and it has been suggested that in the udder, it is the immune cells that fight the virus 8,9 . Moreover, Reczyńska et al. 
9 propose that the presence of a higher SAA level in the blood serum of SRLV-SP goats may support viral replication, as SAA demonstrates chemotactic activity toward leukocytes, potentially inhibits the production of antibodies by B lymphocytes and inflammatory reactions, and stimulates the differentiation of monocytes to macrophages, which is essential for viral multiplication. Similarly, www.nature.com/scientificreports/ Jarczak et al. 8 report decreased expression of IL-1α and IL-1β proteins in the blood serum of infected goats, suggesting that the virus has the ability to impair their immune system and prevent it from fighting the disease. Hence, it is difficult to conclude clearly whether overexpression of these two miRNAs in MSC supports or inhibits virus infection. On the one hand, they inhibit certain target genes whose product demonstrates proviral activity, while on the other, they inhibit genes whose products protect the host cells from virus infection. However, our present results are consistent with previous findings that the local immune response of the mammary gland differs from the systemic immune response 8 . Unfortunately, our in silico analysis did not reveal any APP or cytokine as a target gene of the studied miRNAs; however, STAT3, a protein produced by one of the target genes, is activated by a range of cytokines, including the IL-6 cytokine family. Moreover, the protein product of the elongation factor 1-alpha 1 (EEF1A1), a target gene of miR-30e-5p, directly regulates transcription of a number of genes: IFN-gene and sequestosome 1 (SQSTM1), ATP-dependent DNA/RNA helicase DHX36 (DHX36) and phosphatidylinositol 3-kinase regulatory subunit alpha (PIK3R1). These genes are targets of miR-93-5p, miR-303-5p, and miR-29b-3p, respectively, and are involved in different cytokine signalling pathways. Therefore, it would be beneficial for future studies to expand the number of analysed miRNAs to include those found to be associated with cytokine or APP expression in other diseases. Recent studies, for example, found miR-146a and miR-155 to regulate the major protein cascades of cytokine signalling pathways and inflammation in human RA: CAE serves as a model for RA 49 . Unfortunately, the experimental part of our study had already been finished when this information was published. The fact that miR-141-3p was only expressed in MSC and miR-93-5p and miR-30e-5p in BL indicates that their expression in goats is tissue specific. MiR-93-5p, observed only in BL, and miR-24, observed in both BL and MSC, are involved in the direct inhibition of VSV replication by attaching to the regulatory parts of two genes associated with the virus 21 . However, miR-30e-5p influences the activity of a number of human viruses, including HIV-1, HCV, IAV and HHV-1, through its target genes (Table S1). It was found that miR-30e intensifies the innate immune response in cells and reduces HBV load 50 . However, it is unlikely that these miRNAs play a similar role in SRLV because no differences in expression could be seen between the SRLV-SN and SRLV-SP groups. The in silico analysis (Table S2) identified five target genes involved in the immune defence, for miR-93-5p. Of these, mitogen-activated protein kinase kinase kinase 5 (MAP3K5) and chromobox protein homolog 5 (CBX5) are involved in viral processes (GO:0016032). 
In addition, the MAP3K5/ASK1 complex supports the host defence against various pathogens, while CBX5 interacts with Human polyomavirus 2 agnoprotein, and hence probably with the agnoprotein of Simian virus 40 (SV40); these processes disrupt CBX5 and lamin-B receptor association, resulting in destabilisation of the host cell nuclear envelope. SQSTM1 interacts with vif, a protein of HIV; serine/ threonine-protein kinase N2 (PKN2) stimulates HCV replication by interacting with HCV NS5B protein, and www.nature.com/scientificreports/ as such is involved in viral RNA genome replication (GO:0039694). Rho-associated protein kinase 2 (ROCK2) inhibits HCV infection while increasing VAVC infection. Therefore, further studies are needed to explain the role of miR-95-5p in SRLV infection. An increased level of miR-141 has been noted in enterovirus infection; it is believed to accelerate viral translation and facilitate the spread of the virus in the body 25 . Moreover, the miRNet in silico analysis found this miRNA to be involved in HBV infection (Fig. S5). In the present study, similar expression levels of miR-141-3p were observed in the MSC of both SRLV-SN and SRLV-SP goats with no expression in BL, suggesting that this miRNA does not participate in the process of SRLV infection. TarBase in silico analysis did not identify any target gene for miR-141-3p; however, almost 150 target genes were identified using miRNet (Fig. S4), including IGFR1, SCD5, PPARA , isoforms A and C of CDC25, as well as signal transducer and activator of transcription 4 and 5 (STAT4, STAT5), mitogen-activated protein kinase 9 (MAPK9), and MAPK14. According to the GenomeRNAi database 51 , the protein products of IGFR1, SCD5, PPARA and STAT4 increased VACV infection, while those of SCD5 decreased Sindbis virus (SINV) infection and those of PPARA influenza A virus infections. The roles of CDC25 and MAPK14 also appear ambiguous with regard to VACV infection; however, a study of the UniProt database 46 found them to be involved in viral processes (GO:0016032; Table S2) or cellular response to virus (GO:0098586; Table S2), respectively. In turn, STAT5A was found to resist VACV-A4L infection and decrease its replication 51 . Interestingly, MAPK9 appears to support VACV infection but resist VACV-A4L infection; moreover, it was found to decrease Simian viru s 40 (SV40) infection while increasing Human cytomegalovirus (HCMV) strain AD169 replication. As miRNAs may regulate more than 60% of human genes, the expression of the target genes identified by the present in silico analysis, requires further study. The target genes of the overexpressed miRNAs identified herein show ambiguous activity toward viral infections, but it is possible that not all of those target genes are triggered by overexpressed miRNAs. In addition, since miRNA-mRNA pairing influences mRNA translation, as well as the stability of the miRNA 14 , further extensive in vivo studies are needed to fully understand the dependencies between mRNAs and miRNAs. A comparison of miRNA expression with functional analysis over the lactation course for MSC or BL and SRLV-SP or SRLV-SN goats. MiR-30e-5p was the only miRNA to be influenced by the stage of lactation in the BL of both SRLV-SN and SRLV-SP goats. Its presence regulates the expression of genes involved in the biology of a number of human viruses, including HIV-1, HCV, influenza viruses and HHV-1 (Table S2). 
Most of the protein products of these target genes, such as DHX36, DNA damage-inducible transcript 4 protein (DDIT4), Msx2-interacting protein (SPEN), EEF1A1, or ubinuclein-1 (UBN1) act against viruses, being involved in the defense response to virus biological processes (GO:0051607). However, while SNARE-associated protein Snapin (SNAPIN), nuclear ubiquitous casein and cyclin-dependent kinase substrate 1 (NUCKS1) and cyclin T2 (CCNT2) positively regulate the processes essential for the viral life cycle, ROCK2, Ras-related protein Rab-7a (RAB7A), E3 SUMO-protein ligase RanBP2 (RANBP2), or SPEN both inhibit and promote viral spread. Zhu et al. 53 report the presence of an elevated level of miR-30e* during DENV infection and conclude that this miRNA inhibits virus replication by promoting IFN-β expression; however, in the present study, its expression varied with the stage of lactation but not the presence of SRLV infection. Therefore, only changes in its expression observed in the BL may be associated with the metabolic burden of the body and any disturbances in homeostasis at critical points during lactation (the perinatal period, the lactation peak); these factors do not appear to be associated with any changes in the MSCs. This is supported by the fact that the protein products of some target genes of miR-30e-5p are associated with processes essential for ensuring the fat content in milk, such as the de novo biosynthesis of long-chain saturated fatty acids (fatty acid synthase, FASN), transport of longchain fatty acids (FABP3), or the synthesis of triglycerides (diacylglycerol O-acyltransferase 2. DGAT2) 47 . The lack of expression of this miRNA in MSC may suggest that its target genes in milk are not inhibited throughout the whole lactation period. The results obtained in our study are not entirely consistent with those presented by Chen et al. 54 , who note that miR-30e-5p expression peaks during the early lactation and subsequently decreases in Saanen goats' mammary epithelial cells (GMEC); in contrast, in the present study, miR-30e-5p was not found in MSC, which also includes living exfoliated epithelial cells 55 . Chen et al. 54 identified miR-30e-5p transcripts in many dairy goat organs (heart, liver, spleen, lungs, kidneys, muscles, stomach tissues, GMEC) with the highest expression in GMEC; miR-30e-5p expression was observed in BL in the present study. In humans, miR-93-5p regulates the expression of genes involved in processes related to the innate immune response against various pathogens, including several viruses (Table S2); the activities of all known target genes of miR-93-5p are discussed above, as the miRNAs are expressed in BL but not in MSC. In the present study, in the BL of SRLV-SN goats, its expression was lowest during early lactation and absent during late lactation, but highest during the perinatal period and at the lactation peak. Contrary to our findings, a higher level of miR-93 transcripts was previously found during the late lactation stage in GMECs, with a lower level observed at lactation peak 56 . Therefore, further research is needed on the role and functions of this miRNA in both SRLV-SN and SRLV-SP goats at different stages of lactation. www.nature.com/scientificreports/ The first steps of our study were conducted by Jarczak et al. 8 and Reczyńska et al. 9 on cytokine, APP and cathelicidin genes expression at the mRNA and protein levels; however, our knowledge of the target genes is still fragmentary. 
Nevertheless, Il-6 was found to be indirectly influenced by STAT3, which is the protein product of one of the target genes of miR-214-3p, while EEF1A1, a target of miR-30e-5p, directly regulates the transcription of the IFN-γ gene. Moreover, SQSTM1, DHX36 and PIK3R1 are involved in different cytokine signalling pathways. The next step of our study will be to analyse the pairs miRNA-target genes identified in the present in silico analysis. Conclusions MSC and BL demonstrated slight differences in miRNA expression, regardless of SRLV infection. Our present findings lend support to our earlier hypothesis that the immune response in the udder is local and acts independently of the systemic immune response. We found that some miRNAs are not only involved in regulating goat immune gene expression in lentivirus infections, but also influence lactation processes, regardless of the health status of the host, and their expression does appear to be influenced by the metabolic burdens at the early lactation stage or at the lactation peak. Furthermore, some of the studied miRNAs may influence virus replication in the host cells by regulating the expression of their target genes. However, the identified target genes are capable of performing ambiguous roles, i.e. both protecting the host against virus infection and facilitating virus replication and host cell infection; furthermore, while some of them facilitate only one course of action, others facilitate both.
8,596
2022-08-02T00:00:00.000
[ "Medicine", "Biology" ]
InfiniCortex - From Proof-of-concept to Production

The global effort to build ever more powerful supercomputers is faced with the challenge of ramping up High Performance Computing systems to ExaScale capabilities and, at the same time, keeping the electrical power consumption for a system of that scale below the 20 MW level. One possible solution, bypassing this local energy limit, is to use distributed supercomputers to alleviate intense power requirements at any single location. The other critical challenge faced by the global computer industry and international scientific collaborations is the requirement of streaming colossal amounts of time-critical data. Examples abound: i) transfer of astrophysical data collected by the Square Kilometre Array to the international partners, ii) streaming of experimental data from large facilities through the Pacific Research Platform collaboration of DoE, ESnet and other partners in the US and elsewhere, iii) the Superfacilities vision expressed by DoE, iv) the new architecture for the CERN LHC data processing pipeline focusing on more powerful processing facilities connected by higher-throughput connectivity. The InfiniCortex project led by the A*STAR Computational Resource Centre demonstrates a worldwide InfiniBand fabric circumnavigating the globe and bringing together, as one concurrent globally distributed HPC system, several supercomputing facilities spanning four continents (Asia, Australia, Europe and North America). Using global-scale InfiniBand connections, with bandwidth utilisation approaching 98% of link capacity, we have established a new architectural approach which might lead to next-generation supercomputing systems capable of solving the most complex problems through the aggregation and parallelisation of many globally distributed supercomputers into a single hive-mind of enormous scale.

Introduction

This article is the final report of the InfiniCortex I project led by the A*STAR Computational Resource Centre in Singapore. We document an implementation of a worldwide InfiniBand fabric bringing together several supercomputing facilities spanning the globe to create a galaxy of supercomputers [12]. The InfiniCortex I project represents a huge collaborative effort of several agencies and universities in Singapore (A*STAR, NSCC, NUS, NTU, SingAREN) together with more than 20 international partners around the globe. After successfully demonstrating the first 100 Gbps transcontinental InfiniBand connection between Singapore and the United States of America at the annual Supercomputing Conference 2014 in New Orleans, USA [10], the award-winning InfiniCortex project [1] grew rapidly, demonstrating novel capabilities every year.
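Sustaining utilisation near 98% of link capacity on such intercontinental links is largely a buffering problem: the range-extension devices (such as the Obsidian Longbow equipment used in this project) must keep roughly a bandwidth-delay product of data in flight. The sketch below is a rough back-of-the-envelope calculation, not a measurement from the project; the 100 Gbps rate comes from the link described above, while the ~180 ms round-trip time is an assumed, illustrative value for a Singapore-US path.

```python
# Back-of-the-envelope bandwidth-delay product (BDP) for a long-range link.
# Assumption: ~180 ms RTT for an illustrative Singapore-US path (not a measured value).

def bandwidth_delay_product(bandwidth_gbps: float, rtt_ms: float) -> float:
    """Return the amount of data in flight (in gigabytes) needed to keep the pipe full."""
    bits_in_flight = bandwidth_gbps * 1e9 * (rtt_ms / 1e3)   # bits = rate * RTT
    return bits_in_flight / 8 / 1e9                           # convert bits to gigabytes

if __name__ == "__main__":
    for gbps in (10, 100):
        gb = bandwidth_delay_product(gbps, rtt_ms=180.0)
        print(f"{gbps:>3} Gbps x 180 ms RTT -> ~{gb:.2f} GB of data in flight")
    # Roughly 0.23 GB at 10 Gbps and 2.25 GB at 100 Gbps: the buffering that a
    # range extender (or, for Ethernet paths, the TCP window) must cover so the
    # link does not sit idle while waiting for acknowledgements.
```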
Over the last few years several unprecedented elements have been showcased:

• the largest-ever spanning InfiniBand network, a ring around the world with most of the segments at 100 Gbps and a few at 10-30 Gbps;
• eight InfiniBand subnets created using InfiniBand routers, with InfiniBand routing demonstrated using BGFC (Bowman Global Fabric Controllers [5]);
• InfiniCloud: the first ever true high-throughput, global-span HPC cloud allowing instance provisioning across four continents: Asia, Australia, North America and Europe [79].

During the last three years, InfiniCortex and numerous applications utilising this concept and infrastructure have been successfully demonstrated at several international events: Supercomputing Frontiers 2015 and 2016 in Singapore; ISC15 and ISC16 in Frankfurt, Germany; TNC15 in Porto, Portugal and TNC16 in Prague, Czech Republic; and finally at SC14 (New Orleans) and SC15 (Austin), USA. Hence we have furnished sufficient proof-of-concept demonstrations exhibiting the effectiveness of the proposed solutions. Several elements are already being implemented in Singapore and elsewhere as production solutions enabling higher bandwidth and security.

In the next section we will describe in some detail the third stage of our InfiniCortex I project, which took place during the Supercomputing 2016 conference in Salt Lake City, USA. In Sub-Section 1.1 we will provide a list of all our collaborators in this project, followed in Sub-Section 1.2 by a description of the global-scale network infrastructure, and, in Section 1.3, details of a number of application demonstrations prepared with our partners from the Oak Ridge National Laboratory, Fermilab, Stony Brook University, George Washington University, USA; University of Reims Champagne-Ardenne, France; Poznań Supercomputing and Networking Centre and the Interdisciplinary Centre for Mathematical and Computational Modelling, University of Warsaw, Poland. In Section 2 we outline our plans for the InfiniCortex 2 phase of our project, and finally Section 3 contains the conclusions of this report.

InfiniCortex Demonstrations at SuperComputing 2016

The International Conference for High Performance Computing, Networking, Storage and Analysis (SC16), the 28th annual international conference of high performance computing, networking, storage and analysis, celebrated the contributions of researchers and scientists from those just starting their careers to those whose contributions have made lasting impacts. The conference drew more than 11,100 registered attendees and featured a technical program spanning six days. The exhibit hall featured 349 exhibitors from industry, academia and research organizations from around the world. During the conference, Salt Lake City also became the hub for the world's fastest computer network: SCinet, SC16's custom-built network, which delivered 3.15 terabits per second in bandwidth. The network featured 56 miles of fiber deployed throughout the convention center and 32 million dollars in loaned equipment. InfiniCortex was built on top of the SCinet network with support from the SCinet team and in collaboration with various networking organizations.

Partners

The following partners were involved in the InfiniCortex demonstrations at SC16:

Figure 2. Global RDMA test between Singapore and Salt Lake City using Longbow E100.

• An aggregated total bandwidth of 75-80 Gbps was utilised, with link sharing with Tokyo-Tech University.
• The whole SC16 exhibit utilised just over 800 Gbps of bandwidth.
• A dsync+ test was attempted to transfer a large dataset from storage at A*CRC to SC16. The initial transfer was 6 MB/s due to heavy packet loss on the link, despite no recorded packet loss during the bandwidth stress tests. There was no time left to diagnose the issue.

100G Ethernet

Andrew Howard from NSCC, Canberra, Australia conducted the following additional tests:
• An iperf3 network test showed 16-23 Gbps per UDP stream and 17.2 Gbps for TCP.
• Lim Seng from A*CRC did further Ethernet bandwidth tests and was able to add additional bandwidth to the network.

Demonstrations

During SC16, A*CRC and several partners demonstrated five applications that used the long-range InfiniBand connectivity. The following sections will briefly describe the details and achievements of each application. The demonstration with ORNL and SBU, called Remote Fusion Experiment Data Analysis Through Wide-Area Network, consisted of remote data processing for a large, high-throughput science experiment over cross-Pacific wide area networks and showed how one can manage science workflow executions remotely by using the ORNL ADIOS data management system and the FNAL mdtmFTP data transfer system. In this demonstration our partners presented a fusion data processing workflow, called Gas Puff Imaging (GPI) analysis, to detect and trace blob movements during a fusion experiment. GPI data streams were being sent from Singapore to Fermilab for near-real-time analysis, while ADIOS was managing the analysis workflows and mdtmFTP was transporting the stream data.

Accomplishments and problems encountered: We encountered several problems with this demo, especially because the Stony Brook servers were available only a few days before SC16. Most of the tests were done between Singapore and Fermilab; however, even in this scenario a lot of problems arose from the fact that this was not a dedicated L2 circuit and several firewalls were blocking the communication on each side. Ultimately the problems were solved. The ORNL team plans to show this demo once again in 2017. The demo is part of a bigger collaboration between ORNL and Japan in the ITER project.

Demonstrations with George Washington University (GWU)

A Preliminary Study of Executing Parallel Applications over a Long-Range IB Network was showcased using mpiBLAST, a freely available, open-source, parallel implementation of NCBI BLAST. mpiBLAST was run on multiple nodes of a cluster comprising 4 nodes at GWU and 3 nodes at A*CRC, with communication done via MPI/LHIB. The experiment consisted of the following steps:
• A protein reference database (524,603 protein entries, 153 MB in size) was prepared and distributed to all of the compute nodes of the cluster (in both the USA and Singapore).
• The database was fragmented into 64 smaller fragments by running the program mpiformatdb.
• A subset of the protein sequences (from the reference database) was used for the protein BLAST search (blastp). The total number of sequences used was 7516.
• The total execution time of the blastp (protein BLAST search) was measured against different numbers of compute nodes. Measurements were conducted using two cases: i) case 1: compute nodes in the USA only, and ii) case 2: compute nodes in both the USA and Singapore.

The following conclusions were drawn after analysing the results:
• Up to 32 individual tasks were running on different nodes. Figures 6 and 7 show the computational scalability with different problem sizes using different numbers of computing nodes, and also reveal the difference between localising the process on one side and distributing the process across the InfiniCortex infrastructure.
• Superlinear scalability is observed for the processes running only on the US nodes. Superlinear scalability is unusual and there is no guarantee that it persists if the cluster size is expanded. The main reason behind this superlinear scalability is unclear yet, but it may be because the overhead of the initial data distribution is smaller amongst nodes connected to the same InfiniBand switch.
• Essentially linear scalability is observed for the task distributed across InfiniCortex, which is the ideal case for a parallel computational process, so it should be considered a successful demonstration.
• Many large-scale scientific and engineering computational tasks can be divided into many small sub-tasks using a data partitioning strategy, and then computed in parallel using the MPI (Message Passing Interface) protocol to distribute tasks and data on different compute nodes (a minimal sketch of this fragment-partitioning pattern is given after this list). However, the efficiency of the rapid data exchange amongst the sub-tasks is very sensitive to network latency, and thus certain computational tasks are inherently not scalable on the InfiniCortex infrastructure.
• To hide the inevitable latency due to the distance of a large data transfer, we have successfully demonstrated a number of workflows since SC14 (i.e. pipelining different stages of a task on systems in different locations), but this is the first time we demonstrate solving a single computational task on two HPC clusters across continents using MPI. This specific application ran successfully because it is inherently embarrassingly parallel, with no data exchange required among the sub-tasks.
• Despite the success of this demonstration, we encountered several difficulties: OpenMPI was unable to build on the cluster on the US side, and the issue was eventually resolved by the A*CRC software team. It was mainly due to the version of OpenMPI being too new for the build scripts that were used. We observed a big overhead (up to 2 minutes) for the initialisation of MPI if the job is distributed across InfiniCortex. It was confirmed that this overhead does not affect the accuracy of the computation, but the cause is not clear yet.
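The reason this particular workload tolerates intercontinental latency is visible in the pattern itself: each database fragment is searched independently and only small result sets are gathered at the end. The sketch below is not mpiBLAST code (the demonstration used mpiBLAST built on OpenMPI); it is a minimal, hypothetical mpi4py illustration of the same fragment-partitioning idea, with a placeholder `search_fragment` standing in for the actual BLAST call and with the fragment and query counts taken from the description above.

```python
# Hypothetical sketch of the data-partitioning pattern behind the mpiBLAST demo:
# each rank searches its own subset of the 64 database fragments and only the
# (small) results are gathered at rank 0, so little data crosses the WAN.
from mpi4py import MPI

N_FRAGMENTS = 64  # the database was split into 64 fragments with mpiformatdb

def search_fragment(fragment_id: int, queries: list[str]) -> dict:
    # Placeholder for the per-fragment blastp search; returns dummy hit counts.
    return {"fragment": fragment_id, "hits": len(queries) % 7}

def main() -> None:
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    queries = [f"query_{i}" for i in range(7516)]    # query set size from the text
    my_fragments = range(rank, N_FRAGMENTS, size)    # round-robin fragment partition
    local_results = [search_fragment(f, queries) for f in my_fragments]

    all_results = comm.gather(local_results, root=0) # only compact results are exchanged
    if rank == 0:
        merged = [r for part in all_results for r in part]
        print(f"{size} ranks searched {len(merged)} fragments")

if __name__ == "__main__":
    main()
```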
Because the servers in the US and Singapore used different types of processors, additional time was necessary for tuning the dataset between the two clusters. Such heterogeneity made a fair comparison difficult.

rCUDA (http://www.rcuda.net) is a development of the Parallel Architectures Group from Universitat Politecnica de Valencia (Spain). rCUDA enables the concurrent remote usage of CUDA-enabled devices in a transparent way. Thus, the source code of applications does not need to be modified in order to use remote GPUs; rCUDA takes care of all the necessary details. Furthermore, the overhead introduced by using a remote GPU is very small. The ROMEO HPC Center successfully tested and ran rCUDA inside their 260-GPU cluster over the past year. The purpose of the SC16 demonstration was to test rCUDA over InfiniCortex and analyse the behaviour of the framework over extremely long distances. For the demo, the rCUDA server was installed in Singapore on a machine that did not have any GPUs attached. The rCUDA client was installed on 18 servers in Reims, each having 2 K20x GPUs. A standard matrix-matrix multiplication example from the CUDA SDK was then run on the Singapore machine, which transparently sent all CUDA calls to the servers in Reims, where the computation actually took place on the GPUs, and then collected the final result.

The rCUDA developers from Universitat Politecnica de Valencia developed a small graphical interface showing the performance of one single node compared to the performance of running the same code over several nodes. This clearly showed the performance and benefits of running the rCUDA framework. The main advantage is that all the GPUs in the cluster are exposed as a big pool of resources for each node. The demo was eventually configured to run on IPoIB. For reasons that were not determined before the beginning of the conference, the IB version of rCUDA was always freezing in an initialization stage. The presence of the Obsidian R400 router and BGFC, and different OFED stacks, were initially thought to be the problem. However, even after the router was removed from the configuration and the OFED stacks were synchronized, rCUDA was not able to work in native IB mode, although other tests like ib_pingpong were successful. The rCUDA developers suggested that further tests be run after SC16 in order to determine why their framework is not able to function properly in a native long-range IB setup.

A typical scenario of Vitrall usage is real-time visualization of complex 3D content using remote servers equipped with modern multi-GPU solutions, like NVIDIA Tesla. Successive frames are compressed as JPEG pictures and sent over the HTTP protocol to clients. They may also be displayed on an attached screen or projector. Users may watch the same content from different points of view simultaneously. Information about users' input is sent back to the Vitrall Visualization Server using the WebSocket protocol, part of the HTML5 specification.
The rendering process was distributed among two locations, Poznań and Singapore: every second frame was rendered in Singapore, and every other frame in Poznań and then accessed by the client through Singapore. On the exhibition floor at SC16 a regular web browser was used to connect to the Vitrall instances, and visitors were able to interact with the presented 3D scene, which provided a smooth animation. The web browser uses WebSocket to connect with the Vitrall server controller instance and, after a new rendering session is established, the client starts to send input messages. The controller instance interprets those messages and continuously applies changes to the authoritative state of the presented 3D scene. That state is then incrementally replicated to both rendering instances; only those need access to a GPU device. One such instance runs locally with the controller instance (in Singapore), and the other runs in Poznań. The controller instance decides which frame to render where, sends rendering requests to the rendering instances and notifies the client where the following frames will be available. The client then requests those frames using HTTP, in such a way that frames rendered in Poznań are accessed through Singapore.

Accomplishments and problems encountered: The demo was successfully run between PSNC and A*CRC without any major problems. Figure 10 shows a screenshot of the demo running at SC16. The performance was quite good, as the interactivity with the scene was almost seamless. The only problem is that all servers involved in the demo require quite powerful GPUs.

The concept of the ICM demo was to show a basic proof-of-concept solution for globally remote visualization, where the simulation (or datasets), the visualization pipeline and the user are globally dispersed and connected only by a global network like InfiniCortex. The scientific example chosen for this demo was numerical weather forecasting. The model itself (run by the Interdisciplinary Centre for Mathematical and Computational Modelling at the University of Warsaw (ICM)) is an iterative process predicting the dynamics of the atmospheric state based on initial weather conditions. In each step a significant amount of data is created: a multivariate dataset on a three-dimensional non-uniform grid over the modelled area. Observation of the evolution of a running simulation is possible by visualization of the consecutive iterations, providing insight into the simulation. For the purpose of this demo, the dynamics of cloud coverage over central Europe was chosen. From the implementation perspective the demo consists of three layers: a simulation layer (or data layer), a visualization layer and an end-user layer. The three layers were globally spatially disjoint and combined by a global interconnect. The simulation layer was physically located in Singapore (A*CRC), and the running simulation was mimicked by the incremental creation of new time-step data files in the shared filesystem (each new file represents a simulation dump of a single iteration). The filesystem, based on BeeGFS, was remotely shared via the InfiniCortex network and used by the visualization layer to access datasets. The visualization layer was physically located in Poland (ICM, Warsaw) and based on a dedicated visualization server running both the remote visualization middleware and the visualization software. VisNow (http://visnow.icm.edu.pl) was used as the visualization platform for the implementation of the whole visualization pipeline. The visualization was created to show the orography of central Europe (static baseline layer), and the semi-transparent
representation of cloud coverage was animated, looping over the available iterations. A dedicated data access module in VisNow monitored the remote filesystem for the presence of new time steps. Incremental dataset diffs were read in remotely via the shared filesystem. The remote visualization middleware was based on the NICE Desktop Cloud Visualization (DCV) platform, providing a remote desktop with dedicated data streaming for 3D OpenGL graphics and hardware compression. The end-user layer, being the interactive visualization, was physically located in the USA (SC16, Salt Lake City) and ran on a DCV thin client. On the DCV client-server path both InfiniCortex and Internet connections were tested. 
Figure 11. Remote visualization workflow
Accomplishments and problems encountered: The proposed demo was successfully configured and run on a basic numerical weather forecast dataset. As a proof-of-concept solution the demo showed a possible application of a globally connected supercomputer based on the InfiniCortex network. At the same time, novel knowledge was gathered on the technical bottlenecks of the proposed visualization ecosystem, and several concepts for improvements were defined. 
Figure 12. ICM demonstration screenshot
Some problems were encountered due to the fact that a BeeGFS parallel file system was being mounted over InfiniCortex. This configuration was not tested and fine-tuned before the demonstration, leading to access timeouts to the file system. 
InfiniCortex Phase 2 
Although over the last three years the InfiniCortex project ran as a proof-of-concept to demonstrate several unprecedented features, it is currently entering the "InfiniCortex 2" phase, in which many of the demonstrated features are starting to be integrated into production systems, creating symbiotic collaborations and developing new projects [11]. 
At the beginning of the project international connectivity was obtained through the goodwill of several international carriers. Since 2015 Singapore has had its own permanent links with the US and Europe dedicated to research. The National Supercomputing Centre of Singapore (NSCC) currently funds the enhancement of this connectivity, and through SingAREN MoUs have been signed for co-funding of permanent international links (100 Gbps towards the US with Internet2 and 10 Gbps towards Europe with TEIN*CC). These links will open up the world and benefit the entire research community in Singapore, thus creating a symbiotic relationship between several local entities. The link to the US is set to open new opportunities such as participation in the Global Research Platform, an international extension of the Pacific Research Platform [3]. Similar international connectivity is currently being negotiated with Australia and Japan. 
In the coming years, NSCC and A*CRC are set to work on implementing a nation-wide InfiniBand fabric to interconnect several academic and industrial sites in Singapore, which will provide high-throughput, low-latency direct connections to the supercomputing facilities in Fusionopolis. The first steps have already been made with the launch of the NSCC, whose main stakeholder campuses (NUS, NTU, GIS) are connected through InfiniBand directly to the main supercomputer. This infrastructure provides researchers on remote campuses unparalleled fingertip access to HPC resources. This initiative to allow wide access to the HPC resources will continue in the future under NRF funding for National Research Infrastructure. 
The Genome Institute of Singapore (GIS) has the largest sequencing facility in South-East Asia at its facilities in the Genome Building, Biopolis. GIS relies on in-house storage as well as storage in the Matrix Building, Biopolis (100 m away), and on storage and high-performance compute facilities 2 km away at the A*STAR Computational Resource Centre, Fusionopolis. Going forward, GIS will be processing up to thousands of exomes on a regular basis, so processing capacity needs to be ramped up to cope with the demand of several terabytes of data per day emerging from its sequencing labs. This means that in-house generated sequence data must be safely stored for data regulation compliance reasons, as well as transferred to and stored at a remote location pending computational processes such as the quality control step, the read mapping step (high memory), the variant calling steps (embarrassingly parallel) and the annotation steps, involving different types of software with different hardware requirements. To avoid a data bottleneck, we have constructed, on top of the standard TCP/IP network of A*STAR's next-generation ExaNet, a 500 Gbps Infinera CloudExpress 1 Ethernet link plus Mellanox MetroX and Obsidian Longbow InfiniBand interconnections between Biopolis and Fusionopolis. The 500 Gbps link runs over a dedicated dark fibre and is the fastest point-to-point link in Asia, and the fastest known link in the world dedicated to Next Generation Sequencing (NGS) analytics. By 2017 the whole bandwidth capacity between Biopolis and Fusionopolis will exceed 1.2 Tbps. In 2016, a 1-terabyte-RAM node was installed at GIS Biopolis, linked by InfiniBand to the new NSCC supercomputer with 10 PByte of storage (up to 500 GBytes/sec I/O using DDN's state-of-the-art Infinite Memory Engine), accessing a dedicated genome analytics queue on ASPIRE 1, the 1 PFLOPS NSCC supercomputer at Fusionopolis. This will become one of the world's fastest and biggest distributed genome processing systems and will ensure that genome analytics can be scaled up to thousands of genomes processed routinely per month for biomedical research, which will become current practice in the framework of projects like Genome Asia 100k. 
Conclusions 
The InfiniCortex project, started by A*CRC under the leadership of Dr Marek Michalewicz in 2014, has gained a lot of interest both in Singapore and abroad. The A*CRC group which was responsible for the InfiniCortex project was recognized through several awards during the past few years, the most prestigious one being the Innovative Project Gold Award from the Ministry of Trade and Industry of Singapore in 2015 [1]. InfiniBand connectivity is still regarded as a high-end, HPC-oriented interconnect; however, features such as high bandwidth and security, which are demonstrated advantages over classic TCP/IP, now recommend it for production environments outside the walls of a datacentre. 
The InfiniCortex project could serve as a very useful prototypical infrastructure for a number of Big Scientific Data projects currently being developed: i) distribution of data to the international partners of the Square Kilometre Array [4], ii) streaming of experimental data from large facilities through the Pacific Research Platform collaboration of DoE, ESnet and other partners in the US and elsewhere [3], iii) the Superfacilities vision expressed by DoE [6], and iv) a new architecture for the CERN LHC data processing pipeline focusing on more powerful processing facilities connected by higher-throughput connectivity [13]. 
Data connectivity between key HPC centres and countries has been defined as one of the priority areas of the newly established EuroHPC programme. "HPC is developing to cope with the constant increase in data volumes and flows. A recent report projects that annual global IP traffic will reach 2.3 zettabytes by 2020 or 504 billion DVDs per year." [2] The authors are firmly convinced that the InfiniCortex I project provides a very well defined and tested path to realising some of the goals of the EuroHPC connectivity agenda. 
InfiniCortex - From Proof-of-concept to Production 
• A*STAR Computational Resource Centre* - Singapore
• Oak Ridge National Laboratory (ORNL) - USA
• Fermilab - USA
• Stony Brook University (SBU)* - USA
• George Washington University (GWU)* - USA
• University of Reims Champagne-Ardenne (URCA)* - France
• Poznań Supercomputing and Networking Centre (PSNC)* - Poland
• Interdisciplinary Centre for Mathematical and Computational Modelling (ICM)* - Poland
Locations marked with an asterisk denote locations where Longbow InfiniBand range extenders were installed for the SC16 demo.
1.2. Network Infrastructure 
The InfiniBand network ran on top of the dark-fibre network infrastructure prepared by the A*CRC Network team in collaboration with various networking organisations (SingAREN, TEIN, GEANT, PIONEER, RENATER, Internet2, SCinet). A total of five E100 Longbows were used to connect the SC16 show floor to A*CRC in Singapore, providing a 50 Gbps InfiniBand link. A global WAN InfiniBand network was set up with 4 distinct subnets:
• National Supercomputing Centre, Singapore
• Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), Poland
• A*CRC, Singapore + URCA, France + George Washington University, USA
• Stony Brook University, USA
using Obsidian's BGFC InfiniBand subnet manager [5], capable of InfiniBand routing between subnets.
Figure 3. Screenshot of the bandwidth utilisation during the SC16 demos showing 75-80 Gbps data transfer between Tokyo University of Technology and the A*CRC booth on the show floor
Figure 5. ORNL demonstration screenshot
The mpiblast command is as follows:
mpirun -np 9 --mca btl openib,self --hostfile hostfile mpiblast --copy-via=cp --concurrent=8 --use-parallel-write --query-segment-size=1000 -p blastp -d AR.faa -i testInput.faa -o testOutput.txt
where:
np changes from 9, 17, 33 (for 1 node, 2 nodes and 4 nodes)
concurrent changes from 8, 16, 32 (for 1 node, 2 nodes and 4 nodes)
copy-via specifies use of the system cp command to copy the fragment database files onto the nodes
mca btl specifies openib as the communication protocol
hostfile specifies the host machines being used, for example: 10.1.1.30 (node in the USA), 10.1.1.20 (node in Singapore)
AR.faa is the protein reference database
testInput.faa is the protein BLAST input sequence file
query-segment-size specifies the job size (number of sequences sent from the master MPI process to the worker MPI processes; i.e., for controlling task granularity)
Results of the tests are shown in figures 6 and 7.
Figure 8. rCUDA performance running on 18 GPUs as compared to standard CUDA running on 1 GPU
1.3.4. Demonstrations with Poznań Supercomputing and Networking Centre (PSNC) 
Vitrall (http://apps.man.poznan.pl/trac/vitrall-test) is a distributed web-based visualization system. It is under development at the Applications Department of the Poznań Supercomputing and Networking Center. 
Figure 9. Vitrall distributed web-based visualization system
5,971
2017-05-12T00:00:00.000
[ "Computer Science" ]
Pulling an intruder from a granular material: a novel depinning experiment Two-dimensional impact experiments by Clark et al. [2] identified the source of inertial drag to be caused by 'collisions' with a latent force network, leading to large fluctuations of the force experienced by the impactor. These collisions provided the major drag on an impacting intruder until the intruder was nearly at rest. As a complement, we consider controlled pull-out experiments where a buried intruder is pulled out of a material, starting from rest. This provides a means to better understand the non-inertial part of the drag force, and to explore the mechanisms associated with the force fluctuations. To some extent, the pull-out process is a time-reversed version of the impact process. In order to visualize this pulling process, we use 2D photoelastic disks from which circular intruders of different radii are pulled out. We present results about the dynamics of the intruder and the structures of the force chains inside the granular system as captured by slow and high-speed imaging. Introduction Impact and the corresponding energy loss in granular materials have attracted attention for some time [1]. Recently, there has been considerable work [2][3][4][5][6] about impact in granular materials. When an intruder hits a granular material, such as sand, the velocity of the intruder decreases due to the force exerted by the granular particles. Impact in granular materials is particularly common in many applications, including ballistic applications, meteorite strikes and spaceship landings. The goal of past research has been to understand the dynamics of impact in this complex system of granular material. There are many methods used to extract the drag force law empirically. A common version of the force law combines a depth-dependent static term with an inertial drag term quadratic in the intruder velocity. To observe the intruder motion and the impactor dynamics, Clark et al. [2] tracked the intruder in two-dimensional impact experiments. They derived equations for the force exerted on the intruder, and the results match quite well with the experimental data. In this model, both the static term and the multiplicative factor in the inertial force (the ż² term of the equation) depend on the properties of the intruder and the grains. In addition, multiplicative fluctuations in both the static and inertial forces exerted on the intruder were discovered in these experiments [2][7][8]. However, the characteristics of the force fluctuations still remain unknown. As a comparison and supplement, we carried out controlled pull-out experiments to explore the mechanisms associated with these fluctuations and the nature of the static force. The pull-out process is, to some extent, the time reverse of the impact process. The pull-out process of an object buried in a granular material also has many applications for pilings and buildings. Pull Out Experiment For the pull-out problem, we immediately come across several questions. First, what is the smallest force needed to trigger the pull-out process? Second, how does the granular material respond? Third, what are the dynamics of the intruder? In order to answer these questions, we have carried out pull-out experiments using photoelastic techniques and image processing. Apparatus and Techniques The sketch above (fig. 2) shows the main apparatus used in this experiment. A bidisperse mixture of about 3000 photoelastic particles (with thickness about 0.32 cm and diameters about 0.89 cm and 0.56 cm) is sandwiched between two sheets of transparent Plexiglas. 
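The force law referred to in the Introduction above is not reproduced in this excerpt; the LaTeX snippet below states the form commonly used in the granular impact literature (e.g., by Clark et al. [2]), with a depth-dependent quasi-static term f(z) and an inertial term quadratic in the penetration speed. This is an assumed reconstruction for orientation, not necessarily the authors' exact equation.

```latex
% Commonly used granular impact force law (assumed form, not the authors' exact equation):
% a depth-dependent quasi-static term f(z) plus an inertial drag term h(z)\dot{z}^2.
m\ddot{z} = mg - f(z) - h(z)\,\dot{z}^{2}
```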
The reason for using bidisperse particles instead of monodisperse particles is to avoid crystallization in this granular system. The distance between the two Plexiglas sheets is 0.41cm so that the particles can move smoothly while loosely constrained by Plexiglas sheets. The size of each Plexiglas sheet is 122cm x 92cm. We prepare the buried intruder in each run by tipping the sheets, placing the intruder in a given position and letting particles rain back in. As a result, the particles are randomly packed. The intruder is pulled by a thread connected to a bottle whose weight can be changed by adding water to provide different pulling forces. The recording techniques used in a pull out experiment is almost the same as in an impact experiment ( fig.1), which also can visualize the force structures inside the granular system. Light from several bulbs passes through the first circular polarizer, one Plexiglas sheet, the layer of photoelastic particles, the other Plexiglas sheet and the second circular polarizer before arriving at a camera. Then we acquire high speed video and single images. From these images we first obtain the position of the intruder. From this we track the trajectory of the intruder and derive the velocity and acceleration. The second type of information is the photoelastic visualization of force chains from the polarized image. From the force chains, we can estimate the granular structure and force acting on the intruder. Procedures and Results In a typical experiment, the intruder is buried at a given position in a 2D Plexiglas bed, which is filled with randomly packed photoelastic disks. Then we add load to the puller by adding small amount of water to the bottle. The process is repeated until the intruder is pulled out. As is shown in figure 3, the force chains build up step by step.We can determine the smallest pulling force needed to pull out the intruder. Then we conduct dynamic pull experiments with the smallest pulling force. High speed video visualizes the fast process once the intruder begins to move. At first, the intruder is stuck, but then it starts to accelerate and escapes quickly from the bed. We track the intruder so that we obtain velocity and acceleration curves, as shown in figure 4. For the dynamic pull out experiment, a fast camera taking images with a speed of 1000fps is used to capture the dynamic pull process which lasts about 1 second. As is shown in figure 5, the main part of the v vs. a curves can be fitted with exponential functions v = a * exp (bt) − c , indicating a linear relationship between velocity and acceleration as in figure 4. The slope in figure 4 is the reciprocal of the b factor in the exponential fitting function of velocity. Some loops can be found in figure 4, which is because the grains reorganize during the process creating renewed resistance to the intruder. During that time, the acceleration drops significantly. Additionally, to visualize what is happening during the fluctuations of the acceleration, we plot space-time graphs figure 6, the x-axis is time with unit of frames, and the y-axis is the distance from above the intruder in units of pixels. The right graph of figure 6 is part of a typical experimental image. We average the intensity on an arc, which gives us one point value in the space time graph. Discontinuities in the space time graph are shown in figure 7. At those moments the force chains break and during those moments, there is a local fluctuation in acceleration. 
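As a minimal illustration of the fitting procedure described above, the sketch below fits v(t) = a·exp(bt) − c to a tracked velocity trace with SciPy. The data here are synthetic placeholders; the variable names and noise level are assumptions, not the authors' analysis code.

```python
# Minimal sketch: fit v(t) = a*exp(b*t) - c to an intruder velocity trace.
# The arrays t and v are synthetic placeholders for velocities tracked from video.
import numpy as np
from scipy.optimize import curve_fit

def v_model(t, a, b, c):
    return a * np.exp(b * t) - c

t = np.linspace(0.0, 1.0, 200)                       # time in seconds (placeholder)
v = v_model(t, 2.0, 3.5, 1.8) + np.random.normal(0, 0.05, t.size)

(a_fit, b_fit, c_fit), _ = curve_fit(v_model, t, v, p0=(1.0, 1.0, 1.0))
print(f"a = {a_fit:.3f}, b = {b_fit:.3f}, c = {c_fit:.3f}")

# Since dv/dt = b*(v + c), acceleration is linear in velocity with slope b,
# i.e. the slope of the v vs. a curve is 1/b, consistent with the text above.
```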
After repeating the experiment with different circular intruders, we also find b, the reciprocal of the slope in the velocity vs. acceleration graph, which decreases as the ra- dius increases, as shown in figure 8. However, parameters a and c in the fitting function of velocity are independent of the intruder radius. In order to understand what is happening inside the granular system, we also extract the force chains (bright parts) from high speed photoelastic images, as shown in figure 9. Then we calculate each force chain's curvature, and obtain the distribution of curvatures. The force chain curvatures in the first and last half of the runs obey the same distribution for different intruder sizes, as shown figure 10. Different colors stand for different intruder sizes, while triangle " " and square " " represent first and last half run, respectively. After normalization, the distributions can be collapsed into a single fitting function y = a * exp(−b * |log(x) − c| d ), where a =0.03, b=1.3, c=2.7 and d =1.9. This curvature distribution and its effect on the stick-slips during the pull out process remains a subject for future investigation. Conclusion We have observed the force chain build-up in a 2D granular system during pre-pull experiments, and also investigated the dynamics of the intruder during the pull-out process. The velocity and the acceleration of the intruder follow a linear relationship, as the velocity increases exponentially with time. The b factor in the exponential function decreases with the radius of the intruder. Additionally, fluctuations in the acceleration have been observed and explained through simultaneous breaking of force chains in the granular system. The curvature of each force chain can be calculated, and those curvatures are found to follow the same distribution function for different intruder sizes in both first and last half experimental run.
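To make the quoted curvature distribution concrete, the snippet below evaluates the fitting function y = a·exp(−b·|log(x) − c|^d) with the parameter values reported above (a = 0.03, b = 1.3, c = 2.7, d = 1.9); the example curvature values are placeholders, and a natural logarithm is assumed.

```python
# Evaluate the normalized force-chain curvature distribution fit reported above.
import numpy as np

def curvature_pdf(x, a=0.03, b=1.3, c=2.7, d=1.9):
    # y = a * exp(-b * |log(x) - c|**d); natural log assumed.
    return a * np.exp(-b * np.abs(np.log(x) - c) ** d)

curvatures = np.array([5.0, 10.0, 15.0, 20.0, 30.0])  # placeholder curvature values
print(curvature_pdf(curvatures))
```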
2,009
2017-01-01T00:00:00.000
[ "Physics" ]
RURAL POVERTY AND ITS IMPACT IN SHEN CONGWEN’S SHORT STORY THE HUSBAND Shen Congwen (1902-1988) is a modern Chinese author and poet who is known for his native-soil literature (xiangtu wenxue). The short story The Husband (Zhangfu, 1930) is one of his works in native-soil literature. It tells a story about the character “Husband” who lived as a peasant in the barren and poor Huang Village. Like other husbands in the village, he sent his wife to work in the city to improve the family finances. At the end of the story, the character “Husband” decided to take his wife home to the village after he witnessed firsthand the life in the city and how his wife worked. This study examines how Shen Congwen discussed rural poverty and its impact on Huang Village. This study uses an intrinsic approach that focuses on the description of the social situation and the characterization presented by Shen in the short story. It is then strengthened with the description of the social condition happening in China outside of the literary works. The study finds that the main problem that caused the husbands to send their wives to work in the city is poverty and their desire to improve family finances. The root cause of the poverty is the barren village land and the tax to be paid by the villagers for the upper class. Shen sympathized with the villagers by telling the story of “Husband” and the events happening in the short story to reveal the disarray in 1930’s China. INTRODUCTION Shen Yue Huan better known by his pen name Shen Congwen is one of the popular modern Chinese novelists, short story writers, and poets. Shen Congwen is among the top known as a writer of native-soil literature (乡土文学xiangtu wenxue) 2 . Shen and xiangtu concept are inseparable (Jin, 2014: 33). According to Jiang Chenghao (2019), most of Shen's works are based on characters of the commoners in West Hunan. He used his memory of his hometown and turned the joy and sorrow of the commoners into the background of his story, and presented different life circumstances and conditions in a pure and beautiful manner. By recounting West Hunan as he knew it in his childhood in writing, Shen succeeds in creating "West Hunan World" Xiangxi Shijie, a poetic and beautiful world in the history of Chinese modern literature. Through his works, Shen expressed his appreciation for the beauty of human nature and showed the purity of the human soul (Zhu, 2016:18). Shen Congwen is known for his famous novel The Border Town (Bian Cheng) published in 1934. Besides Bian Cheng, Shen also produced numerous famous works including The Husband (Zhangfu), Long River (Chang He), Xiao Xiao (Xiao Xiao), Sansan (San San), Husband Wife (Fu Fu), Little White Sheep (Xiao Bai Yang), Light (Deng), Long Zhu (Long Zhu) (Li, 2014: 26). The short story The Husband (Zhangfu) (1930) is also categorized as native-soil literature (xiangtu wenxue) (from here on, it will be referred to as the short story The Husband). The short story The Husband tells the story of the character "Zhangfu" or "Husband", one of the residents in Huang Village whose majority of the population worked as peasants. The lives of peasants in Huang Village were not easy, the land there was barren, and they had to give up more than half of their harvest to the people referred to as "upper-class people" (shangmian de ren) in the short story. The villagers had to settle for eating only rice husks mixed with sweet potato leaves to satisfy their hunger. 
The husbands in Huang Village then sent their wives to the city to work for extra money. Similar to other husbands in Huang Village, the character "Husband", who is the main character in the short story, also sent his wife named Lao Qi to work in the city. When "Husband" missed his wife who cannot return home, he decided to visit his wife who worked in a boat. "Husband"'s visit opened his eyes to the work done by his wife. During his short visit, the "Husband" witnessed how his wife sold her body and the things that his wife had to experience while working in the boat. The study of the short story The Husband was carried out by Chen Youming (2004). Chen discussed the intrinsic elements of the short story The Husband. He divided his study into four chapters. The first and second chapters briefly analyze the emotional changes experienced by the character "Husband". The third chapter discusses the writing style and techniques used by Shen Congwen in the short story. The fourth chapter discusses Chen Youming's view after reading the short story The Husband. Chen's study states that one of the unique aspects of the short story The Husband is Chen's wit in using comedic nuances to express a tragedy. Song Guiyou (2006) argued that the major theme of the short story The Husband is the search of "Husband"'s identity. Song wrote that by sending his wife to do "business" by selling her body, "Husband" lost his identity. According to Song, "Husband"'s identity-seeking process is more about a process of finding oneself because the search of "Husband"'s identity carries deeper meaning: the search of hospitality, purity, and the simplicity of one's "hometown". Li Dingchun and Yang Xiayu (2017) discussed in their article several interpretations and studies of the short story The Husband done by scholars. They quoted the works of the scholars which were published in Appreciations of Famous Literary Works (Mingzuo Xinshang) as the data. Li and Yang's discussion is merely about a general summary and not a detailed review of the studies of the scholars, and also not a study of the short story The Husband in particular. Overall, the studies above focused on the discussion of the character "Husband", such as husband's emotional changes, the loss of husband's identity, and the search for husband's identity. There is one thing that I consider important but have not been discussed, namely the main reason the husband decided to send his wife to work in the city. Li and Yang's studies mentioned that the poverty is the reason behind the husband's action to send his wife working in the city. The root cause of poverty among the residents of Huang Village was not discussed. I found that Shen Congwen through the main character "Husband" intends to highlight the rural poverty that befalls Huang Village. This study will reveal the root cause of poverty in Huang Village that made the husbands send their wives to work in the city for extra money. This study examines the intrinsic elements including the characterization and social condition presented by Shen in the short story The Husband. The discussion will also be supported by the description of the social situation happening in China acquired from some data outside of the literary work. The majority of Huang Village's residents were peasants. For peasants, the quality of land holds a crucial role in their livelihood. In the short story, it is told that the land in Huang Village was barren and causing poor yields. 
Taxes to be Paid Despite the poor land conditions, the people of Huang Village, especially the men, still worked hard to cultivate the land which was the only hope they could do to survive. However, no matter how hard people worked on the land, the harvest could never make them live properly. In the end, the people of Huang Village remained deprived and destitute. Apart from the fact that the land was poor that the yields were low, the poverty also cannot be separated from the interference of the people who in the story are referred to as "upper-class people". 饥,总还是不容易对付下去。(236) and more than a half of their poor harvest, as usual, would be taken by the upper-class people. No matter how hard they worked to cultivate the land, the village residents' hands and feet were tied to the land, three months in a year, although they ate rice husks mixed with sweet potato leaves to satisfy their hunger, it was still not easy to make ends meet. Common Phenomenon: Sending Wives to the City The poverty that afflicted the people of Huang Village forced the local residents to find ways to get additional income apart from farming. The solution adopted by young husbands in Huang Village was by involving their wives in earning a living, by sending them to work in the city. 所以许多年青的丈夫,在娶媳妇以后,把她送出来,自己留在家中耕田 种地,安分过日子,也竟是极其平常的事情。 (232) Therefore, a lot of young husbands, after getting married, later sent his wives away (to work), while the husband stayed home, barely made a living and cultivated the land (farming), this was a very common thing. Husbands sending their wives to the city to work were a very common phenomenon done by husbands in Huang Village. This was to earn extra money to improve family finances. The husbands generally did not mind as their "property" rights to their wives and children remained, and the husbands even enjoyed the extra money earned by the wives. This way, the economic responsibility of the family shifted to the wife. The wife became the breadwinner to support the family. (1) 他们从乡下来,从那些种田挖园的人家,离了乡村,离了石磨小牛,离 了那年青而强健的丈夫,跟随了一个同乡熟人,就来到这船上做生意 了。 (230) They came from the village, from peasant families, left their village, left their stone mill and calf, left their young husbands who were strong and healthy, to join people from the same village that they know very well, came to these boats to do business. Although located in mountains, only 30 li from the river pier, it was common for women to leave their village to earn a living, and the men understood all the benefits of doing the business. He understood very well, that by status, his wife would still be his, the children she raised would also be his, after she got the money, he would always get his share." (3) 事情非常简单,一个不亟亟于生养孩子的妇人,到了城市,能够每月把 从城市里两个晚上所得的钱,送给那留在乡下诚实耐劳、种田为生的丈 夫,在那方面就过了好日子,名分不失,利益存在。(230;232) It is all so simple, a wife that did not hurry to bear and raise a child, when she arrived in the city, every month she could send the money she earned for two nights in the city to her honest and hard-working husband who cultivated the land in the village. From that side, they could live in comfort. The husband could get additional money, and he was still the legal husband of his wife. THE REALITY THAT "HUSBAND" SAW IN THE CITY The character "Husband" written by Shen Congwen was one of the husbands from Huang Village. This character does not have a name, and it is only referred to as "Zhangfu" or "Husband". Similar to the husbands in his village, he sent her wife named Lao Qi to work in the city. 
His wife worked "doing business" in a boat that usually docks on the banks of the river in the city. The character "Husband", who is the main character in the short story, is depicted a little different from the husbands in Huang Village in general. It is shown by how "Husband" came to the city, intending to visit his wife who could not return to the village. In her workplace, "Husband" gradually witnessed things that bothered his mind and started to feel relentless. 这时节,女人在丈夫眼下自然已完全不同了。(232) For by now, naturally in her husband's eyes the wife had changed out of all recognition. Once in the city and met Lao Qi, "Husband" was surprised to see the changes in his wife. After working in the city, Lao Qi was no longer like the Lao Qi he knew in the village. Both Lao Qi's appearance and manner seemed to follow the style of the city people. These were the changes in wives seen by "Husband" in terms of appearance and speech: The big shiny hair bun, the thin eyebrows plucked using tiny tweezers, white powder and bright red blushes on her face, and the atmosphere and style of city people, all of these things surely make "Husband" who was from the village feel greatly surprised, slightly dazed. The wife understood that very easily. The wife then opened the conversation by asking: "have you received the money, the five kuai?" or asked: "has our pig gave birth?" When her wife spoke, her accent was completely different, changed into the generous and free style of city ladies, completely different from the shy and timid wives from the village. From the quote above, it is clear that in "Husband"'s eyes, his wife drastically changed. Lao Qi is no longer a village woman who was shy and timid. She had changed into a city woman who is generous and free. The Treatment of the Guests to the Wife During his stay for three days and two nights in the boat where his wife worked, "Husband" met the guests of the boat. "Husband" finally saw how his wife worked in the boat, serving the coming guests. He finally realized that his wife earned money from prostitution. In the boat, "Husband" witnessed with his own eyes the mistreatment of the guests to his wife. The following is the depiction of the mistreatment of the wife's guests who used their power and authority to their personal pleasure of spending the night with Lao Qi:  River Warden (水保shui bao) River warden is the person who is responsible for various affairs and has the highest position in the river area. He is the "godfather" (干爹gan die) of many prostitutes there. (1) 水保是个独眼睛的人。这独眼据说在年青时因殴斗杀过一个水上恶人, 因为杀人,同时也就被人把眼睛抠瞎了。但两只眼睛不能分明的,他一 只眼睛却办到了。一个河里都由他管事。他的权力在这些小船上,比一个 中国的皇帝、总统在地面上的权力还统一集中。 (236) The river warden was a man who only had one eye. Rumor had it that this oneeyed man was involved in a fight with a criminal in the river when he was young and killed him, that is why he also lost an eye because of that man. However, he seemed to see clearer with one eye than with two. He was in charge of the river. His power over these small boats was more integrated and concentrated compared to that of the Emperor and the President of China had over the land. (2) 做水保的人照例是水上一霸,凡是属于水面上的事情他无有不知。 (238) A river warden was the ruler of the river, there was no matter related to the river area that he did not know. The character "Husband" first met the river warden when he was on patrol checking the boats. When the river warden visited Lao Qi's boat, he cannot find Lao Qi, the pimp, and Wuduo. He instead met a stranger who claims to be Lao Qi's husband from the village. 
At first, "Husband" was timid and stuttered when talking to the river warden, but after the river warden asked him about his life in the village, "Husband" gradually dared to tell him stories. 问题可多咧。(244) Because of the chestnut, the young man who cannot speak finally gained sympathy. Everything he knew about his village, he told them to the river warden. 说到了。(248) The "Husband" thought that the river warden really understood him, hence he kept on talking about everything. Even his hopes to have a baby next year, the things that are more suitable to be discussed in bed with his wife were also discussed by him. "Husband" finally found someone who can sympathize with him. The river warden also treated "Husband" well, he even invited "Husband" to have dinner together as a friend. After the river warden left the boat, "Husband" started to rethink the meaning of the message the river warden left for Lao Qi. Relentlessness grew in the heart. The waist pocket that seemed like full of money, so arrogant, reappeared before his eyes, and it seized his peace of mind. The square, red, orange peel face expression, that now looked like it was made of wine and dark blood, had become hatred for him, a gaze that now burnt his memory. And what is the point of remembering? He could still remember the order, sent to his face -her husband! "Tell her to not receive any guest tonight. I am coming." To hell with that, rude words straight out of his big mouth! Why did he have to say that? What's the reason he said that? At first, "Husband" had no problem with the message from the river warden, he was too happy because he was considered as a friend by the river warden. However, after ruminating the words of the river warden, "Husband" finally realized what the message means. If you pay close attention, the sentence above has an ambiguous meaning. The sentence said by the river warden implies, "because I am coming, I forbid you to allow other people to visit, only I can meet Lao Qi tonight." As the person who had power over the boats in the area, the river warden acted capriciously. He used his power as an authority in the river area, without any reluctance to dare to say the sentence in front of "Husband". This action indicates that the river warden did not respect "Husband" as Lao Qi's legitimate husband. Lao Qi was still "Husband"'s lawful wife. After being angry with his assumption, "Husband" had decided to return to the village. However, once again he was discouraged because his wife persuaded him to stay in the boat.  Patrolmen The group of officers led by the river warden comes by at midnight. They were the patrolmen from the center who were in charge of searching the boats on the docks to find suspicious people because of the frequent theft incident. There were four heavily armed officers keeping guard outside of the boat, while the river warden and the patrolmen leader went inside. The pimp woke "Husband" up from his sleep and pulled him out. After finishing the search, they went to the next boat. Suddenly, a patrolmanreturned to give the pimp a message to Lao Qi: "大娘,你告老七,巡官要回来过细考察她一下,你懂不懂?" "Madam (the pimp), please tell Lao Qi that our officer will come back for a more in-depth search. Understand?" 大娘说,"就来么?" The pimp answers "Is he coming now?" "查完夜就来。" "Later, after he finishes the patrol." "当真吗?" "Seriously?" "我什么时候同你这老婊子说过谎?"(264;266) "Have I ever lied to you, you old bitch?" There is no explanation as to what the patrol officer meant by "in-depth search" to Lao Qi. 
Even so, it can be concluded that the patrol officer intended to "book" Lao Qi for the night; thus, taking away "Husband" chance to spend the night with his wife again. The Appearance of Huang Village's Important Person in the Boat Since the first night "Husband" stayed in the boat where Lao Qi worked, their boat had received a guest. It was Lao Qi's first guest seen by "Husband" since he arrived. In the eyes of "Husband", the guest's appearance looked like a boat owner or a merchant, and he had some characteristics that remind "Husband" to an important person in his village. (234) there came a guest, probably a boat owner or a merchant, wearing boots made of cowhide leather, holding the corner of his pocket that revealed a thick and shiny silver chain, he drank a lot of Shao wine (white wine) and staggered aboard the boat. Once he got on he shouted loudly asking for a kiss on his lips and wanting to sleep. His loud and arrogant voice, and his behavior, reminded "Husband" to the head of the village and the important people in his village. The quote above illustrates that the style and the voice of the guests first seen by "Husband" on the boat reminded him of the head of the village or the important person in his village. This memory made him feel that this guest is someone who holds a high position. This makes "Husband" felt reluctant. The reluctance is shown in this depiction: 于是这丈夫不必指点,也就知道往后舱钻去,躲到那后梢舱上去低低的 喘气,一面把含在口上那皮烟卷摘下来,毫无目的的眺望河中暮景。 (234) So, the "Husband", without being told, immediately knew that he had to sneak to the rear cabin, hid at the end of the rear cabin with his breath held. While taking off the cigarettes on his lips, he then saw the sight of the twilight in the river aimlessly. "HUSBAND"'S FINAL DECISION His first intention to visit his wife triggered by the feeling of longing gradually changed after witnessing how his wife worked in the city and seeing the things that his wife had to face in her workplace. On the third day of his visit to the city, "Husband" can no longer hold his feelings, and finally decided to come back to his village and bring his wife, Lao Qi, home. Husband was determined to leave; Lao Qi was in dilemma. She got off the boat for a while, she turned back and took out the pay given by the soldier last night from her purse, counted it, four in total. She put one of them on the left hand of the man. "Husband" did not say anything. Lao Qi seemed to understand the "Husband"'s silence, and said: "Madam, please give me the other three." The pimp took out the money, and Lao Qi put the money into the man's right hand. The pimp's invitation to watch the opera, the invitation to a feast by the river warden had all gone unnoticed. The money given by his wife could no longer make him stay. 男子摇摇头,把票子撒到地下去,两只大而粗的手掌捣着脸孔,象小孩 子那样莫名其妙的哭了起来。(268) The man shook his head and threw the money on the ground. His big and thick hands were pressed against his face, and he suddenly cried like a child. At the end of the story when the river warden came to the boat intending to take "Husband" to the feast he had promised, the river warden could not find "Husband" and Lao Qi. The events experienced by "Husband" during his stay to the city finally prompted "Husband" to take Lao Qi home to the village, leaving the boat where she earned extra money. The event when "Husband" cried and threw away the money from his wife can be interpreted as "Husband"'s remorse for letting his wife working in the city. 
The reality of the wife's life and livelihood witnessed by the character "Husband", who happens to be able to visit his wife in the city, finally prompted him to take her home to the village. This is the representation of the feelings of the husbands who are not willing to let their wives working in the city if they find out about their job. It is an occupation that he had never thought of before when he was in the village. 一点点收成照例要被上面的人拿去一大半。(236) and more than a half of their low yields were seized by the upper-class people, as usual. The narrative pieces about the "upper-class people" in the short story do not clearly refer to anyone. The "upper-class people" likely refers to the people who have power and higher position than the local residents who can make them gave up their yields. Besides that, there is also the word zhaoli (照例) in the quote which means "as usual" in the sentence "and more than half of their low yields, were seized by the upper-class people, as usual." This indicates that at that time, the residents were forced to pay taxes not only once or twice but repeatedly and regularly. In the short story, as explained in point 2.3, the description of the wife's guests in terms of their appearance, style, manner, and accent reminded "Husband" to the head of the village and powerful people in his village. Through this statement, Shen Congwen seemed to illustrate that the "upper-class people" in Huang Village also appeared in the city, through his wife's guests. China's Real Condition in the Era Shen Congwen did not specifically mention the time setting of the short story The Husband. Based on the time record when the short story The Husband was written, found at the end of the short story, 1930, it can be assumed that the description in the short story will not be far from the real condition at the time. The condition of China at the beginning of the 20 th century was indeed unstable. China was not ruled by one particular group, and at that time China did not yet have one strong leader. China, which was previously a monarchy, turned into a republic. When Xinhai Revolution succeeded in overthrowing Qing Dynasty, and Sun Yatsen was appointed as the President of the Republic of China at the beginning of 1912, it did not necessarily free Chinese people from the oppression of their own and foreign nations. At that time, the Warlords were rampant that Sun Yatsen was not able to continue ruling as the President of the Republic of China (Muas, 2015: 82). The Warlords period occurred between the years 1916 to 1928. These years are close to the time when the short story The Husband was written, the year 1930. After the death of Yuan Shikai 3 in 1916, Chinese people had to survive against a series of internal conflicts caused by the disintegrations of several factions of the Beiyang army that competed for power. These factions were called Warlords. Hundreds and even thousands of Warlords were wide spread in areas across China. Their power varied greatly, as some troops were only consisting of a handful of people, but some troops were fully-armed and consisted of hundreds of people. Because the motivation of the Warlords was power and wealth, some Warlords sometimes acted like bandits and looted their areas. It was difficult to distinguish between Warlords and bandits. 
Sometimes bandits were in big numbers, similar to Warlords, but the main difference between the two is bandits did not have any rights in an area, while Warlords, on the other hand, could collect taxes (Sardjito, 1987: 12-13). CONCLUSION The word husband in the short story The Husband by Shen Congwen refers to two things; first, to the husbands of Huang Village in general, and second, to the main character of the short story called "Husband". At the beginning of the story, Shen Congwen's portrayal of "Husband" is the general image of husbands who usually sent their wives in the city to work as a shortcut to earn extra money to fulfill family needs. Afterward, the portrayal of husband shifts to one character "Husband", which starts when the character "Husband" visited the city to see his wife, Lao Qi. Similar to husbands in Huang Village in general, "Husband", the main character of the short story, at the beginning also sent his wife to the city. At the end of the story, the character "Husband" decided to go back and take his wife home after seeing inappropriate things experienced by his wife in the city. The character "Husband" is presented by Shen Congwen as the tool to reveal village life and its problems. The main problems that cause husbands to send their wives to the city are the poverty and the desire of the village residents to earn extra money for family finances. The root cause of the poverty is the barren village land and taxes paid to the "upper-class people" (shangmian de ren). Upper-class people or people in power did not mind that their acts caused the peasants to live in poverty. The "upper-class people" with their power could not only be found in the village, but also in the city, who were represented by the guests that came for Lao Qi. In other words, these "upperclass people" can be found everywhere. These social problems, if linked to the time when the short story was written, 1930, were likely to refer to the Warlord period (1916)(1917)(1918)(1919)(1920)(1921)(1922)(1923)(1924)(1925)(1926)(1927)(1928). The Warlords at that time had a great desire to maintain and expand their territory. To this end, they tried to collect money by taking local income 3 Yuan Shikai is a general in Qing Dynasty. Yuan promised to give up the authority of Qing Dynasty in full to Sun Yatsen by putting forward several conditions from the Qing. Eventually, to prevent bloodshed and worse chaos, Sun Yatsen resigned as the President of the Republic of China and handed it over to Yuan Shikai (Chesneaux, 1977: 7). received from taxes from the peasants. The short story The Husband portrays how the actions of the "upper-class people" impacted the peasants in several ways, which are the increasingly impoverished life of the peasants and the missing role of wives in the family since they had to work in the city as the breadwinner. These points show why Shen Congwen is referred to as native-soil literature (xiangtu wenxue) writer. Shen has great sympathy for the village community, and through his writing, he created characters and events happening in the short story TheHusband to expose the disarray in China in 1930.
6,641.6
2021-04-25T00:00:00.000
[ "Economics" ]
An in silico deep learning approach to multi-epitope vaccine design: a SARS-CoV-2 case study The rampant spread of COVID-19, an infectious disease caused by SARS-CoV-2, all over the world has led to over millions of deaths, and devastated the social, financial and political entities around the world. Without an existing effective medical therapy, vaccines are urgently needed to avoid the spread of this disease. In this study, we propose an in silico deep learning approach for prediction and design of a multi-epitope vaccine (DeepVacPred). By combining the in silico immunoinformatics and deep neural network strategies, the DeepVacPred computational framework directly predicts 26 potential vaccine subunits from the available SARS-CoV-2 spike protein sequence. We further use in silico methods to investigate the linear B-cell epitopes, Cytotoxic T Lymphocytes (CTL) epitopes, Helper T Lymphocytes (HTL) epitopes in the 26 subunit candidates and identify the best 11 of them to construct a multi-epitope vaccine for SARS-CoV-2 virus. The human population coverage, antigenicity, allergenicity, toxicity, physicochemical properties and secondary structure of the designed vaccine are evaluated via state-of-the-art bioinformatic approaches, showing good quality of the designed vaccine. The 3D structure of the designed vaccine is predicted, refined and validated by in silico tools. Finally, we optimize and insert the codon sequence into a plasmid to ensure the cloning and expression efficiency. In conclusion, this proposed artificial intelligence (AI) based vaccine discovery framework accelerates the vaccine design process and constructs a 694aa multi-epitope vaccine containing 16 B-cell epitopes, 82 CTL epitopes and 89 HTL epitopes, which is promising to fight the SARS-CoV-2 viral infection and can be further evaluated in clinical studies. Moreover, we trace the RNA mutations of the SARS-CoV-2 and ensure that the designed vaccine can tackle the recent RNA mutations of the virus. . Schematic Diagram of In Silico Vaccine Design Process. (A) Traditional in silico vaccine design process. We have to use numerous vaccine design tools. The evaluation and subunits selection is very time consuming. No current tool is able to include all the predictions to comprehensively analyze and select out the best vaccine subunits directly. (B) In silico vaccine design by DeepVacPred framework. By replacing the many predictions, evaluations and selections with a DNN architecture inside the DeepVacPred framework, we are able to directly predict a very small number of potential vaccine subunits within a second and start the following evaluation and vaccine construction on a much smaller amount of data. www.nature.com/scientificreports/ the one step of B-cell epitope prediction, and when it comes to T-cell epitope prediction, a different tool such as NetMHCpan 26 is needed. No current tool is able to conduct multiple predictions and comprehensively analyze the results for us at once to directly identify the best vaccine subunits for further construction and evaluation. To overcome the above challenges of the in silico vaccine design, we propose DeepVacPred, a novel AI-based in silico multi-epitope vaccine design framework. We successfully replace the multiple necessary predictions and the comprehensive evaluations with a deep neural network (DNN) architecture. When the DNN takes one peptide sequence as input, it can then judge whether this input sequence can be a potential vaccine subunit. 
In the DeepVacPred framework, the number of potential vaccine subunits can be firstly reduced to around 30, then further evaluation and vaccine construction is done on the predicted subunits by reliable and popular in silico methods to construct the final vaccine. Our novel approach aims to achieve a much better efficiency of the in silico vaccine design. With DeepVacPred, this study designs a multi-epitope vaccine in a novel in silico fashion. We first use the DNN architecture to lock down 26 fragments in the SARS-CoV-2 spike protein as vaccine subunit candidates. Next, we predict the linear B-cell epitopes, CTL epitopes and HTL epitopes to select and construct our final vaccine. We further analyze the human population coverage, antigenicity, allergenicity, toxicity and other physicochemical properties to validate the quality. We also predict the secondary structure and 3D structure model. This model is eventually refined and validated. Finally, the codon optimization and in silico cloning are performed to check the vaccine genome and protein constructions and ensure its effective expression. In addition, DeepVacPred allows us to quickly check for newly emerging threats caused by the RNA mutations of the SARS-CoV-2. We prove that our vaccine can tackle the virus RNA mutations. DeepVacPred Background. An in silico vaccine design process can be seen as selecting good fragments of the virus proteins, then constructing them together into a final vaccine 24 . A fragment with multiple merits can be selected as a subunit of the final vaccine. For example, an ideal subunit should contain multiple B-cell epitopes and T-cell epitopes and it should have high antigenicity to trigger human protective reactions 22,23 . These merits can be predicted by in silico approaches and currently there are numerous in silico vaccine design tools. However, these tools are designed to address only one of the several predictions at a time. Consequently, researchers have to overcome the time-consuming tasks of analyzing each individual prediction result from different tools while adopting a comprehensive view of the vaccine design. No current tool can take all the necessary merits into consideration and directly predict the vaccine subunit candidates from the virus proteins. There are two drawbacks to the current situation: (i) We usually need only the best 10-20 subunits to construct the final vaccine while each prediction tool may provide us with hundreds or even thousands of potential locations to choose, which creates a large overhead to comprehensively select out the subunits we need and no current tool can achieve both the prediction and the selection for us. (ii) Nearly 90% prediction results are eventually discarded because they have only part of the merits, resulting in too much of unnecessary analysis and wasting many computing resources. Consequently, traditional approaches may produce vaccines that are too late or ineffective for pandemics. In order to improve the efficiency and reliability of the vaccine design process, we improve over state-of-theart tools by providing a DNN approach, DeepVacPred, an efficient in silico vaccine design process to address the afore-mentioned concerns. DeepVacPred directly predicts the best vaccine subunit candidates (the number is within 30) from the virus protein sequences within a second by replacing the prediction and selection with deep neural network architecture, hence promising much higher efficiencies for the vaccine design and test process. 
Data collection and dataset design. Reliable data is essential for the performance of supervised learning 27; thus, it plays a crucial role in the outcome of the vaccine design process. We collected 5000 of the latest known B-cell epitopes (B) and 2000 known T-cell epitopes containing both MHC (major histocompatibility complex)-1 and MHC-2 binders 28 (T) from the IEDB database, and combined them with the same number of proteins which are not T-cell or B-cell epitopes, forming a dataset of epitopes and non-epitopes. 100 of the latest known viral protective antigens are selected from the IEDB database, and the same number of proteins without protective functions are randomly selected; combined with the 400 antigens from previous work 29, these form a dataset with 600 antigens. DeepVacPred is built based on supervised learning on a subtly designed dataset. To directly predict the vaccine subunit candidates, the protein sequences in the positive dataset must contain at least one T-cell epitope and one B-cell epitope and must be protective antigens. The Cartesian product 30 is the set that contains all ordered pairs from two sets. Thus, the two Cartesian products, T × B and B × T, which are formed between the collected B-cell epitope dataset and the T-cell epitope dataset, cover all the possible combinations of the known B-cell and T-cell epitopes. We use the 600 antigens to train a neural network that can identify protective antigens. We use this neural network on the Cartesian products to sieve out 706,970 peptide sequences that are predicted to be protective antigens. Those 706,970 peptides contain both B-cell epitopes and T-cell epitopes and are protective antigens, referred to in this paper as the positive vaccine dataset. The same number of peptides, randomly bridged from negative T-cell and B-cell epitopes, form our negative vaccine dataset. The dataset we design addresses the three most important predictions in the vaccine design process: B-cell epitopes, T-cell epitopes and antigenicity. All the datasets we collected, designed and created for the DNN training can be found in the Data Availability section. The descriptions of each dataset are shown in Table 1. The sequences in the positive and negative datasets are annotated by Z-descriptors 31, then converted to vectors of the same length (45) with the auto cross covariance (ACC) transformation 32. Trained on the transformed dataset above, the DNN achieves the classification function to predict whether the input is a protective antigen containing both B-cell and T-cell epitopes, realizing the ability to directly judge whether a sequence can be a potential vaccine subunit. This DNN is the core part of the rapid vaccine design process of our DeepVacPred framework, and we name it DNN-V. In addition, we train another DNN with the same structure on the T-cell epitope dataset, which can judge whether an input sequence can be a T-cell epitope, and we name it DNN-T. The detailed neural network structures, training process and hyper-parameters can be found in "DNN Design and Training in DeepVacPred Framework" in the Methods section. Validation. ROC curves. A receiver operating characteristic (ROC) curve is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied 33. DNN-V is a novel approach that needs to be validated. We use ROC curves to evaluate the DNN-V in DeepVacPred. 
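A minimal sketch of the Cartesian-product dataset construction described above is given below. The epitope lists and the is_protective_antigen filter are hypothetical placeholders standing in for the IEDB-derived epitope datasets and the trained antigen-classifier network; they are not the authors' code or data.

```python
# Sketch: build positive vaccine-dataset candidates as Cartesian products T x B and
# B x T of known T-cell and B-cell epitopes, then keep only predicted protective antigens.
from itertools import product

t_epitopes = ["KIADYNYKL", "GVYFASTEK"]              # placeholder T-cell epitopes
b_epitopes = ["NITNLCPFGEVFNAT", "SYGFQPTNGVGYQ"]    # placeholder B-cell epitopes

def is_protective_antigen(seq: str) -> bool:
    """Placeholder for the antigen-classifier neural network described above."""
    return True

candidates  = [t + b for t, b in product(t_epitopes, b_epitopes)]   # T x B
candidates += [b + t for b, t in product(b_epitopes, t_epitopes)]   # B x T

positive_dataset = [s for s in candidates if is_protective_antigen(s)]
print(len(positive_dataset), "positive sequences (each pairs a T- and a B-cell epitope)")
```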
Validation. ROC curves. The receiver operating characteristic (ROC) curve is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied 33 . DNN-V is a novel approach that needs to be validated, so we use ROC curves to evaluate the DNN-V in DeepVacPred. We test the trained DNN-V with two datasets, a training set and a test set, each containing 200 protein sequences. The training set contains 200 proteins randomly selected from the dataset used to train DNN-V, with 100 positive and 100 negative protein sequences. We also selected known B-cell and T-cell epitopes that are not in our collected data and applied the steps above to form the testing set, also with 100 positive and 100 negative protein sequences. The ROC curves are shown in Fig. 2 and the validation data appear in Table 2. The thresholds range from 0 to 1. The accuracy reported in Table 2 is the greatest value among all thresholds, and the sensitivity and specificity values in Table 2 are reported for the threshold with the highest accuracy. The AUC (Area Under the ROC Curve) value of 0.9703 on the test set indicates the high accuracy of DNN-V in identifying potential vaccine subunits.
Vaccine design test. The false negative rate falls to 0 if we set the decision threshold to a very low value, e.g., 0.0003, since at this stage we only care about not discarding any true candidate; the extra false positives are removed in the later selection steps. We use the DNN-V in our DeepVacPred framework on the 1273aa spike protein sequence of the SARS-CoV-2, and 132 vaccine candidates are predicted. We use BepiPred 25 , NetMHCpan 26 and Vaxijen 34 to examine each candidate. All of the candidates contain both T-cell and B-cell epitopes, and only 14 of them are predicted by Vaxijen to be non-protective antigens.
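For readers who want to reproduce this kind of ROC analysis on their own classifier outputs, a minimal example with scikit-learn is shown below; the labels and scores are made-up illustrations, not the values behind Fig. 2 or Table 2.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Placeholder labels and predicted probabilities for a binary classifier such as DNN-V.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.81, 0.12, 0.67, 0.45, 0.08, 0.98, 0.33])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
auc = roc_auc_score(y_true, y_score)

# Accuracy at every threshold; the reported accuracy is the best one over all thresholds.
accuracies = [np.mean((y_score >= t) == y_true) for t in thresholds]
best = int(np.argmax(accuracies))
print(f"AUC = {auc:.4f}, best accuracy = {accuracies[best]:.3f} at threshold {thresholds[best]:.2f}")
```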
DeepVacPred framework. Figure 1B provides the schematic diagram of the vaccine design process using the DeepVacPred framework. DeepVacPred first uses DNN-V to predict a very small number of potential vaccine subunits directly from the virus protein sequences, and further uses DNN-T to examine all the overlapping sequences in these subunits and select the subunit candidates that have multiple T-cell epitopes. These two prediction rounds take less than 1 s and reduce the number of potential vaccine subunits to around 30. Compared to traditional approaches, the most time-consuming subunit selection can be done by DeepVacPred within less than a second, saving a large amount of time and computational resources. The remaining steps in the DeepVacPred framework are: (i) selecting the best subunits from only about 30 candidates and (ii) constructing the final vaccine based on the evaluations by various reliable in silico tools, including linear B-cell epitope prediction, CTL and HTL epitope prediction, population coverage analysis, vaccine construction, evaluation of antigenicity, allergenicity, solubility, immunogenicity, toxicity and other physicochemical properties, structure prediction, 3D modeling, in silico cloning, molecular docking and molecular dynamics simulation. Compared to the popular computational process, those evaluations are done on a much smaller amount of data, hence improving the efficiency.
Results
Data retrieval. The genome sequence of SARS-CoV-2 isolate Wuhan-Hu-1 is retrieved from the NCBI database with accession number MN908947 35 . The protein sequences are retrieved according to their translation. In particular, the spike protein (protein ID: QHD43416.1) has a length of 1273 amino acids (aa).
DeepVacPred vaccine subunits prediction. All the overlapping protein fragments with a length of 30aa are generated from the 1273aa SARS-CoV-2 spike protein sequence. DeepVacPred first tests these 1244 30aa protein sequences and predicts 132 potential vaccine subunits (see Table 3). The DeepVacPred framework further predicts the T-cell epitopes at these locations and discards the subunits that have fewer than 8 T-cell epitopes 36 . After this prediction, DeepVacPred provides 26 potential vaccine subunits for further evaluation and construction (see Table 4). These subunits are very likely to contain B-cell epitopes and multiple T-cell epitopes, and they are also very likely to have high antigenicity and low allergenicity. We start the following in silico vaccine design process directly from the predicted 26 vaccine subunits, which is very efficient.
Linear B-cell epitopes prediction. B-cell epitopes are portions of antigens that bind to immunoglobulins or antibodies to trigger the B-cells to provide an immune response 37 . Linear B-cell epitopes are predicted on the 26 vaccine subunits by four online servers: BepiPred 25 , SVMtrip 38 , ABCPred 39 and BCPreds 40 . We first use BepiPred for the main prediction and use the other three servers to check its results; a B-cell epitope predicted by BepiPred is discarded if it is not predicted by any of the other three servers. B-cell epitopes must be located in the solvent-exposed regions of the antigens to be able to bind the B-cell receptor 37 , thus it is essential to predict the surface availability of the structural protein sequence. The surface availability is predicted by the Emini tool 41,42 on the whole SARS-CoV-2 spike protein sequence, and we discarded the epitopes that are not exposed on the surface. After these predictions, we select 14 vaccine subunits (see Table 5). We further use the RaptorX Property server to evaluate the surface accessibility of the SARS-CoV-2 spike protein and validate that the B-cell epitopes in those subunits are well exposed (see Fig. 3; red marks exposed residues, yellow medium exposed residues and blue buried residues). The B-cell epitopes in the 14 vaccine subunits are well exposed according to the surface accessibility prediction, showing good potential for the B-cell receptor to interact with the virus and trigger the immune response.
Cytotoxic T lymphocytes (CTL) epitopes prediction. Cytotoxic T lymphocytes (CTL) recognize infected cells by using MHC class I molecules to bind certain CTL epitopes 26 . We use the NetMHCpan 4.1 server 43 to predict potential CTL epitopes. All the overlapping 9aa peptide sequences in the 14 vaccine subunits are tested with the 12 most common human-leukocyte-antigen (HLA) class I alleles, including HLA-A1, HLA-A2, HLA-A3, HLA-A24, HLA-A26, HLA-B7, HLA-B8, HLA-B27, HLA-B39, HLA-B44, HLA-B58 and HLA-B62, to evaluate their binding affinities and predict potential CTL epitopes 26,44 . The total HLA score is calculated for each vaccine subunit. The results are shown in Table 6.
Helper T lymphocytes (HTL) epitopes prediction. Helper T lymphocytes (HTL) help the activity of other immune cells and recognize the infection by using MHC class II molecules to bind certain HTL epitopes 45 . We use the NetMHCIIpan 4.0 server 46 to test the overlapping peptides against common HLA class II alleles, including HLA-DRB1-1601, to evaluate their binding affinities and predict the potential HTL epitopes 45,47 .
The total HLA score is calculated for each vaccine subunit. The results appear in Table 7.
Worldwide human population coverage analysis. The vaccine we design should have wide human population coverage. We use the IEDB population coverage analysis tool 48 to evaluate the worldwide human population coverage of the 14 vaccine subunits. The 25 HLA alleles we used to predict the T-cell epitopes can cover 98.39% of the human population. The human population coverage of each vaccine subunit is shown in Table 8. The results suggest that our 14 vaccine subunits can cover a very wide range of the human population.
Multi-epitope vaccine construction. We discard Subunits 9, 15 and 26 for their poor performance in the CTL and HTL epitope predictions. We use the remaining 11 vaccine subunits to construct a final multi-epitope vaccine (see Fig. 4). To avoid potential autoimmunity, we perform a BLASTp screening against the UniProt database on those 11 vaccine subunits; a subunit with higher than 35% identity would be considered homologous with the human proteome. Among the 11 vaccine subunits we choose for the final vaccine construction, none shows a high degree of homology with the human proteome. The final vaccine contains an adjuvant, 50S ribosomal protein L2 49,50 (accession no. AXI95322.1), to improve the immune response 51 , linked to the amino (N) terminus of the multi-subunit sequence through an EAAAK linker 52 . The multi-subunit sequence has a CTL multi-epitope peptide region followed by an HTL multi-epitope peptide region. The CTL region is constructed from the 6 subunits with better performance in the CTL epitope prediction, fused by AAY linkers 52 . The HTL region is constructed from the 6 subunits with better performance in the HTL epitope prediction, fused by GPGPG linkers 52 . The two regions are linked through a GPGPG linker. In addition, Subunit 5 is used twice, in both the CTL and HTL regions, for its good performance in both epitope predictions. In the end, a 6xHis tag is added at the C-terminus.
Antigenicity, allergenicity and solubility evaluation. The antigenicity of the final multi-epitope vaccine sequence is evaluated by the Vaxijen 2.0 online server 34,54 and the AntigenPro server 55 . We also evaluate the antigenicity of each vaccine subunit, including the adjuvant (see Table 9). The Vaxijen score for the whole final vaccine is 0.5705 with the virus model at a threshold of 0.4, suggesting high antigenicity of our final vaccine. The AllergenFP 1.0 server and the AllerTOP 2.0 server 56 predict the final vaccine and every one of its subunits to be non-allergenic (see Table 9). The solubility of the final vaccine and every subunit is evaluated by SolPro 57 and the Protein-sol server 58 ; the predicted values suggest that our final vaccine and its subunits have good solubility (see Table 9).
Toxicity and physicochemical properties analysis. The vaccine must not have toxicity potential, and the physicochemical properties are also important for evaluating how the vaccine interacts with its environment 59 . We use the ToxinPred server 60 to predict the toxicity. Other physicochemical properties, including hydropathicity, charge, half-life, instability index, pI (theoretical isoelectric point value) and molecular weight, are predicted by the ExPASy ProtParam Tool 61 .
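The ProtParam-style indices mentioned above can also be reproduced locally with Biopython for a quick sanity check; the sequence below is an arbitrary placeholder, not the actual vaccine construct.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

vaccine_seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # illustrative placeholder only

pa = ProteinAnalysis(vaccine_seq)
print("GRAVY (hydropathicity):", round(pa.gravy(), 3))             # negative -> hydrophilic
print("Instability index:     ", round(pa.instability_index(), 2))  # < 40 -> stable
print("Theoretical pI:        ", round(pa.isoelectric_point(), 2))
print("Molecular weight (Da): ", round(pa.molecular_weight(), 1))
```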
For the whole final vaccine sequence and the adjuvant sequence, we use the protein screening mode in the ToxinPred server to check all overlapping peptides with length no more than 50 aa (Table 10; NT: non-toxic). The whole vaccine and the adjuvant do not contain any toxic peptide. The other subunits and the 6xHis tag are checked by the SVM prediction mode in the ToxinPred server, and all of them are non-toxic. The hydropathicity value of the final vaccine is predicted to be −0.521; this negative value suggests that our final vaccine is hydrophilic in nature and can interact with water molecules easily 62 . The instability index, being less than the threshold value of 40, suggests that our final vaccine is stable. The pI of the final vaccine is calculated to be 9.75, an alkaline value, indicating that it is highly basic in nature. The molecular weight of the final vaccine is calculated to be 76 kDa. We also check the toxicity and physicochemical properties of every subunit, and the results are shown in Table 10.
Secondary structure prediction. We use PSIPRED 63 to generate the secondary structure of our final vaccine. A graphical representation of the secondary structure features is shown in Fig. 5. The predicted secondary structure indicates that the final vaccine constitutes 10.8% alpha helix, 24.6% beta strand, and 64.6% coil. The solvent accessibility (ACC) and disorder regions (DISO) are predicted by the RaptorX Property server 64,65 (see Fig. 6; in the solvent accessibility plot, red marks exposed residues, yellow medium exposed residues and blue buried residues, while in the disorder plot ordered regions are in blue and disordered regions in red). Among the 694 amino acid residues in our final vaccine, 44% are predicted to be exposed, 27% medium exposed, and 27% buried. The peptides marked in red boxes in Fig. 6 are the B-cell epitopes, showing good surface accessibility, and they are not close to each other. A total of 60 residues (8%) are predicted to be located in disordered regions, showing good order in the structure.
Vaccine 3D structure modeling. We use the RaptorX server 66 to build the 3D structure model of our final vaccine. The protein structure with PDB ID 3j3vC is predicted by RaptorX to be the best template, based on which this server constructs the 3D structure model of our final vaccine (see Fig. 7). In this model, 100% (694) of the amino acids in the final vaccine are modeled in four domains. The P-value quantifies the likelihood of the predicted model being worse than models generated randomly. The P-value for this model is calculated to be 4.13 × 10^−14, a very low value, suggesting high quality of this 3D model. The unnormalized Global Distance Test (uGDT) score measures the absolute model quality.
The overall uGDT score is predicted to be 506; being greater than the threshold of 50 for a protein with more than 100 amino acid residues, it indicates that the 3D model of our final vaccine is good for further refinement.
Vaccine 3D structure refinement. We use the GalaxyRefine server 67 to refine the 3D structure model of our final vaccine. Among the 5 refined models predicted by GalaxyRefine, we choose Model 2, shown in Fig. 8, as our final vaccine model based on its model quality scores (see Table 11, which lists the quality scores of the models predicted by GalaxyRefine). The predicted B-cell epitopes are highlighted in yellow, showing good surface accessibility. The Global Distance Test-High Accuracy (GDT-HA) score measures the similarity between two protein structures; the GDT-HA score between the refined model and the initial model reaches a high value of 0.900, indicating high similarity. The distance between atoms is measured by the Root Mean Square Deviation (RMSD) score; a lower RMSD value suggests better stability, and an RMSD score between 0 and 1.2 is usually acceptable. This model has an RMSD score of 0.580, indicating a stable protein structure. The MolProbity score reflects the crystallographic resolution of the model; the MolProbity score of our vaccine model is 2.618, much lower than that of the initial model, showing that the refinement has lowered the critical errors of the 3D model. The Clash Score reflects the number of unfavorable all-atom steric overlaps, and the refinement reduced the clash score of the model from 137.8 to 33.5, improving the model stability to a high level. The Ramachandran plot score represents the size of the energetically favoured regions, and a value greater than 85% is usually acceptable; the Ramachandran plot score has been improved from 78.3 to 87.5% by the refinement. The quality scores of the refined model show good overall quality.
Vaccine 3D structure validation. We use ProSA-web 68 to validate the overall model quality of the refined final vaccine model. ProSA predicts a Z-score of −6.51 (see Fig. 9) for the refined model, which lies inside the score range of comparably sized native proteins, indicating good overall model quality. ProSA also checks the local model quality, and the residue scores are plotted in Fig. 9; negative values suggest no erroneous parts of the model structure. We also use the RAMPAGE server for the Ramachandran plot analysis, and it reveals a Ramachandran plot score of 87.5%, consistent with the GalaxyRefine results.
Conformational B-cell epitope prediction. The structure and folding of the new protein can result in new conformational B-cell epitopes, which require additional predictions. We use the ElliPro server 69 to predict the conformational B-cell epitopes in the refined 3D model. The ElliPro server predicts 6 new conformational B-cell epitopes involving 387 residues, with scores ranging from 0.531 to 0.963. The detailed 3D model and information on those 6 epitopes are shown in Fig. 10.
Codon optimization and in silico cloning. We analyze the cloning and expression efficiency and optimize the codon usage of the vaccine construct in E. coli (Escherichia coli) strain K12 with the Java Codon Adaptation Tool 70 . The length of the optimized codon sequence is 2082 nucleotides. Its Codon Adaptation Index (CAI) is 0.997, and the average GC content is 50.73%, indicating great potential for good expression of the final vaccine in the E. coli host.
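The two summary numbers quoted for the optimized sequence, length and GC content, are easy to recompute; the snippet below shows the calculation on a short placeholder sequence rather than the actual 2082-nt construct.

```python
def gc_content(seq: str) -> float:
    """Percentage of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

# Placeholder coding sequence; in practice this would be the JCat output.
example_cds = "ATGGCTAAAGGCGAAGCCGCTGCTAAAGGTCCGGGTCCGGGT"
print(f"{len(example_cds)} nt, GC = {gc_content(example_cds):.2f}%")
```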
After the optimization, we use the SnapGene tool to insert the codon sequence into the pET28a(+) vector for cloning 71 (see Fig. 11). The codon sequence of the final vaccine, the 2082 bp gene sequence generated by the JCat server, is presented in red, and the pET28a(+) expression vector is in black. The codon sequence is inserted between Eco53KI (188) and EcoRV (1573), forming a clone with a total length of 6066 bp.
Molecular docking. Molecular docking evaluates the interactions between a ligand molecule and a receptor molecule to check the stability and binding affinity of their docked complex. Toll-like receptor 4 (TLR4) is an important human protein for pathogen recognition and immune response, so we choose TLR4 as the immune receptor for the molecular docking. We use the ClusPro 2.0 server 72 to perform the molecular docking between the refined 3D model of our final vaccine and the TLR4 (PDB ID: 4G8A) immune receptor. Among all the generated docking models, we select the one with the lowest energy score of −1311.5 as the best docked complex, suggesting that the vaccine model occupies the receptor properly and indicating good binding affinity (see Fig. 12).
Molecular dynamics simulation of the vaccine-receptor complex. To evaluate the stability and physical movements of the vaccine-TLR4 docked complex 17,73 , we perform a molecular dynamics simulation with the iMOD server 74 . The main-chain deformability is shown in Fig. 13a; locations with hinges are regions of high deformability. The B-factor values calculated by normal mode analysis are proportional to the root mean square fluctuation (see Fig. 13b); B-factor values quantify the uncertainty of each atom. Figure 13c presents the eigenvalues, which are closely related to the energy required to deform the structure; the eigenvalue of the complex is 5.426 × 10^−6. The covariance matrix between pairs of residues is shown in Fig. 13d, indicating their correlations (red: correlated, white: uncorrelated, blue: anti-correlated). The elastic network model is shown in Fig. 13e, indicating the connections between atoms and springs. The molecular dynamics simulation results suggest that our vaccine model is stable.
RNA mutations. As the SARS-CoV-2 spreads all over the world, its RNA sequence is going through mutations that translate into different virus proteins. Such mutations can influence epitope-based vaccines, since a single amino acid difference can change the epitope prediction results. Therefore it is important to prove that the proposed final multi-epitope vaccine can tackle the mutations. With our DeepVacPred, we are also able to quickly examine the mutated protein sequences to search for new potential vaccine subunits. The RNA sequence we use to translate the spike protein and design the vaccine is from Wuhan, the place of origin of the virus 35 . The RNA mutations lead to the three most frequent changes in the spike protein of the SARS-CoV-2, and each change involves one amino acid 75 ; Table 12 shows the mutation details. The mutation at 614aa in the spike protein from D to G is the most frequent mutation, with 116 known isolates 75 . This mutation is very common in many cities in North America, while in Europe and South America the D614G mutation occurs in fewer than 10 isolates.
This change has no influence on the final multi-epitope vaccine since the vaccine does not contain the 614aa of the spike protein. With DeepVacPred, we are also able to quickly check whether a mutation creates new potential vaccine subunits: we input the mutated protein sequence into DeepVacPred and the predicted subunits are the same as for the original virus. At 476aa in the spike protein there is a frequent mutation from G to S, which occurs in 3 isolates from Washington DC 75 . This mutation has no influence on the final multi-epitope vaccine since the vaccine does not contain the 476aa of the spike protein; the mutated protein sequence input to DeepVacPred again yields the same predicted subunits as the original virus. At 483aa in the spike protein there is a frequent mutation from V to A, which occurs in 6 isolates from Washington DC 75 . This mutation has no influence on the final multi-epitope vaccine since the vaccine does not contain the 483aa of the spike protein, and the predicted subunits for the mutated sequence are again the same as for the original virus. In conclusion, our designed multi-epitope vaccine can tackle the current RNA mutations of the coronavirus, and these mutations create no new potential vaccine subunits.
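The per-position screen underlying these statements can be sketched as a simple interval check: a spike point mutation can only affect the construct if its position falls inside one of the selected subunit intervals. The subunit coordinates below are placeholders, not the actual Table 4 locations.

```python
# Placeholder (start, end) coordinates of selected 30aa subunits in the spike protein.
subunit_intervals = [(331, 360), (401, 430), (1051, 1080)]

# The three most frequent spike mutations discussed above (1-based positions).
mutations = {614: ("D", "G"), 476: ("G", "S"), 483: ("V", "A")}

for pos, (ref, alt) in mutations.items():
    hit = any(start <= pos <= end for start, end in subunit_intervals)
    status = "affects a selected subunit" if hit else "outside all selected subunits"
    print(f"{ref}{pos}{alt}: {status}")
```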
Discussion
In silico vaccine design is highly valuable for efficacy studies, and it strongly emphasizes multiple epitopes in the vaccine peptides. In this study, we develop DeepVacPred, an efficient vaccine subunit sieving framework that exploits an AI-based approach to rapidly select 26 potential vaccine subunit candidates, introducing a new way to achieve much higher speed and efficiency in in silico vaccine design. The goal is to directly predict the potential vaccine subunit sequences without the need to run a large number of different predictions and then evaluate and select the predicted results manually. With this AI-based framework, we are able to skip at least 95% of unnecessary predictions and let the computer analyze and select the best vaccine subunits for us. DeepVacPred predicts the 26 vaccine subunits within less than a second, which enables us to skip the most time-consuming part of the in silico vaccine design. With DeepVacPred, a researcher can construct a multi-epitope vaccine for a new virus and validate its quality within an hour. This approach can be further developed by enhancing the complexity and coverage of the dataset. In this study, we selected a subset of known epitopes and protective antigens to form the dataset used for training the DNN architecture, and we used the simple bridging of one B-cell epitope and one T-cell epitope. With a more comprehensive dataset and more possibilities of epitope combinations, we will be able to develop a better, more comprehensive and quicker vaccine design tool. In spite of the limited available datasets, the current framework can still deal with most situations and provide an efficacious vaccine design. The application of AI, and DNN methodology in particular, to protein sequence classification shows great potential. Most of the online tools rely on SVM learning approaches. For the highly popular protective antigen prediction tool Vaxijen 34 , the AUC of the ROC curve only reaches 0.743, which does not allow very accurate predictions. The dataset used to train Vaxijen contains only 200 proteins, so it becomes more time-consuming and challenging to rely on the SVM model as the number of discovered protective antigens increases. Consequently, the proposed DeepVacPred proves that a DNN can perform very accurate predictions with over 700,000 different proteins in the dataset. This study eventually results in a novel multi-epitope vaccine with a length of 694aa against the SARS-CoV-2. It contains an adjuvant and 11 subunits with 16 B-cell epitopes, 82 CTL epitopes and 89 HTL epitopes. It shows good antigenicity, population coverage and good physicochemical properties and structures, providing great potential for the next step of COVID-19 vaccine design with actual experiments and clinical studies. Furthermore, we trace the RNA mutations of the SARS-CoV-2 virus. Basically, the RNA mutations can result in one amino acid change in the spike protein or other related proteins. The proposed vaccine design framework can tackle the three most frequently observed mutations and can be extended to deal with other, potentially unknown mutations. The investigation of the RNA mutations also proves the high efficiency of our DeepVacPred. As future work, we will investigate novel AI algorithms and architectures capable of constructing multi-epitope vaccine designs that can overcome the unknown unknowns of virus evolution.
Methods
DNN design and training in DeepVacPred framework. Each data input to the DNN architecture is a sequence with a length of 45, converted from its protein sequence by Z-descriptors 31 and ACC transformation 32 . A Convolutional Neural Network (CNN) exhibits good performance in identifying and processing such vectors, while a multi-layer linear neural network is fully connected to the output of the CNN, forming a complex DNN to enhance the classification ability. Hence, our DNN is constructed from the following layers, and the parameters of each layer are decided using a random search to obtain high accuracy while maintaining good computing speed: i. CNN, in channels = 1, out channels = 16, kernel size = 3, stride = 2, padding = 1, Tanh.
Cytotoxic T lymphocytes (CTL) epitopes prediction. We use the NetMHCpan 4.1 server (http://www.cbs.dtu.dk/services/NetMHCpan/) to predict the CTL epitopes on each vaccine subunit candidate. We predict CTL epitopes with a length of 9aa, with all parameters set at default. NetMHCpan predicts peptide binding to any MHC class I molecule of known sequence using artificial neural networks (ANNs) trained on a combination of more than 850,000 quantitative Binding Affinity (BA) and Mass-Spectrometry Eluted Ligands (EL) peptides, providing reliable prediction results 43 .
Helper T lymphocytes (HTL) epitopes prediction. We use the NetMHCIIpan 4.0 server (http://www.cbs.dtu.dk/services/NetMHCIIpan/) to predict the HTL epitopes on each vaccine subunit candidate. We predict HTL epitopes with a length of 15aa, with all parameters set at default. NetMHCIIpan predicts peptide binding to any MHC II molecule of known sequence using artificial neural networks (ANNs) trained on an extensive dataset of over 500,000 measurements of Binding Affinity (BA) and Eluted Ligand mass spectrometry (EL), covering the three human HLA class II loci HLA-DR, HLA-DQ and HLA-DP, providing reliable prediction results 46 .
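As an illustration of the kind of network described in the DNN design subsection above, the sketch below builds a small PyTorch classifier. Only the first convolutional layer's hyper-parameters (1 → 16 channels, kernel 3, stride 2, padding 1, Tanh) come from the text; the use of Conv1d, the input length of 45, and all remaining layers are assumptions made for the example.

```python
import torch
import torch.nn as nn

class VaccineSubunitClassifier(nn.Module):
    """Illustrative CNN + fully connected classifier over ACC-transformed inputs."""

    def __init__(self, input_len: int = 45):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, stride=2, padding=1),  # layer i. from the text
            nn.BatchNorm1d(16),
            nn.Tanh(),
        )
        feat_len = (input_len + 1) // 2            # length after the stride-2 convolution
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * feat_len, 64),
            nn.Tanh(),
            nn.Linear(64, 1),                      # single logit: subunit vs. non-subunit
        )

    def forward(self, x):                          # x shape: (batch, 1, input_len)
        return torch.sigmoid(self.classifier(self.features(x)))

model = VaccineSubunitClassifier()
print(model(torch.randn(4, 1, 45)).shape)          # torch.Size([4, 1])
```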
Multi-epitope vaccine construction.
Antigenicity, allergenicity and solubility evaluation. The antigenicity of the final vaccine and every one of its subunits is predicted by the VaxiJen 2.0 server (http://www.ddg-pharmfac.net/vaxijen/VaxiJen/VaxiJen.html) and the AntigenPro server (http://scratch.proteomics.ics.uci.edu). VaxiJen is based on auto cross covariance (ACC) transformation of protein sequences into uniform vectors of principal amino acid properties 34 . AntigenPro is a sequence-based, alignment-free and pathogen-independent predictor of protein antigenicity 55 . The allergenicity of the final vaccine and every subunit is checked by the AllergenFP 1.0 server (http://ddg-pharmfac.net/AllergenFP/) and the AllerTOP 2.0 server (https://www.ddg-pharmfac.net/AllerTOP/). AllergenFP is a binary classifier between allergens and non-allergens; its dataset is described by five E-descriptors and the strings are transformed into uniform vectors by auto cross covariance (ACC) transformation 76 . AllerTOP is also based on ACC transformation and E-descriptors 56 . The solubility is evaluated by the SolPro server (http://scratch.proteomics.ics.uci.edu) and the Protein-sol server (https://protein-sol.manchester.ac.uk). SolPro is an SVM-based tool that predicts the solubility of a protein sequence with an overall accuracy of over 74% estimated by tenfold cross-validation 57 . Protein-sol is based on available data for Escherichia coli protein solubility in a cell-free expression system 58 .
Toxicity and physicochemical properties analysis. The toxicity of the final vaccine and every subunit is predicted by the ToxinPred server (http://crdd.osdd.net/raghava/toxinpred/). ToxinPred is based on an SVM model that classifies toxicity and non-toxicity; the dataset used in its method consists of 1805 toxic peptides (≤ 35 residues) 60 . The physicochemical properties of the final vaccine and every subunit are predicted by the ExPASy ProtParam server (https://web.expasy.org/protparam/). The physicochemical properties include hydropathicity, charge, half-life, instability index, pI (theoretical isoelectric point value) and molecular weight 61 .
Secondary structure prediction. PSIPRED is used for the secondary structure prediction of our final vaccine (http://bioinf.cs.ucl.ac.uk/psipred/). PSIPRED incorporates two feed-forward neural networks that analyze the output obtained from PSI-BLAST (Position-Specific Iterated BLAST). It achieves an average Q3 score of 81.6%, allowing accurate secondary structure prediction 63 . We also use the RaptorX Property web server (http://raptorx.uchicago.edu/StructurePropertyPred/predict/) to predict the solvent accessibility (ACC) and disorder regions (DISO). RaptorX employs a machine learning model called DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC), and disorder regions (DISO) simultaneously 65 .
Vaccine 3D structure modeling. The 3D model of the final vaccine is constructed by the RaptorX server (http://raptorx.uchicago.edu/ContactMap). RaptorX provides distance-based protein folding powered by deep learning. This server was officially ranked 1st in contact prediction in both CASP12 and CASP13 and initiated the revolution of protein structure prediction by deep learning 66 .
Vaccine 3D structure refinement. The 3D model built by the RaptorX server is refined by GalaxyRefine (http://galaxy.seoklab.org/cgi-bin/submit.cgi?type=REFINE).
GalaxyRefine first rebuilds the side chains, then performs side-chain repacking and subsequent overall structure relaxation by molecular dynamics simulation. According to the CASP10 assessment, the GalaxyRefine method performed best in improving local structure quality 67 . The quality of the refined model is evaluated in terms of its GDT-HA score, RMSD score, MolProbity score, clash score and Ramachandran plot score.
Vaccine 3D structure validation. The final refined 3D model of our final vaccine is validated by the ProSA-web server (https://prosa.services.came.sbg.ac.at/prosa.php). ProSA calculates an overall quality score for a specific input structure; if this score is outside the range characteristic of native proteins, the structure probably contains errors. A plot of local quality scores points to problematic parts of the model, which are also highlighted in a 3D molecule viewer to facilitate their detection 68 .
Conformational B-cell epitope prediction. The conformational B-cell epitopes in the refined final vaccine 3D structure model are predicted by the ElliPro server (http://tools.iedb.org/ellipro). ElliPro is based on the geometrical properties of the protein structure. Among the current conformational B-cell epitope prediction tools, ElliPro has the best AUC score of 0.732, making it a very reliable tool for identifying antibody epitopes in protein antigens 69 .
Molecular docking. The molecular docking is done by the ClusPro 2.0 server (https://cluspro.bu.edu). ClusPro is a widely used tool for protein-protein docking; docking with each energy parameter set results in ten models defined by the centers of highly populated clusters of low-energy docked structures 72 . We choose TLR4 (PDB ID: 4G8A) as the immune receptor and select the docked complex with the lowest energy score.
Molecular dynamics simulation of the vaccine-receptor complex. The molecular dynamics simulation is done by the iMOD server (iMODS) (http://imods.chaconlab.org). iMODS facilitates the exploration of normal modes and generates feasible transition pathways between two homologous structures 74 . The iMOD server evaluates protein stability by computing its internal coordinates through normal mode analysis (NMA). The stability of the protein is represented in terms of its main-chain deformability plot, B-factor values, eigenvalue, covariance matrix and elastic network model.
Data availability
We obtained the genome sequence and the spike protein sequence of SARS-CoV-2 from the NCBI database (https://www.ncbi.nlm.nih.gov) with accession number MN908947 and protein ID QHD43416.1. The protein data we collected and processed to train DeepVacPred is available on github.com (https://github.com/zikunyang/DCVST).
Deep Learning-Based Detection of Inappropriate Speech Content for Film Censorship
Audible content has become an effective tool for shaping one's personality and character due to the ease of access to a huge amount of audible content, whether as independent audio files or as the audio of online videos, movies, and television programs. There is a strong need to filter inappropriate audible content in easily accessible videos and films that are likely to contain inappropriate speech. With this in view, broadcasting and online video/audio platform companies hire a lot of manpower to detect foul voices prior to censorship. The process has a large cost in terms of manpower, time and financial resources, in addition to inaccurate detection of foul voices due to staff fatigue and the weakness of the human visual and auditory systems in long, monotonous tasks. As such, this paper proposes an intelligent deep learning-based system for film censorship through a fast and accurate detection and localization approach using advanced deep Convolutional Neural Networks (CNNs). A dataset of foul language containing isolated word samples and continuous speech was collected, annotated, processed, and analyzed for the development of automated detection of inappropriate speech content. The results indicated the feasibility of the suggested system by reporting a high rate of detection of inappropriate spoken terms. The proposed system outperformed state-of-the-art baseline algorithms on the novel foul language dataset in terms of macro average AUC (93.85%), weighted average AUC (94.58%), and all other metrics such as F1-score. Additionally, the proposed acoustic system outperformed an ASR-based system for profanity detection based on evaluation metrics including AUC, accuracy, precision, and F1-score, and it was proven to be faster than human manual screening and detection of audible content for film censorship.
I. INTRODUCTION
With the increased exposure to portable and immediate screen time sources such as televisions, computers and smartphones, filtering of audio and visual contents is becoming crucial. This is because media commonly include offensive and sensitive contents, e.g., foul language, nudity, and sexually explicit content, which could attract the attention of users in entertainment videos, games and movies available through broadcasting channels or at online platforms. Tuttle [1] stated that most movies incorporate the usage of profanity that could negatively affect society [2] and that she believed this frequency would increase over the years. Broadcasting companies and media-sharing platforms are responsible for ensuring the appropriateness of contents shared with the public through their respective channels. In the case of language, censorship is a complex filtering process that provides language content appropriate to consumers due to the restrictions.
One of the earlier methods of applying KWS involved the usage of large-vocabulary continuous speech recognition (LVCSR) systems [13], [14].
Such systems were deployed to 97 decode speech signal to allow keyword to be identified in the 98 generated lattices (i.e., in the phonetic units' representations 99 of different sequences, given the speech signal, were likely 100 sufficient). This approach is superior in the sense that it allows 101 flexibility to handle changing or non-predefined keywords 102 [15], [16], [17] (although often with performance drop when 103 keywords are out of vocabulary [18]). 104 The main weakness of LVCSR-based KWS systems lies in 105 the computational complexity dimension. Specifically, these 106 systems require high computational resources in order to 107 generate complex lattices [16], [19], which introduces latency 108 [20]. Therefore, this approach is not suitable for the applica-109 tion of real time speech recognition and monitoring. For the 110 application of voice assistants and machine wake-up words, 111 the high computational resource and memory requirements 112 also place constraints on the usage of LVCSR systems [19], 113 [21], [22]. 114 As deep learning techniques mature over the years, usages 115 of deep spoken KWS systems [23], [24], [25], [26] have 116 increased due to progressively improving performance in 117 terms of efficiency and accuracy, in voice assistants for 118 instance. The sequence of word posterior probabilities gen-119 erated by deep neural networks is processed to identify the 120 possible existence of keywords directly without intervention 121 of any Hidden Markov Model (HMM) or Gaussian Mixture 122 Model (GMM). This deep KWS method has been attracting 123 attention due to flexible complexity of DNN generating the 124 posteriors, or acoustic model, which is dependent on compu-125 tational resource availability [27], [28], [29]. 126 Deep spoken keyword spotting system [30], [31], [32] 127 typically contains three main blocks [9]: 1) the speech feature 128 extractor that converts the input signal to a compact speech 129 representation, 2) the deep learning-based acoustic model 130 that generates posteriors over the keyword and filler (non-131 keyword) classes based on the speech features, and 3) the 132 posterior handler that processes the temporal sequence of 133 posteriors to determine the possible existence of keywords 134 in the input signal. 135 Mel-scale-related features, low-precision features, learn-136 able filter-bank features, and other features are the most rele-137 vant speech features used in deep KWS systems [9]. Speech 138 features based on the perceptually-motivated Mel-scale filter-139 bank, e.g., log-Mel spectral coefficients and Mel-frequency 140 cepstral coefficients (MFCCs), have been commonly utilized 141 in the areas of ASR and KWS. Despite the many attempts 142 to learn optimal, alternative representations from speech sig-143 nals, Mel-scale-related features is still a safe, solid, and com-144 petitive choice to date [33]. 145 In most deep KWS systems, both types of speech features 146 are normalized to have zero mean and unit standard deviation 147 prior to being input to the acoustic model in order to stabilize 148 and accelerate training and improve model generalization 149 [34]. The most employed speech feature type in deep KWS 150 MFCCs with temporal context are used in [34], [35], [36], 152 [37], and [38]. Particularly, application of discrete cosine [56] prior to processing. 
Next, 208 smoothed word posteriors are commonly utilized to make 209 the decision of whether a keyword is present, either through 210 comparison with a sensitivity threshold [57] or by selecting 211 the class with highest posterior within a time sliding win-212 dow [58]. One disadvantage of streaming mode processing 213 is that false detection may occur when the same keyword 214 realization is detected more than once in the smoothed pos-215 terior sequence as consecutive input segments may cover 216 parts of the same keyword realization. Post processing tech-217 nique would need to be employed in order to avoid this 218 problem [26]. 219 The current trend involves usage of KWS for voice activa-220 tion voice assistants [59] and Voice Control of Hearing Assis-221 tive Devices [54]. Hence, the literature on automated speech 222 recognition models using deep learning techniques mostly 223 revolved around inoffensive language identification only. 224 For instance, conversational and read speech dataset clear 225 of profane language utterances such as LibriSpeech [60], 226 Google's voice search traffic dataset [61], Google commands 227 dataset [52], spoken digits dataset [62], and speech emotions 228 dataset of conversational speech dialogues [63], [64] have 229 been explored in recent years. 230 In 2020, [65] researched on the efficiency of foul lan-231 guage detection using pre-trained CNNs (e.g., Alexnet and 232 Resnet50). The proposed solutions had inaccurate detection 233 and high computational cost due to large number of net-234 work parameters, causing the system to fail to meet the 235 requirements for real time usages, i.e., real time monitoring 236 for profanity filtering in videos. Another work studied the 237 categorization of isolated foul words versus isolated nor-238 mal speech using a novel foul language dataset. Despite 239 the acceptable performance on the tested dataset, the detec-240 tion and localization performances within audio samples of 241 the proposed methods (CNN and RNN) on other dataset 242 consisting of conversational speech of continuous audios 243 were not explored [66], [67]. In brief, the feasibility of 244 spoken profanity detection and localization within audio 245 files has not been proven for real time audio filtering 246 applications. 247 This experiment was carried out on English profanities 248 and its derivatives. The model utilizes the acoustic features 249 of profanities for the purpose of detecting profane words 250 and localize it within a continuous audio sample, unlike 251 Automatic speech recognition (ASR) models that transcript 252 any spoken words based on the language model that are 253 used as a part of the whole ASR system. However, the use 254 of ASR systems requires huge computational cost for the 255 use of a large dataset. Furthermore, ASR systems consist 256 of several sequenced stages including acoustic models and 257 language models. In the scenario of detecting and localizing 258 inappropriate speech content within a continuous audio input, 259 requires an additional text detection model. Consequently, 260 ASR-based systems for the detection of profanities suffers 261 of latency. Additionally, the use of sequenced models could 262 318 The datasets utilized in this study of English profanities are 319 described in this section. Next, the methodology is explained 320 in detail. Firstly, feature extraction process in Log-Mel spec-321 trogram methods applied on raw audio samples is performed. 
Secondly, an E2E CNN is used for feature learning. Thirdly, posterior handling methods are applied for further processing. A short review of each method and its function is summarized in the following subsections.
The TAPAD dataset was augmented to increase the number of samples eight times, from 4511 foul samples to 36088 foul samples, to enhance the models' robustness to noise, avoid over-fitting, and improve the models' generalization. The augmented dataset was then used to train the proposed and baseline models. The augmentation was performed using the same approaches used for the MMUTM dataset, described in the previous part.
This dataset is a novel, challenging database used only for testing and model evaluation purposes. It consists of six real-world audio recordings retrieved from videos available on the internet: four of the samples were retrieved from YouTube videos, while the other two are full films. Full films are used in the evaluation because this research is designed to provide real-time monitoring and censorship of inappropriate speech content in films. As described in Table 1, the total length of the testing videos is about four hours, seven minutes, and nineteen seconds, which is ∼247.32 minutes in total. The testing dataset is rich in foul language within normal conversational speech, as it contains 1322 profanities, all of which also exist in the MMUTM and TAPAD training datasets. The rate of foul words per minute is what makes this dataset challenging, as there are about 5.345 offensive words per minute. Additionally, this dataset consists of real recordings used directly to test and evaluate the trained model, which adds to its difficulty. The only pre-processing applied to this dataset is setting the audio file properties to a sampling rate of 16 kHz, 1 channel, and 19-bit PCM. This dataset was purposely created for this research; therefore, we labeled the whole dataset by manually finding the foul words within each audio file and the corresponding timestamps at which the profane words occur. The annotations of this dataset thus consist of the foul words and their timestamps, as this work aims to predict the foul word and localize it within a long audio file. Hence, the parts of the audio samples that were not labeled as foul are considered normal conversational speech by default.
Features were extracted using 101 Log-Mel frequency spectrogram coefficients. Inappropriate and safe speech spectrogram analysis was performed using the following parameters: 0.03 s frame duration, 1 s segment duration, 0.015 s overlap window between frames, and 40 frequency bands. Furthermore, a lightweight model with small-sized filters was proposed in order to minimize the computational resource requirements and allow the target application of real-time film audio filtering to be achieved. Therefore, the generated Log-Mel spectrogram images have a small size, 40-by-101 specifically, where 40 is the number of Mel frequency bands (40 bands × 400 Hz = 16 kHz) and 101 is the number of spectrogram frames used.
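A minimal sketch of this feature-extraction step with librosa is shown below, using the stated parameters (16 kHz audio, 1-second segments, 30-ms frames, 15-ms shift, 40 Mel bands); the file name is a placeholder, and the exact number of time frames depends on hop length and padding.

```python
import numpy as np
import librosa

sr = 16_000
# Placeholder file; in the real pipeline this is one 1-second windowed segment.
segment, _ = librosa.load("segment.wav", sr=sr, mono=True, duration=1.0)

mel = librosa.feature.melspectrogram(
    y=segment, sr=sr,
    n_fft=int(0.030 * sr),        # 30-ms analysis window
    hop_length=int(0.015 * sr),   # 15-ms frame shift
    n_mels=40,                    # 40 Mel frequency bands
)
log_mel = librosa.power_to_db(mel, ref=np.max)
print(log_mel.shape)              # (40, n_frames) image fed to the CNN
```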
An example of the raw signals of two profane words and their corresponding spectrograms is shown in Figure 1.
In the case of the supervised CNN model, E2E learning is done to fine-tune the parameters of the whole CNN. Since spectrogram images and labels were available during the training process, supervised learning was applied. The CNN is composed of convolutional, fully connected, pooling and batch normalization layers. For the detection of distinct signals, the filters present in CNNs were passed over the input images in horizontal and vertical directions. Mapping of the image feature portions of the signals was then performed, and the classifiers were trained on the target task. Extraction of features of the input images and their pixel relationships was sustained by obtaining image features via small squares of the input data using the convolution layers; a mathematical operation that involves two inputs, i.e., the image matrix and a filter/kernel, was applied for the extraction. Reduction of the parameters of a specific image was allowed by the pooling layers. A common instance is spatial pooling, i.e., downsampling or sub-sampling, which retains vital information while reducing the dimensionality of each map. This pooling type can be categorized into (i) max pooling and other variants such as average pooling. In order to classify outputs related to the target task, an activation method involving SoftMax or sigmoid can be applied. Table 3 shows the details of the proposed CNN model architecture.
Each windowed sub-sample was then input to the CNN model for class predictions based on the posterior probability, e.g., the class with the highest posterior probability, or a positive detection if the decision threshold was exceeded. The predicted class of the sub-sample is then assigned to the corresponding timestamps generated during the windowing phase. A recognized keyword is localized within a long input audio sample through the timestamps of the sub-samples in which an identified profane word occurs. Although continuous speech or audio samples are used as input, the windowing process causes the inference over windowed samples to be considered static mode. This mode is used due to its simplicity, and it produces a low number of false positives compared to dynamic mode; dynamic mode requires additional post-processing approaches to avoid the issue of an increased false positive rate [9].
The experimental setup, performance metrics and testing results of the proposed system are discussed in this section, together with the experimental settings and procedures utilized for the application of automated detection of profane speech content in film censorship. The architecture of the proposed foul language detector system is illustrated in Figure 2. Feature extraction was performed on isolated English-language samples to obtain the Log-Mel spectral features, which were then fed into the CNNs for model training. Similarly, the test features were obtained from audio samples of real long audio files and were used to evaluate the performance of the trained models. The expected outputs of the system are the prediction probabilities of the recognized profanity and the corresponding timestamps, allowing localization of the foul word detection within test samples for film filtering.
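The windowed, static-mode inference just described can be summarized as a short loop; `classify` stands in for feature extraction plus the CNN forward pass and is a placeholder, as are the default segment length and threshold.

```python
def detect_profanity(audio, sr, classify, seg_len=0.5, overlap=0.0, threshold=0.5):
    """Return (start_s, end_s, confidence) for every segment flagged as foul."""
    win = int(sr * seg_len)
    hop = max(int(win * (1.0 - overlap)), 1)
    detections = []
    for start in range(0, max(len(audio) - win + 1, 1), hop):
        segment = audio[start:start + win]
        prob = classify(segment)          # posterior probability of the "foul" class
        if prob >= threshold:
            detections.append((start / sr, (start + win) / sr, float(prob)))
    return detections
```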
Hence, this work is not 609 an Automatic Speech Recognition (ASR), where speech con-610 tent if transcribed into the corresponding words. Additionally, 611 the proposed work is not a simple audio recognition where a 612 single spoken term from the same pool of dataset is fed into 613 a model and classified into the corresponding label, as the 614 test samples used is a continuous audio input of real-world 615 samples that are out of the training dataset pool. 616 VOLUME 10, 2022 In the equations, N tp , N fp , N fn , and N total referred to the 671 number of true positives, false positives, false negatives, 672 and total samples in all the segments respectively. Further-673 more, the performance was evaluated using area under curve 674 (AUC) and detection error trade-off (DET) curve. AUC was 675 computed after plotting the receiver operating characteristic 676 (ROC) curve which used FPR as the horizontal axis and TPR 677 as the vertical axis. This measurement reflects the robustness 678 of a binary classifier as the sensitivity threshold is varied. 679 On the contrary, DET is a graphical plot of error rates for 680 binary classification systems, i.e., graph of false rejection rate 681 (FNR) against false alarms rate (FPR). 683 The audio-based foul word recognition model proposed in 684 this research was designed to be applied for automated cen-685 sorship of audio channels of films. The experimental results 686 were obtained by running the novel test dataset, comprising 687 of continuous video files with high inappropriate word rates 688 per minute, through the trained models. Performance of the 689 model was determined using performance metrics such as 690 accuracy, F1 score, TPR, FPR and AUC. The results are dis-691 cussing the model's performance based on segment lengths, 692 probability thresholds, and process time figures. 694 The experiment includes a windowing and segmentation pro-695 cess for the lengthy continuous test samples, before it goes to 696 feature extraction, then inference and detection stages. There-697 fore, the segment length affects the detection and evaluation 698 metrics. Hence, all the test samples were evaluated using 699 three different segment lengths of 0.3, 0.4, and 0.5 seconds 700 to find the optimized segment length, that produce the best 701 and optimal system/model metrics for the detection of foul 702 languages. Although all the test samples were tested based 703 on different segment lengths, this paper will only demonstrate 704 the effect of segment length on foul language detection within 705 continuous audio samples, by highlighting the performance 706 metrics of two samples that are sample 1 and sample 2 at 707 a single probability threshold (th = 0.50) and three differ-708 ent segment lengths. Table 4 and Table 5 present the foul 709 language detection model performance using two samples 710 (sample 1 and 2), while Figure 3 and Figure 4 highlights 711 the two samples performance based on average accuracy and 712 F1-score, respectively. 713 Following Table 4 and Table 5, proposed model performed 714 positively in the detection of foul language with high average 715 accuracy, TPR, precision and F1-score, with low FNR and 716 FPR. For example, samples 1 achieved 20.75%, 11.32%, and 717 3.83% FNR, for segment length of 0.3, 0.4, and 0.5 segments 718 length respectively. Regardless, the model performance was 719 Table 4 and Table 5, all the performance 725 metrics were improved using larger segmentation length. 
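The segment-level metrics defined above can be computed directly from the four counts; the numbers below are placeholders used only to show the arithmetic.

```python
def segment_metrics(n_tp, n_fp, n_fn, n_total):
    n_tn = n_total - n_tp - n_fp - n_fn
    tpr = n_tp / (n_tp + n_fn)                 # recall / sensitivity
    fpr = n_fp / (n_fp + n_tn)
    precision = n_tp / (n_tp + n_fp)
    accuracy = (n_tp + n_tn) / n_total
    f1 = 2 * precision * tpr / (precision + tpr)
    return {"TPR": tpr, "FNR": 1 - tpr, "FPR": fpr,
            "precision": precision, "accuracy": accuracy, "F1": f1}

print(segment_metrics(n_tp=120, n_fp=9, n_fn=6, n_total=740))
```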
726 For example, TPR/recall and precision were improved and 727 TABLE 6. Overlap effect on performance metrics of sample 1 at 0.5 confidence score and 0.5 segment length. TABLE 7. Overlap effect on performance metrics of sample 2 at 0.5 confidence score and 0.5 segment length. increased drastically with longer window length, while FNR 728 and FPR were improved and drops hugely at 0.5 second 729 segment length. F1-score and average accuracy charts and figures show that 736 increasing the segment length contributes into increment of 737 model performance metrics, which are increasing accuracy, 738 recall, precision, and F1-score. Consequently, the proposed 739 system achieved the best performance on profane language 740 detection using 0.5 segment length, where model achieved 741 a high F1-score 95.33% and 85.93% for the sample and 742 sample 2 test samples. Similarly, the model produced a high 743 average accuracy of 98.31% and 95.71% for sample 1 and 744 sample 2, successively. Therefore, 0.5 seconds considered as 745 the optimal window length for the developed system. Hence, 746 the proposed model was evaluated using 0.5 second segment 747 length and the following detailed results were obtained based 748 on the optimal window duration. 749 750 The experiment includes an automated windowing and seg-751 mentation process for continuous test samples. Therefore, the 752 fixed segment length affects the detection and evaluation met-753 rics for words that are longer than window length, in addition 754 to some keywords that might be spitted into two segments 755 due to the automated and fixed windowing process. Hence, 756 an overlap time was introduced to mitigate the error arises 757 from this issue and find the optimal performance of pro-758 fanities detection in a continuous sample with an automated 759 and fixed windowing process. Although all the test samples 760 were tested with and without overlap time, this paper only 761 demonstrates the effect of overlap length for foul words detec-762 tion within continuous audio, by detailing the performance 763 metrics of two samples that are sample 1 and sample 2 at 764 a single probability threshold (th = 0.50) and 0.5 segment 765 lengths. 766 VOLUME 10, 2022 Table 6 and Table 7, Table 6 and Table 7, all the performance metrics were The performance assessment of the proposed models on the 800 detection of foul language for the six test samples is pre-801 sented in Table 8 through Table 13 sample 6. Although model test was done using threshold zero 803 through one, the tables present 0.1, 0.25, and 0.5 through 804 0.9 probability threshold. This is due to the common concern 805 of threshold performance above the common 0.5 confidence 806 score. However, all the thresholds starting from zero were 807 used when evaluating the model using ROC and DET curves 808 that are highlighted in the subsequent section. The results of 809 all samples' performance are presented due to the concern 810 of highlighting the model performance depending on differ-811 ent real-world samples, as different real time samples will 812 exhibit different characteristics like audio quality, noise, pitch 813 speed, etc. These characteristics produces different model's 814 response in terms of target keyword detection. Therefore, 815 contribute to the average numbers based on the weight of the 864 foul words within each sample compared to total foul words 865 for the whole dataset. The average metrics were computed 866 for all the thresholds. 
In these averages, however, we highlight only the same thresholds used in the threshold analysis tables, so that the model's varying performance can be examined across thresholds, for instance how precision is affected as the threshold changes. An operating threshold can then be chosen according to the metrics that matter most for profanity detection, such as minimizing FNR or minimizing FPR.

Looking at the average metrics, it is noteworthy that increasing the threshold contributes to a slight drop in average accuracy (from 97.47% at the 0.1 threshold to 95.34% at the 0.9 threshold for the weighted average) and in TPR/recall (from 95.68% at 0.1 to 92.16% at 0.9 for the weighted average). Correspondingly, FNR increases as the threshold rises (from 4.32% at 0.1 to 7.84% at 0.9 for the weighted average). In contrast, precision increases markedly with the threshold (from 85.12% at 0.1 to 93.75% at 0.9 for the weighted average), so a large drop in false detections (FPR) occurred as the threshold increased (from 14.88% at 0.1 to 6.25% at 0.9 for the weighted average). The F1-score, which is calculated from precision and recall, varies with the threshold between 89.96% and 93.25%. Choosing the operating point depends on the rates the user wishes to achieve; for example, if the F1-score matters more than all other metrics, the 0.7 confidence score is the best operating point, as it yields the highest weighted-average F1-score of around 93.25%.

The ROC curve, AUC, and DET curve are another way of visualizing the performance of the model at all operating points. Figure 5 presents the ROC curves for all samples and the averaged figures, in which the operating curves and the relationship between TPR and FPR can be visually interpreted.

Table 17 shows the inference time of the state-of-the-art CNN model and the overall system process time from the input of continuous speech, through segmentation and detection, to timestamp estimation. The proposed CNN has an inference time of 2.63 ms (0.00263 seconds), measured from the time step at which the spectrogram image sample is applied at the input to the time step of the model's prediction; the reason is the small number of parameters of the lightweight CNN, which uses small filters and few layers. According to Table 17, the average process time per second of the long audio samples, defined as the average time taken to carry the input through all steps from segmentation to automated detection per second of audio, is 0.46 seconds. This means each second of the long audio is processed completely in 0.46 seconds, which makes the process real time and even faster than manual human detection, filtering, and censorship of inappropriate speech content. For example, sample 1 is 371 seconds long in total, yet the average time to pass through the developed automated detection process for film censorship is around 170.66 seconds, less than half the length of the original sample. Hence, the proposed system saves time compared to a manual detection and censorship process; the windowed processing and this timing calculation are sketched below.
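The windowing and the process-time figures above can be illustrated with a short sketch; the sample rate, overlap, and per-segment processing function are assumptions for illustration, not the authors' implementation:

```python
import time
import numpy as np

SR = 16000            # assumed sample rate (Hz); not specified in this section
SEG = 0.5             # segment length in seconds (the reported optimum)
OVERLAP = 0.1         # assumed overlap in seconds, for illustration only

def segments(audio, sr=SR, seg=SEG, overlap=OVERLAP):
    """Yield (start_time, window) pairs of fixed-length, overlapping windows."""
    step = int((seg - overlap) * sr)
    win = int(seg * sr)
    for start in range(0, max(1, len(audio) - win + 1), step):
        yield start / sr, audio[start:start + win]

def process(segment):
    """Placeholder for feature extraction plus CNN inference on one segment."""
    return segment.mean() > 0.5   # dummy decision, not the real model

audio = np.random.rand(371 * SR)  # e.g. a 371-second sample, as in sample 1
t0 = time.time()
detections = [(ts, process(seg)) for ts, seg in segments(audio)]
elapsed = time.time() - t0
rtf = elapsed / (len(audio) / SR)  # processing time per second of audio
print(f"processed {len(detections)} segments, {rtf:.2f} s per audio second")
```

With the reported 0.46 seconds of processing per second of audio, the printed ratio stays below 1, which is what makes the pipeline usable for on-the-fly screening.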
Comparing Table 20 and Table 14, it can be noted that the current CNN model outperforms the baseline 2 model on all the evaluation metrics. Based on the macro average metrics for both models, the current model outperformed baseline 2 by around 6% in average accuracy, by 2% to 5% in recall/TPR and FNR, by 1% to 3% in precision, and by about 1% to 3% in F1-score. Table 21 presents the weighted average metrics of baseline 2. In Table 22, we show the outperforming results of the proposed system based on the AUC metric, which is significantly better than that of the other baseline systems: the proposed model outperformed the baseline 1 algorithms by 2.55% macro average AUC and 0.64% weighted average AUC, while the current model outperformed the baseline 2 algorithms by 4.58% macro average AUC and 2.22% weighted average AUC. Thus, the current model outperformed the baseline models in terms of AUC and all other metrics.

Given the scarcity of experiments on inappropriate speech content detection, the first part of this subsection presented a comparative analysis against acoustic-based systems for profanity detection. This part benchmarks the current work against previous work that uses ASR systems for the detection of profanities. Recent research proposed a solution for analyzing video that identifies profane content through text detection approaches after the videos are transcribed by means of ASR systems [70]. The audio samples were extracted from the input video and then converted into text using a Speech-to-Text library for the detection and localization of profane words; the resulting text was checked against a profanity word list. That system was tested with 50 videos collected from various sources such as Facebook and YouTube, and some of the videos were made by the authors and contained profane keywords. The total length of the test samples was only 1734 seconds (about 28.9 minutes). The profanity detection built on ASR and text detection approaches achieved an accuracy of around 85.03% on the reported dataset [70].

The reported ASR-based system, consisting of two stages (a Speech-to-Text phase and a text detection approach), was retrained on the list of profanities proposed in this work in order to benchmark the current work against an ASR-based system, and it was then tested using the six video samples. In contrast to the single acoustic model used here, the ASR-based system requires two inferences, one for the Speech-to-Text model and one for the text detection model.

This experiment was performed on a particular dataset of spoken English profane words, with positive outcomes for any derivation of the profanities. Nevertheless, the proposed system's performance may vary with a different range of English verbal words or with spoken utterances from a different language, as the proposed model uses the direct acoustic features of utterances for detection, unlike ASR systems, in which spoken terms are transcribed based on the language models used and which can accommodate a wider range of keywords. However, the use of ASR models suffers from issues mainly concerning the need for a large dataset and a large computational cost, and these two major issues are addressed in this work in developing the profane word detector.
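For contrast with the acoustic approach, the text-checking stage of the ASR-based baseline described above can be sketched as follows; the transcript structure and the profanity list entries are hypothetical, and the Speech-to-Text step itself is omitted:

```python
# Sketch of the ASR baseline's second stage: check a word-level transcript
# (word, start_time, end_time) against a profanity list. Illustrative only.

PROFANITY_LIST = {"badword1", "badword2"}   # placeholder entries

def flag_profanities(transcript):
    """Return (word, start, end) tuples whose word appears on the list."""
    return [(w, s, e) for w, s, e in transcript
            if w.lower().strip(".,!?") in PROFANITY_LIST]

transcript = [("this", 0.0, 0.2), ("badword1", 0.2, 0.6), ("again", 0.6, 0.9)]
print(flag_profanities(transcript))   # -> [("badword1", 0.2, 0.6)]
```

A transcription error in the first stage propagates directly into this check, which is the cascading weakness discussed next.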
Additionally, ASR systems use several stages of models, such as acoustic models and language models, and in this context an additional text detector must be applied to locate the inappropriate speech content. Therefore, ASR-based systems for the detection of profane words suffer a drop in performance metrics due to the sequenced models, as a failure in one stage leads to a performance drop in the following stage.

The proposed CNN model for profanity detection and censorship was further analyzed and compared with four different pre-trained CNN models, namely MobileNet [71], Inception-v3 [72], AlexNet [73], and ResNet-50 [74], as detailed in Table 24 and Table 25, where the models are compared.

For future work, it is recommended to consider several developments for censorship and film rating research. The context in which a keyword is uttered is crucial to defining the set of words that could represent the keyword; therefore, considering the sequence and context of uttered words is recommended in future works.

This research proposed the implementation of a CNN model for the detection and localization of spoken foul language in continuous speech samples, tested in a static keyword detection mode, for automated video/audio/film censorship. The current work utilizes a novel dataset of foul language to train the model: the MMUTM and TAPAD datasets were manually labeled with two annotations (Foul vs. Normal). The CNN model was trained to classify the labels of pre-segmented isolated samples, whereas it was tested with continuous incoming audio samples for offensive language identification. The novel test dataset consists of several real-world video samples with a high rate of offensive words per minute. The model input was features extracted from the audio samples in the form of Log-Mel spectrogram images, while the output of the whole system contains the detected foul word and the timestamps of profanity occurrences within the lengthy audio samples.

The proposed system performed differently depending on the properties and characteristics of the test samples. However, the overall foul language detection system performed positively, with macro average accuracy ranging from 95.11% to 97.67% and weighted average accuracy from 95.34% to 97.47% over all operating thresholds. Furthermore, the reported F1-score showed a balance between the sensitivity and specificity of the proposed CNN, ranging from 88.54% to 90.45% macro averaged and from 89.96% to 92.91% weighted averaged. Additionally, the current model achieved a high AUC of the ROC curve, around 93.85% macro averaged and 94.58% weighted averaged.

The proposed lightweight CNN model was benchmarked against two baseline models that use only acoustic features on the novel offensive language dataset, and the current model outperformed the acoustic baseline algorithms in terms of performance metrics. We showed the outperforming results of the proposed system based on the AUC metric, which is significantly better than those of the baseline models: the proposed model outperformed the baseline 1 algorithms by 2.55% macro average AUC and 0.64% weighted average AUC.
On the other hand, the proposed system outperformed the baseline 2 model by 4.58% macro average AUC and 2.22% weighted average AUC. Thus, the current model outperformed the baseline models in terms of AUC and all other metrics. Additionally, the proposed acoustic system outperformed the ASR-based system for profanity detection on the evaluation metrics, including AUC, accuracy, precision, and F1-score.

This work also demonstrated that the proposed system for audible speech content processing and detection is efficient in terms of inference and overall process speed. The proposed CNN has an inference time of 2.63 ms (0.00263 seconds), which is attributed to the lightweight structure of the developed model. Furthermore, the average time taken to process the input sample through all steps, from segmentation to automated detection, is 0.46 seconds per second of audio. This means each second of the long audio is processed completely in 0.46 seconds, which makes the process real time and even faster than manual human detection, filtering, and censorship of inappropriate speech content in films. This is attributed to the lightweight CNN architecture, which makes processing and inference fast and suitable for content screening, filtering, and censorship.
8,005.2
2022-01-01T00:00:00.000
[ "Computer Science", "Law" ]
Existence of solutions to path-dependent kinetic equations and related forward - backward systems This paper is devoted to path-dependent kinetics equations arising, in particular, from the analysis of the coupled backward - forward systems of equations of mean field games. We present local well-posedness, global existence and some regularity results for these equations. Introduction A deterministic dynamic in B * can be naturally specified by a vectorvalued ordinary differential equatioṅ µ t = Ψ(t, µ t ) (1.1) with a given initial value µ ∈ B * , where the mapping (t, η) → Ψ(t, η) is from R + × B * to B * . More generally, one often meets the situations whenμ does not belong to B * , but to some its extension. Namely, let D be a dense subset of B, which is itself a Banach space with the norm D ≥ B . A deterministic dynamic in B * can be specified by equation (1.1), where the mapping (t, η) → Ψ(t, η) is from R + × B * to D * . Written in weak form, equation (1.1) means that, for all f ∈ D, (f,μ t ) = (f, Ψ(t, µ t )). (1.2) In many applications, equation (1.2) appears in the form where the mapping (t, η) → A[t, η] is from R + × B * to bounded linear operators A[t, η] : D → B such that, for each pair (t, η) ∈ R + × B * , A[t, η] generates a strongly continuous semigroup in B. Of major interest is the case when B * is the space of measures on a locally compact space. It turns out that, in this case and under mild technical assumptions, an evolution (1.2) preserving positivity has to be of form (1.3) with the operators A[t, η] generating Feller processes, see Theorems 6.8.1 and 11.5.1 from [7]. Equation (1.3) will be referred to as the general kinetic equation. It contains most of the basic equations from non-equilibrium statistical mechanics and evolutionary biology, see monograph [7] for an extensive discussion. In this paper we are mostly interested in yet more general equation. Namely, let M be a closed convex subset of B * , which is also closed in The main object of this paper is a "path-dependent" version of equation we call (1.5) an adapted kinetic equation, where {µ ≤t } is a shorthand for {µ s } 0≤s≤t . Adapted kinetic equations can be seen as analytic analogs of stochastic differential equations with adapted coefficients, and their wellposedness can be obtained by similar methods. When the generators A only depend on the future of the trajectory of {µ. Remark 1.1. The terminology of adaptiveness and anticipation here should not be associated with any randomness, as in more standard usage of these words. Equation (1.4) has many applications. Let us briefly explain the crucial role played by this equation in the mean field game (MFG) methodology, which is based on the analysis of coupled systems of forward -backward evolutions and which constitutes a quickly developing area of research in modern theory of optimization, see detail e.g. in [2,3,4,5,10]. Assume that the objective of an agent described by a controlled stochastic process X(s) (passing through x at time t), given an evolutionμ . of the empirical distributions of a large number of other players, is to maximize (over a suitable class of controls {u.}) the payoff V (t, x,μ ≥t , u ≥t ) = E T t J(s, X(s),μ s , u s ) ds + V T (X(T )) , By dynamic programming the optimal payoff of such an agent should satisfy certain HJB equation (backward evolution). On the other hand, when all optimal controls {u t = u t (μ ≥t )} are found, the empirical measure µ . 
of the resulting process satisfies the controlled kinetic equation The main consistency condition of MFG is in the requirement that the initial µ coincides with the resulting µ. Equalizingμ . = µ . in (1.7) clearly leads to anticipating kinetic equation of type (1.6). Our main results concern the well-posedness of adaptive kinetic equations (1.5), the local well-posedness and global existence of anticipating and general path dependent kinetic equations and finally some regularity result for path-independent equations arising from their probabilistic interpretations. The rest of the paper is organised as follows. In Section 2 our main results are formulated and in Section 3 they are proved. Section 4 yields some regularity results for the solutions of kinetic equations leading also to simple verifiable conditions for compactness assumption (2.12) of our main global existence result. Section 5 show some examples. Main results Let us recall the notion of propagators needed for the proper formulation of our results. For a set S, a family of mappings U t,r from S to itself, parametrized by the pairs of numbers r ≤ t (resp. t ≤ r) from a given finite or infinite interval is called a (forward) propagator (resp. a backward propagator) in S, if U t,t is the identity operator in S for all t and the following chain rule, or propagator equation, holds for r ≤ s ≤ t (resp. for t ≤ s ≤ r): U t,s U s,r = U t,r . A backward propagator U t,r of bounded linear operators on a Banach space B is called strongly continuous if the operators U t,r depend strongly continuously on t and r. Suppose U t,r is a strongly continuous backward propagator of bounded linear operators on a Banach space with a common invariant domain D. Let A t , t ≥ 0, be a family of bounded linear operators D → B that are strongly continuous in t outside a set S of zero-measure in R. Let us say that the family A t generates U t,r on D if, for any f ∈ D, the equations hold for all s outside S with the derivatives taken in the topology of B. In particular, if the operators A t depend strongly continuously on t, equations (2.1) hold for all s and f ∈ D, where for s = t (resp. s = r) it is assumed to be only a right (resp. left) derivative. In the case of propagators in the space of measures, the second equation in (2.1) is called the backward Kolomogorov equation. We can now formulate our main results. for some positive constants c 2 , c 3 , and with their dual propagatorsŨ is well posed, that is for any µ ∈ M, it has a unique solution Φ t (µ) ∈ M (that is (2.6) holds for all f ∈ D) that depends Lipschitz continuously on time t and the initial data in the norm of D * , i.e. 7) and for µ, η ∈ M (global well-posedness for general "path-independent" case) Under the assumptions in Theorem 2.1, but without the locality constraint (2.5), the Cauchy problem for kinetic equation , and the transformationsŨ t,s of M form a propagator depending Lipschitz continuously on time t and the initial data in the norm of D * , i.e. with a constant c(T, K) depending on T and K. is well posed in M and its unique solution depends Lipschitz continuously on initial data in the norm of D * . Theorem 2.4 (global existence of the solution for general "path dependent" case). Under the assumptions in Theorem 2.1, but without the locality constraint (2.5), assume additionally that for any t from a dense subset of is relatively compact in M. 
Then a solution to the Cauchy problem In Proposition 4.3 in Section 4, we give the conditions under which the compactness assumption (2.12) holds. Proofs of the main results Proof of Theorem 2.1 By duality, for any {ξ 1 Next, we need to estimate the difference of the two propagators. Define an operator-valued function Y (r) : Then, together with assumptions (2.2) and (2.4), . Hence by the contraction principle there exists a unique fixed point for this mapping and hence a unique solution to equation (2.6). Inequality (2.7) follows directly from (2.6). Finally, if Φ t (µ) = µ t and Φ t (η) = η t , then From (2.4) and (3.1), Proof of Theorem 2.2 The global unique solution of (2.9) is constructed by extending local unique solutions of (2.6) via iterations, as is routinely performed in the theory of ordinary differential equations (ODE). To prove uniqueness and continuous dependence on the initial condition, let us assume that µ t and η t are some solutions with the initial conditions µ and η respectively. Instead of (3.2), we now get By Gronwall's lemma, this implies yielding uniqueness and Lipchitz continuity of solutions with respect to initial data. Nonlinear Markov evolutions and its regularity This section is designed to provide a probabilistic interpretation and, as a consequence, certain regularity properties for nonlinear Markov evolution µ t solving kinetic equation (2.9) in the case when B = C ∞ (R d ) and M = P(R d ) is the set of probability measures on R d , so that B * is the space of signed Borel measures on R d and K = sup µ∈P(R d ) µ B * = 1. As a consequence, w shall present a simple criterion for the main compactness assumption of Theorem 2.4. We will use the following notations. is the Banach space of continuously differentiable and bounded functions f on R d such that the derivative f ′ belongs to is the Banach space of twice continuously differentiable and bounded functions f on R d such that the first derivative f ′ and the second derivative where ∇ denotes the gradient operator; for (t, z, µ) ∈ [0, T ] × R d × P(R d ), G(t, z, µ) is a symmetric non-negative matrix, b(t, z, µ) is a vector, ν(t, z, µ, ·) is a Lévy measure on R d , i.e. depending measurably on t, z, µ, and 1 B 1 denotes, as usual, the indicator function of the unit ball in R d . Assume that each operator (4.1) generates a Feller process with one and the same domain D such that C 2 is a martingale. Proof. By the assumptions of Theorem 2.2, a solution µ t ∈ P(R d ) of equation (2.9) with initial condition µ s = µ specifies a propagatorŨ t,r [µ . ], s ≤ r ≤ t, of linear transformations in B * , solving the Cauchy problems for equation In its turn, for any ν ∈ P(R d ), equation (4.4) specifies marginal distributions of a usual (linear) Markov process {X µ s,t (ν)} in R d with the initial measure ν. Clearly, the process {X µ s,t (µ)} is a solution to our martingale problem. We shall refer to the family of processes constructed in Proposition 4.1 as to nonlinear Markov process generated by the family A[t, µ]. Then the distributions L(X µ s,t ) = Φ t,s (µ), solving the Cauchy problem for equation (2.9) with initial condition µ s have uniformly bounded pth moments, i.e. (4.6) and are 1 2 -Hölder continuous with respect to t in the space (C Lip (R d )) * , i.e. with a positive constant c. Proof. For a fixed trajectory {µ t } t≥0 with initial value µ, one can consider {X µ s,t } as a usual Markov process. 
Using the estimates for the moments of such processes from formula (5.61) of [8] (more precisely, its straightforward extension to time non-homogeneous case), one obtains from (4.5) that E min |X µ s,t −x| 2 , |X µ s,t −x| p |X µ s,s =x) ≤ e C(T,P )(t−s) − 1. (4.8) This implies (4.6). Moreover, (4.8) implies that 10) and consequently where constants C(T, P ) can have different values in various formulas above. Our main purpose for presenting Proposition 4.2 lies in the following corollary. Then the compactness condition from Theorem 2.4 (stating that set (2.12) is compact in P(R d )) holds for any initial measure µ with a finite moment of pth order. Proof. It follows from (4.6) and an observation that a set of probability laws on R d with a bounded pth moment, p > 0, is tight and hence relatively compact. Basic examples of operators A[t, µ] In this section, we present some basic examples of generators that fit to assumptions of our main Theorems and are relevant to the study of mean field games. Notice that the most nontrivial condition of Theorem 2.1 is (ii), as it concerns the difficult question from the theory of usual Markov process, on when a given pre-generator of Lévy-Khintchine type does really generate a Markov process. Even more difficult is the situation with time-dependent generators, as the standard semigroup methods (resolvents and Hille-Phillips-Iosida theorem) are not applicable. Example 5.1. Nonlinear Lévy processes are specified by a families of generators of type (4.1) such that all coefficients do not depend on z, i.e. The following statement is a consequence of Proposition 7.1 from [7]. Proposition 5.1. Supposed that the coefficients G, b, ν are continuous in t and Lipschitz continuous in µ in the norm of Banach space (C 2 ∞ (R d )) * , i.e. Notice that the most natural examples of a functional F on measures that are Lipschitz continuous (or even smooth) in space (C 2 ∞ (R d )) * are supplied by smooth functions of monomials g(x 1 , · · · , x n )µ(dx 1 ) · · · µ(dx n ) with sufficient smooth functions g. Example 5.2. McKean-Vlasov diffusion are specified by the following stochastic differential equation and W t is a standard Brownian motion. The corresponding generator is given by where G(t, x, µ) = tr{σ(t, x, µ)σ T (t, x, µ)}. It is well known (and follows from Ito's calculus) that if the coefficients of a diffusion are Lipshitz continuous, the corresponding SDE is well posed, implying the following. Let us note finally that not all interesting evolution of type (1.3) satisfy the Lipschitz continuity assumption used in our main results. For instance, a different type of continuity should be applied for coefficients depending on measures via their quantiles, e.g. value at risk (VAR). This type of evolution is analyzed in [9] inspired by preprint [1].
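The displayed equations of Example 5.2 are not reproduced in the text above; for reference, a standard form of the McKean-Vlasov dynamics and the associated generator, written here from the surrounding definitions and therefore to be checked against the original, is:

```latex
% Standard form of Example 5.2 (McKean--Vlasov diffusion); assumed, not quoted.
% G(t,x,\mu) is built from \sigma(t,x,\mu)\sigma^{T}(t,x,\mu) as in the text.
\begin{align*}
  dX_t &= b(t, X_t, \mu_t)\,dt + \sigma(t, X_t, \mu_t)\,dW_t,
          \qquad \mu_t = \mathrm{Law}(X_t),\\
  A[t,\mu] f(x) &= \tfrac{1}{2}\bigl(G(t,x,\mu)\nabla,\nabla\bigr) f(x)
          + \bigl(b(t,x,\mu),\nabla f(x)\bigr).
\end{align*}
```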
3,375.4
2013-03-21T00:00:00.000
[ "Mathematics" ]
Simple Fusion: Return of the Language Model Neural Machine Translation (NMT) typically leverages monolingual data in training through backtranslation. We investigate an alternative simple method to use monolingual data for NMT training: We combine the scores of a pre-trained and fixed language model (LM) with the scores of a translation model (TM) while the TM is trained from scratch. To achieve that, we train the translation model to predict the residual probability of the training data added to the prediction of the LM. This enables the TM to focus its capacity on modeling the source sentence since it can rely on the LM for fluency. We show that our method outperforms previous approaches to integrate LMs into NMT while the architecture is simpler as it does not require gating networks to balance TM and LM. We observe gains of between +0.24 and +2.36 BLEU on all four test sets (English-Turkish, Turkish-English, Estonian-English, Xhosa-English) on top of ensembles without LM. We compare our method with alternative ways to utilize monolingual data such as backtranslation, shallow fusion, and cold fusion. Introduction Machine translation (MT) relies on parallel training data, which is difficult to acquire. In contrast, monolingual data is abundant for most languages and domains. Traditional statistical machine translation (SMT) effectively leverages monolingual data using language models (LMs) (Brants et al., 2007). The combination of LM and TM in SMT can be traced back to the noisy-channel model which applies the Bayes rule to decompose a 0 This work was done when the first author was on an internship at Facebook. translation system (Brown et al., 1993): where x = (x 1 , . . . , x m ) is the source sentence, y = (y 1 , . . . , y n ) is the target sentence, and P T M (·) and P LM (·) are translation model and language model probabilities. In contrast, NMT (Sutskever et al., 2014;Bahdanau et al., 2014) uses a discriminative model and learns the distribution P (y|x) directly end-to-end. Therefore, the vanilla training regimen for NMT is not amenable to integrating an LM or monoglingual data in a straightforward manner. An early attempt to use LMs for NMT, also known as shallow fusion, combines LM and NMT scores at inference time in a log-linear model (Gulcehre et al., 2015(Gulcehre et al., , 2017. In contrast, we integrate the LM scores during NMT training. Our training procedure first trains an LM on a large monolingual corpus. We then hold the LM fixed and train the NMT system to optimize the combined score of LM and NMT on the parallel training set. This allows the NMT model to focus on modeling the source sentence, while the LM handles the generation based on the targetside history. Sriram et al. (2017) explored a similar idea for speech recognition using a gating network for controlling the relative contribution of the LM. We show that our simpler architecture without an explicit control mechanism is effective for machine translation. We observe gains of up to more than 2 BLEU points from adding the LM to TM training. We also show that our method can be combined with backtranslation (Sennrich et al., 2016a), yielding further gains over systems without LM. 204 2 Related Work Inference-time Combination Shallow fusion (Gulcehre et al., 2015) integrates an LM by changing the decoding objective to: y = argmax y log P TM (y|x) + λ log P LM (y). Cold Fusion Shallow fusion combines a fixed TM with a fixed LM at inference time. Sriram et al. 
(2017) proposed to keep the LM fixed, but train a sequence to sequence (Seq2Seq) NMT model from scratch which includes the LM as a fixed part of the network. They argue that this approach allows the Seq2Seq network to use its model capacity for the conditioning on the source sequence since the language modeling aspect is already covered by the LM. Their cold fusion architecture includes a gating network which learns to regulate the contributions of the LM at each time step. They demonstrated superior performance of cold fusion on a speech recognition task. Gulcehre et al. (2015Gulcehre et al. ( , 2017 suggest to combine a pre-trained RNN-LM with a pre-trained NMT system using a controller network that dynamically adjusts the weights between RNN-LM and NMT at each time step (deep fusion). Both deep fusion and n-best reranking with count-based LMs have been used in WMT evaluation systems (Jean et al., 2015;Wang et al., 2017). An important limitation of these approaches is that LM and TM are trained independently. Other Approaches A second line of research augments the parallel training data with additional synthetic data from a monolingual corpus in the target language. The source sentences can be generated with a separate translation system (Schwenk, 2008;Sennrich et al., 2016a) (backtranslation), or simply copied over from the target side (Currey et al., 2017). Since data augmentation methods rely on some balance between real and synthetic data (Sennrich et al., 2016a;Currey et al., 2017;Poncelas et al., 2018), they can often only use a small fraction of the available monolingual data. A third class of approaches change the NMT training loss function to incorporate monolingual data. For example, Cheng et al. (2016); Tu et al. (2017) proposed to add autoencoder terms to the training objective which capture how well a sentence can be reconstructed from its translated representation. However, training with respect to the new loss is often computationally intensive and requires approximations. Alternatively, multi-task learning has been used to incorporate source-side (Zhang and Zong, 2016) and target-side (Domhan and Hieber, 2017) monolingual data. Another way of utilizing monolingual data in both source and target language is to warm start Seq2Seq training from pre-trained encoder and decoder networks (Ramachandran et al., 2017;Skorokhodov et al., 2018). We note that pre-training can be used in combination with our approach. An extreme form of leveraging monolingual training data is unsupervised NMT (Lample et al., 2017;Artetxe et al., 2017) which removes the need for parallel training data entirely. In this work, we assume to have access to some amount of parallel training data, but aim to improve the translation quality even further by using a language model. Translation Model Training under Language Model Predictions In spirit of the cold fusion technique of Sriram et al. (2017) we also keep the LM fixed when training the translation network. However, we greatly simplify the architecture by removing the need for a gating network. We follow the usual left-to-right factorization in NMT: Let S TM (y t |y t−1 1 , x) be the output of the TM projection layer without softmax, i.e., what we would normally call the logits. We investigate two different ways to parameterize P (y t |y t−1 1 , x) using S TM (y t |y t−1 1 , x) and a fixed and pretrained language model P LM (·): POSTNORM and PRENORM. POSTNORM This variant is directly inspired by shallow fusion (Eq. 
2) as we turn S TM (y t |y t−1 1 , x) 205 into a probability distribution using a softmax layer, and sum its log-probabilities with the logprobabilities of the LM, i.e. multiply their probabilities: PRENORM Another option is to apply normalization after combining the raw S TM (y t |y t−1 1 , x) scores with the LM log-probability: 3.1 Theoretical Discussion of POSTNORM and PRENORM Note that P (y t |y t−1 1 , x) might not represent a valid probability distribution under the POST-NORM criterion since, as component-wise product of two distributions, it is not guaranteed to sum to 1. A way to fix this issue would be to combine TM and LM probabilities in the probability space rather than in the log space. However, we have found that probability space combination does not work as well as POSTNORM in our experiments. We can describe S TM (y t |y t−1 1 , x) under POSTNORM informally as the residual probability added to the prediction of the LM. It is interesting to investigate what signal is actually propagated into S TM (y t |y t−1 1 , x) when training with the PRENORM strategy. We can rewrite P (y t |y t−1 1 , x) as: Alternatively, we can decompose P (y t |y t−1 1 , x) as follows using Eq. 5: Combining Eq. 6 and Eq. 7 leads to: This means that S TM (y t |y t−1 1 , x) under PRENORM is trained to predict how much more likely the source sentence becomes when a particular target token y t is revealed. Experimental Setup We evaluate our method on a variety of publicly available and proprietary data sets. For our Turkish-English (tr-en), English-Turkish (entr), and Estonian-English (et-en) experiments we use all available parallel data from the WMT18 evaluation campaign to train the translation models. Our language models are trained on News Crawl 2017. We use news-test2017 as development ("dev") set and news-test2018 as test set. Additionally, we collected our own proprietary corpus of public posts on Facebook. We refer to it as 'INTERNAL' data set. This corpus consists of monolingual English in-domain sentences and parallel data in Xhosa-English. Training set sizes are summarized in Tables 1 and 2. Our preprocessing consists of lower-casing, tokenization, and subword-segmentation using joint byte pair encoding (Sennrich et al., 2016b) with 16K merge operations. On Turkish, we additionally remove diacritics from the text. On WMT we use lower-cased Sacre-BLEU 1 (Post, 2018) to be comparable with the literature. 2 On our internal data we report tokenized BLEU scores. Our Seq2Seq models are encoder-decoder architectures (Sutskever et al., 2014;Bahdanau et al., 2014) with dot-product attention (Luong et al., 2015b) trained with our PyTorch Translate library. 3 Both decoder and encoder consist of two 512-dimensional LSTM layers and 256dimensional embeddings. The first encoder layer is bidirectional, the second one runs from right to left. Our training and architecture hyperparameters are summarized in Tab. 3. Our LSTM-based LMs have the same size and architecture as the decoder networks, but do not use attention and do not condition on the source sentence. We run beam search with beam size of 6 in all our experiments. For each setup we train five models using SGD (batch size of 32 sentences) with learning rate decay and label smoothing, and either select the best one (single system) or ensemble the four best models based on dev set BLEU score. Results Tab. 4 compares our methods PRENORM and POSTNORM on the tested language pairs. Shallow fusion (Sec. 
2.1) often leads to minor improvements over the baseline for both single systems and ensembles. We also reimplemented the Table 4: Comparison of our PRENORM and POST-NORM combination strategies with shallow fusion (Gulcehre et al., 2015) and cold fusion (Sriram et al., 2017) under an RNN-LM. cold fusion technique (Sec. 2.2) for comparison. For our machine translation experiments we report mixed results with cold fusion, with performance ranging between 0.33 BLEU gain on Xhosa-English and slight BLEU degradation in most of our Turkish-English experiments. Both of our methods, PRENORM and POST-NORM yield significant improvements in BLEU across the board. We report more consistent gains with POSTNORM than with PRENORM. All our POSTNORM systems outperform both shallow fusion and cold fusion on all language pairs, yielding test set gains of up to +2.36 BLEU (Xhosa-English ensembles). Discussion and Analysis Backtranslation A very popular technique to use monolingual data for NMT is backtranslation (Sennrich et al., 2016a). Backtranslation uses a reverse NMT system to translate monolingual target language sentences into the source language, and adds the newly generated sentence pairs to the training data. The amount of monolingual data which can be used for backtranslation is usually limited by the size of the parallel corpus as the translation quality suffers when the mixing ratio between synthetic and real source sentences is too large (Poncelas et al., 2018). This is a severe limitation particularly for low-resource MT. Fig. 1 shows that both our baseline system without LM and our POSTNORM system benefit greatly from backtranslation up to a mixing ratio of 1:8, but degrade slightly if this ratio is exceeded. POSTNORM is significantly better than the baseline even when using it in combination with backtranslation. Training convergence We have found that training converges faster under the POSTNORM loss. Fig. 2 plots the training curves of our sys- tems. The baseline (orange curve) reaches its maximum of 19.39 BLEU after 28 training epochs. POSTNORM surpasses this BLEU score already after 12 epochs. Language model type So far we have used recurrent neural network language models (Mikolov et al., 2010, RNN-LM) with LSTM cells in all our experiments. We can also parameterize an n-gram language model with a feedforward neural network (Bengio et al., 2003, FFN-LM). In order to compare both language model types we trained a 4-gram feedforward LM with two 512dimensional hidden layers and 256-dimensional embeddings on Turkish monolingual data. Tab. 5 shows that the PRENORM strategy works particularly well for the n-gram LM. However, using an RNN-LM with the POSTNORM strategy still gives the best overall performance. Using both RNN and n-gram LM at the same time does not improve translation quality any further (Tab. 6). Impact on the TM distribution With the POST-NORM strategy, the TM still produces a distribution over the target vocabulary as the scores are Reference He says that years later, he still lives in fear. Baseline (no LM) He says that, for years, he still lives in fear. This work (PRENORM) He says that many years later he still lives in fear. Reference "I'm afraid," he says. Baseline (no LM) "I fear," says he. This work (PRENORM) "I am afraid," he says. normalized before the combination with the LM. This raises a natural question: How different are the distributions generated by a TM trained under POSTNORM loss from the distributions of the baseline system without LM? Tab. 
7 gives some insight to that question. As expected, the RNN-LM has higher perplexity than the baseline as it is a weaker model of translation. The RNN-LM also has a higher average entropy which indicates that the LM distributions are smoother than those from the baseline translation model. The TM trained under POSTNORM loss has a much higher perplexity which suggests that it strongly relies on the LM predictions and performs poorly when it is not combined with it. However, the average entropy is much lower (1.82) than both other models, i.e. it produces much sharper distributions. Language models improve fluency A traditional interpretation of the role of an LM in MT is that it is (also) responsible for the fluency of translations (Koehn, 2009). Thus, we would expect more fluent translations from our method than from a system without LM. Tab. 8 breaks down the BLEU score of the baseline and the PRENORM ensembles on Estonian-English into n-gram precisions. Most of the BLEU gains can be attributed to the increase in precision of higher order n-grams, indicating improvements in fluency. Tab. 9 shows some examples where our PRENORM system produces a more fluent translation than the baseline. Training set size We artificially reduced the size of the English-Turkish training set even further to investigate how well our method performs in low-resource settings (Fig. 3). Our POSTNORM strategy outperforms the baseline regardless of the number of training sentences, but the gains are smaller on very small training sets. Conclusion We have presented a simple yet very effective method to use language models in NMT which incorporates the LM already into NMT training. We reported significant and consistent gains from using our method in four language directions over two alternative ways to integrate LMs into NMT (shallow fusion and cold fusion) and showed that our approach works well even in combination with backtranslation and on top of ensembles. Our method leads to faster training convergence and more fluent translations than a baseline system without LM.
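To make the two combination strategies of Section 3 concrete, here is a minimal PyTorch-style sketch; it is not the released code, the tensor names are illustrative, and the LM is assumed fixed with gradients disabled:

```python
import torch
import torch.nn.functional as F

def combine(s_tm, log_p_lm, mode="postnorm"):
    """
    s_tm:      TM projection-layer outputs before the softmax (the "logits"),
               shape (batch, vocab).
    log_p_lm:  log-probabilities of the fixed, pre-trained LM, same shape.
    """
    if mode == "postnorm":
        # POSTNORM: normalize the TM scores first, then add LM log-probs
        # (i.e. multiply the probabilities); the result is not guaranteed
        # to be normalized, as discussed in Sec. 3.1.
        return F.log_softmax(s_tm, dim=-1) + log_p_lm
    elif mode == "prenorm":
        # PRENORM: add the LM log-probs to the raw scores, then normalize.
        return F.log_softmax(s_tm + log_p_lm, dim=-1)
    raise ValueError(mode)

# Training-step sketch: the LM is frozen, only the TM receives gradients.
s_tm = torch.randn(2, 100, requires_grad=True)          # stand-in TM logits
with torch.no_grad():
    log_p_lm = F.log_softmax(torch.randn(2, 100), dim=-1)
targets = torch.tensor([3, 7])
loss = F.nll_loss(combine(s_tm, log_p_lm, "postnorm"), targets)
loss.backward()
```

Training then minimizes the negative combined log-score of the reference token, so only the TM parameters are updated while the LM stays fixed.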
3,705.8
2018-09-01T00:00:00.000
[ "Computer Science" ]
Further results on the existence of super-simple pairwise balanced designs with block sizes 3 and 4 In statistical planning of experiments, super-simple designs are the ones providing samples with maximum intersection as small as possible. Super-simple pairwise balanced designs are useful in constructing other types of super-simple designs which can be applied to codes and designs. In this paper, the super-simple pairwise balanced designs with block sizes 3 and 4 are investigated and it is proved that the necessary conditions for the existence of a super-simple \begin{document}$(v, \{3,4\}, λ)$\end{document} -PBD for \begin{document}$λ = 7,9$\end{document} and \begin{document}$λ = 2k$\end{document} , \begin{document}$k≥1$\end{document} , are sufficient with seven possible exceptions. In the end, several optical orthogonal codes and superimposed codes are given. Introduction A pairwise balanced design (or PBD) is a pair (X , A) such that X is a set of elements called points, and A is a set of subsets (called blocks) of X , each of cardinality at least two, such that every pair of points occurs in exactly λ blocks of A. If v is a positive integer and K is a set of positive integers, each of which is greater than one, then we say that (X , A) is a (v, K, λ)-PBD if |X | = v, and |A| ∈ K for every A ∈ A. We denote B(K, λ) = {v : there exists a (v, K, λ)-PBD}. A set K is said to be PBD-closed if B(K, λ) = K. A PBD is resolvable if its blocks can be partitioned into parallel classes; a parallel class is a set of point-disjoint blocks whose union is the set of all points. The notation (v, K, λ)-RPBD is used for a resolvable PBD. When K = {k}, a (v, K, λ)-PBD is a balanced incomplete block design, the notations (v, k, λ)-BIBD and (v, k, λ)-RBIBD are sometimes used in this case. A design is said to be simple if it contains no repeated blocks. A design is said to be super-simple if the intersection of any two blocks has at most two elements. When k = 3, a super-simple design is just a simple design. When λ = 1, the designs are necessarily super-simple. A super-simple (v, K 1 , λ)-PBD is also a super-simple (v, K 2 , λ)-PBD if K 1 ⊆ K 2 . In this paper, the existence of a super-simple (v, {3, 4}, λ)-PBD for λ = 7, 9 and λ = 2k, k ≥ 4, is investigated. The necessary conditions for the existence of such a super-simple design are v ≥ λ + 2 and λv(v − 1) ≡ 0 (mod 3). We shall use direct and recursive constructions to show that the necessary conditions are also sufficient with some possible exceptions. Specifically, we shall prove the following theorem. The paper is organized as follows. Some recursive constructions are provided in Section 2. Some ingredient super-simple designs are given directly by computer search in Section 3. The proof of our main theorem is given in Section 4. Some applications in optical orthogonal codes and superimposed codes are mentioned in Section 5. Recursive constructions In this section, the auxiliary design (group divisible design) is introduced and some known results stated for later use, and we also give some standard recursive constructions. A group divisible design (or GDD) is a triple (X , G, B) which satisfies the following properties: (i) G is a partition of a set X (of points) into subsets called groups. (ii) B is a set of subsets of X (called blocks) such that a group and a block contain at most one common point. (iii) Every pair of points from distinct groups occurs in exactly λ blocks. The group type (or type) of GDD is the multiset {|G| : G ∈ G}. 
We usually use an "exponential" notation to describe types: so type g u1 1 g u2 2 · · · g u k k denotes u i occurrences of g i , 1 ≤ i ≤ k, in the multiset. A GDD with block sizes from a set of positive integers K is called a (K, λ)-GDD. When λ = 1, we simply write K-GDD. When K = {k}, we simply write k for K. Taking the groups of a GDD as blocks yields a PBD, and taking a parallel class of blocks of a PBD as groups also yields a GDD. A (k, λ)-GDD of group type v k is called a transversal design and denoted by TD λ (k, v) for short. The known result on super-simple TD λ (4, v) is listed in the following which is used in Section 4. The following results are obvious but very useful. Their proofs are omitted here. Direct constructions In this section, direct constructions are used and some super-simple (v, {3, 4}, λ)-PBDs for small values of v are obtained, which will be used as master designs or input designs in the recursive constructions. All these designs are obtained by computer. Usually, it is difficult to find all the blocks of a design directly. So, a technique of "+d (mod v)" is used, which means that we try to find a subset S ⊆ B and an element d ∈ Z v such that {B + kd : B ∈ S, k ∈ Z} = B. The blocks of S are called base blocks. The "+d" is omitted when d = 1 and then the design is cyclic. Sometimes S is divided into two parts: P and R, and we try to find an element m ∈ Z v and an integer s such that there is a subset P 1 ⊆ P satisfying Here m is a partial multiplier of order s of the design. In this article, m is taken to be some unit of the ring Z v , i.e., m satisfies that gcd(m, v) = 1. Further, the founded base blocks of S are shuffled when the program takes too much time to find a design. Most of these ideas come from the previous papers such as [16,18,20]. Proof. For v ∈ {22, 34, 46}, we take the point set X = Z v , the base blocks are listed below and all the required blocks can be generated from them by +2 (mod v). Proof. For v ∈ {37, 49, 73}, we take the point set as X = Z v . With a computer program we found the required base blocks, which are divided into two parts, P and R, where P consists of some base blocks with a partial multiplier m of order s (i.e., each base block of P has to be multiplied by m i for 0 ≤ i ≤ s − 1), and R is the set of the remaining base blocks. We list P, m, s and R below. The desired super-simple design is generated by developing the base blocks (mod v). For v = 40, we take the point set as X = Z 40 . The base blocks are also divided into two parts, P and R, which are listed below and all the required blocks can be generated from them by +2 (mod v). Proof. For each v ∈ M , we take the point set X = Z v . With a computer program we found the required base blocks, which are divided into two parts, P and R, where P consists of some base blocks with a partial multiplier m of order s (i.e., each base block of P has to be multiplied by m i for 0 ≤ i ≤ s−1), and R is the set of the remaining base blocks. We list P, m, s and R below. The desired super-simple design is generated by developing the base blocks +2 (mod v). To prove Theorem 1.5, we shall divide it into two cases by the remaining value of λ = 7, 9. When λ = 9, the necessary conditions for a super-simple (v, {3, 4}, 9)-PBD become v ≥ 11. We shall prove that such a necessary condition is also sufficient except possibly for v ∈ {12, 16}. 
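The direct constructions above develop a handful of base blocks by repeatedly adding d modulo v and then rely on the super-simple condition; a brief sketch of that development and check is given below, with placeholder base blocks rather than the ones from the lemmas:

```python
from itertools import combinations
from math import gcd

def develop(base_blocks, v, d=2):
    """Develop each base block by +d (mod v); return the full block multiset."""
    blocks = []
    for b in base_blocks:
        for k in range(v // gcd(d, v)):
            blocks.append(frozenset((x + k * d) % v for x in b))
    return blocks

def is_super_simple(blocks):
    """True if every two blocks of the multiset meet in at most two points."""
    return all(len(a & b) <= 2 for a, b in combinations(blocks, 2))

base = [frozenset({0, 1, 3, 7}), frozenset({0, 2, 8})]  # illustrative only
blocks = develop(base, v=22, d=2)
print(len(blocks), is_super_simple(blocks))
```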
Concluding remarks Super-simple cyclic designs with small values are believed to be useful not only in constructing new larger super-simple cyclic designs, but also in constructing optical orthogonal codes with index two and superimposed codes. As defined in Chung [22]. A (v, k, ρ) optical orthogonal code (OOC), C, is a family of (0, 1) sequences (called codewords) of length v and weight k which satisfy the following two properties (all subscripts are reduced modulo v). (1) (The Autocorrelation Property) 0≤t≤v x t x t+i ≤ ρ for any x = (x 0 , x 1 , . . . , x v−1 ) ∈ C and any integer i ≡ 0( (mod v)); (2) (The Cross-Correlation Property) 0≤t≤v x t y t+i ≤ ρ for any x = (x 0 , x 1 , . . . , x v−1 ) ∈ C and y = (y 0 , y 1 , . . . , y v−1 ) ∈ C with x = y, and any integer i. The parameter ρ is the index of the OOC. It is well known that the number of codewords of a (v, k, ρ)-OOC can not exceed 1 [30]). The OOC is said to be optimal when its size reaches this bound. Suppose that there exists a super-simple (v, k, λ)-CBIBD. We construct a (0, 1)sequence of length v from each of the base blocks of the super-simple CBIBD such that the i-th position is 1 if and only if i is an element of the base block. According to the definitions of a super-simple CBIBD and an OOC, it is easy to see that the derived (0, 1)-sequences constitute a (v, 4, 2)-OOC with λ(v−1) k(k−1) codewords. So we have the following by Lemma 3.2 and Lemma 4.3. The main problem in the study of superimposed codes is to find the minimal length N (T ; w, r) of a (w, r) superimposed code for a given cardinality T . The following result can be found in [32]. [37] Y. Zhang, K. Chen
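The autocorrelation and cross-correlation properties defining a (v, k, ρ)-OOC, as stated above, can be verified directly from the codewords; a small sketch follows, with toy codewords that are purely illustrative:

```python
def correlation(x, y, shift, v):
    """Periodic correlation of two 0/1 sequences of length v at a given shift."""
    return sum(x[t] * y[(t + shift) % v] for t in range(v))

def is_ooc(codewords, v, rho):
    # Autocorrelation: every nonzero cyclic shift of a codeword with itself.
    auto_ok = all(correlation(x, x, i, v) <= rho
                  for x in codewords for i in range(1, v))
    # Cross-correlation: every shift (including zero) of two distinct codewords.
    cross_ok = all(correlation(x, y, i, v) <= rho
                   for a, x in enumerate(codewords)
                   for b, y in enumerate(codewords) if a != b
                   for i in range(v))
    return auto_ok and cross_ok

# Toy example: two weight-3 codewords of length 13 (illustrative only).
c1 = [1 if i in (0, 1, 4) else 0 for i in range(13)]
c2 = [1 if i in (0, 2, 7) else 0 for i in range(13)]
print(is_ooc([c1, c2], v=13, rho=2))
```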
2,387
2018-03-27T00:00:00.000
[ "Computer Science", "Mathematics" ]
Free Space Optical Communications — Theory and Practices FSO is a line-of-sight technology that uses lasers to provide optical bandwidth connections or FSO is an optical communication technique that propagate the light in free space means air, outer space, vacuum, or something similar to wirelessly transmit data for telecommunication and computer networking. Currently, FSO is capable of up to 2.5 Gbps [1] of data, voice and video communications through the air, allowing optical connectivity without requiring fiberoptic cable or securing spectrum licenses. Operate between the 780 – 1600 nm wavelengths bands and use O/E and E/O converters. FSO requires light, which can be focused by using either light emitting diodes (LEDs) or lasers (light amplification by stimulated emission of radiation). The use of lasers is a simple concept similar to optical transmissions using fiberoptic cables; the only difference is the transmission media. Light travels through air faster than it does through glass, so it is fair to classify FSO as optical communications at the speed of the light. FSO communication is considered as an alternative to radio relay link line-of sight (LOS) communication systems. This chapter is concentrate on ground-to-ground free-space laser communications. FSO components are contain three stages: transmitter to send of optical radiation through the atmosphere obeys the Beer-Lamberts`s law, free space transmission channel where exist the turbulent eddies (cloud, rain, smoke, gases, temperature variations, fog and aerosol) and receiver to process the received signal. Typical links are between 300 m and 5 km, although longer distances can be deployed such as 8–11 km are possible depending on the speed and required availability. The importance of this chapter is to introduce the FSO technique step by step. We will briefly focus on concept of FSO technology in section 1. Section 2 presents an optical wireless transceiver design and FSO main components and transmission media. Mathematical model of atmospheric turbulence of FSO is illustrated in section 3. Second part of this study is a case study to adapt between theoretical and practical parts of FSO technique, where series of simulations results are demonstrated and analyzed. In section 4, we demonstrate the first practical part, simulation results and discussion of geometric loss and total attenuation. The second part of case study explores the optical link budget is presented in section 5. Third part of case study shows the simulation results of BER and SNR of this proposed work is demonstrated in section 6. Section 7 presents some concluding remarks. Finally, we propose some important questions related to this chapter for self-evaluation. FSO applications [1,2] • Telecommunication and computer networking • Metro network extensions: carriers can deploy FSO to extend existing metropolitan area fiber rings, to connect new networks, and, in their core infrastructure, to complete SONET rings. • Enterprise connectivity: the ease with which FSO links can be installed makes them a natural for interconnecting local area network segments that are housed in buildings separated by public streets or other right-of-way property. • Fiber backup: FSO may also be deployed in redundant links to backup fiber in place of a second fiber link. • Backhaul: FSO can be used to carry cellular telephone traffic from antenna towers back to facilities wired into the public switched telephone network. 
• Service acceleration: FSO can be also used to provide instant service to fiber-optic customers while their fiber infrastructure is being laid. • Last-Mile access: In today's cities, more than 95% of the buildings do not have access to the fiber optic infrastructure due to the development of communication systems after the metropolitan areas. FSO technology seems a promising solution to the connection of endusers to the service providers or to other existing networks. Moreover, FSO provides highspeed connection up to Gbps, which is far more beyond the alternative systems. The advantages and disadvantages of FSO are as following [1,2]: FSO Advantages • Long distance up to 8 km. • High bit rates speed rates: the high bandwidth capability of the fiber optic of 2.5 Gbps to 10 Gbps achieved with wavelength division multiplexing (WDM). Modern systems can handle up to 160 signals and can thus expand a basic 10 Gbit/s system over a signal fiber pair to over 1.6 Tbit/s. • Immunity from electromagnetic interference: secure cannot be detected with RF meter or spectrum analyzer, very narrow and directional beams • Invisible and eye safe, no health hazards so even a butterfly can fly unscathed through a beam • Low bit error rates (BER) • Absence of side lobes • Deployment of FSO systems quickly and easily • No Fresnel zone necessary • Low maintenance (Practical) • Lower costs as compared to fiber networks (FSO costs are as low as 1/5 of fiber network costs). • License-free long-range operation (in contrast with radio communication) FSO disadvantages For terrestrial applications, the principal limiting factors are: Beam dispersion, atmospheric absorption, rain, fog, snow, interference from background light sources (including the sun), shadowing, pointing stability in wind, and pollution. Comparison between FSO vs. fiber optics vs. other technologies In the future fiber optics replaced by FSO for the following reasons: • Optics is the study of the behavior and properties of light • Optical fibers can carry a laser beam for long distances • Most of the recent large effort of digging up the ground and laying down new fiber has been directed towards extending the fiber optic backbone to new central offices, and not laying fiber directly to the customer • Like fiber, FSO uses lasers to transmit data, but instead of enclosing the data stream in a glass fiber, it is transmitted through the air. Light and electromagnetic spectrum The electromagnetic spectrum is the range of all possible frequencies of electromagnetic radiation. The "electromagnetic spectrum" of an object has a different meaning, and is instead the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object. The electromagnetic spectrum extends from below the low frequencies used for modern radio communication to gamma radiation at the short-wavelength (high-frequency) end, thereby covering wavelengths from thousands of kilometers down to a fraction of the size of an atom. The limit for long wavelengths is the size of the universe itself, while it is thought that the short wavelength limit is in the vicinity of the Planck length, although in principle the spectrum is infinite and continuous. Most parts of the electromagnetic spectrum are used in science for spectroscopic and other probing interactions, as ways to study and characterize matter. 
In addition, radiation from various parts of the spectrum has found many other uses for communications and manufacturing (see electromagnetic radiation for more applications). The electromagnetic spectrum as demonstrated in Fig. 1, can be expressed in term of wavelength, frequency, or energy. Wavelength (λ), frequency (ν) are related by the expression [3]. The higher the frequency, the higher the energy. Where c is the speed of light (2.998 × 10 8 m / s). The energy of the various components of the electromagnetic spectrum is given by the expression E = hν (2) Where h is Planck`s constant = 6.63 × 10 -34 Joule seconds. The units of wavelength are meters with the terms microns (denoted μm and equal to 10 -6 m) and nanometers (10 -9 m) being used just as frequently. Frequency is measured in Hertz (Hz), with one Hertz being equal to one cycle of one cycle of sinusoidal wave per second. A commonly used unit of energy is the electron-volt. There are several transmission windows that are nearly transparent (attenuation < 0.2 dB/km), between 780 nm and 1600 nm wavelength range. These windows are located around several specific center wavelengths: • 850 nm Characterized by low attenuation, the 850 nm window is very suitable for FSO operation. In addition, reliable, high-performance, and inexpensive transmitter and detector components are generally available and commonly used in today's service provider networks and transmission equipment. Highly sensitive silicon avalanche photo diode (APD) detector technology and advanced vertical cavity surface emitting laser (VCSEL) technology can be used for operation in this atmospheric window [4]. • 1060 nm The 1060 nm transmission window shows extremely low attenuation values. However, transmission components to build FSO system in this wavelength range are very limited and are typically bulky (e.g. YdYAG solid state lasers). Because this window is not specially used in telecommunications systems, high-grade transmission components are rare. Semiconductor lasers especially tuned to the nearby 980 nm wavelength (980 nm pump lasers for fiber amplifiers) are commercially available. However, the 980 nm wavelength range experiences atmospheric attenuation of several dB/km even under clear weather conditions. • 1250 nm The 1250 nm transmission window offers low attenuation, but transmitters operating in this wavelength range are rare. Lower power telecommunications grade lasers operating typically between 1280-1310 nm are commercially available. However, atmospheric attenuation increases drastically at 1290 nm, making this wavelength only marginally suitable for free space transmission. • 1550 nm The 1550 nm band is well suited for free space transmission due to its low attenuation, as well as the proliferation of high-quality transmitter and detector components. Components include very high-speed semiconductor laser technology suitable for WDM operation as well as amplifiers (EDFA, SOA) used to boost transmission power. Because of the attenuation properties and component availability at this range, development of WDM free space optical systems is feasible. Laser principles A laser is similar in function to an LED, but somewhat different both in how it functions and in its characteristics. The idea of stimulated emission of radiation originated with Albert Einstein around 1916. 
Until that time, physicists had believed that a photon could interact with an atom only in two ways: The photon could be absorbed and raise the atom to a higher energy level, or the photon could be emitted by the atom when it dropped into a lower energy level [5]. Figure 2 shows the typical energy diagram (term scheme) of an atom. An electron can be moved into a higher energy level by energy provided from the outside. As a basic rule, not all transitions are allowed, and the time that an electron stays in a higher energy state before it drops to a lower energy level varies. When the electron drops from a higher to a lower level, energy is released. A radiative transition that involves the emission of a photon in the visible or infrared spectrum requires a certain amount of energy difference between both energy levels. For ease of understanding, we will describe laser operation by using only two energy levels. Figure 3 illustrates the different methods of photon interaction [5]. There are three possibilities: • Induced absorption: an incoming photon whose wavelength matches the difference between the energy levels E j and E i can be absorbed by an atom that is in the lower energy state. After this interaction process, the photon disappears, but its energy is used to raise the atom to an upper energy level. • Spontaneous emission: an atom in the upper energy level can spontaneously drop to the lower level. The energy that is released during this transition takes the form of an emitted photon. The wavelength of the photon corresponds to the energy difference between the energy states E j and E i . This resembles the process of electron-hole recombination, which resulted in the emission of a photon in the LED structure. Gas-filled fluorescent lights operate through spontaneous emission. • Stimulated emission: an atom in the upper level can drop to the lower level, emitting a photon with a wavelength corresponding to the energy difference of the transition process. The actual emission process is induced by an incoming photon whose wavelength matches the energy transition level of the atom. The stimulated photon will be emitted in phase with the stimulating photon, which continues to propagate. When these three processes take place in a media such as a solid-state material or gas-filled tube, many atoms are involved. If more atoms are in the ground state (or lower excited level) than in the upper one, the number of photons entering the material will decrease due to absorption. However, if the number of photons in the upper level exceeds the number of photons in the lower level, a condition called population inversion is created. Laser operation requires the state of population inversion because under these circumstances, the number of photons increases as they propagate through the media due to the fact that more photons will encounter upper-level atoms than will meet lower-level atoms. Keep in mind that upper-level atoms cause the generation of additional photons, whereas lower-level atoms would absorb photons. A medium with population inversion has gain and has the characteristics of an amplifier. A laser is a high-frequency generator, or oscillator. To force the system to oscillate, it needs amplification, feedback, and a tuning mechanism that establishes the oscillation frequency. 
In a radio-frequency system, such feedback can be provided by filtering the output signal with a frequency filter, connecting the output signal back to the input, and electronically amplifying the signal before it is coupled back into the input stage. In the case of a laser, the medium provides the amplification. Therefore, a medium capable of laser operation is often referred to as active media. For more details about fundamental of FSO technology, readers merely can refer to reference [5], chapter 2. Laser diodes The entire commercial free-space optics industry is focused on using semiconductor lasers because of their relatively small size, high power, and cost efficiency. Most of these lasers are also used in fiber optics; therefore, availability is not a problem. From the semiconductor design point of view, two different laser structures are available: edge emitting lasers and surface-emitting lasers. With an edge emitter, the light leaves the structure through a small window of the active layer and parallel to the layer structure. Surface emitters radiate through a small window perpendicular to the layer structure. Edge emitters can produce high power. More than 100 milliwatts at modulation speeds higher than 1 GHz are commercially available in the 850 nm wavelength range. The beam profile of edge-emitting diodes is not symmetrical. A typical value for this elliptical radiation output pattern is 20 × 35 degrees. This specific feature can cause a problem when the output power has to be coupled efficiently into a fiber and external optics such as cylindrical lenses are used to increase the coupling efficiency. Surface-emitting diodes typically produce less power output. However, the beam pattern is close to being symmetrical or round. A typical value for the beam divergence angle is 12 degrees. This feature is beneficial for coupling light into a (round) optical fiber. Besides discussing basic designs of semiconductor lasers, we will also provide information regarding WDM laser sources and look into Erbium Doped Fiber Amplifiers/lasers that have been discussed recently for use in FSO systems. Basic designs of optical lenses A lens is a piece of glass or other transparent material that refracts light rays in such a way that they can form an image. Lenses can be envisioned as a series of tiny refracting prisms, and each of these prisms refracts light to produce its own image. When the prisms act together, they produce an image that can be focused at a single point. Lenses can be distinguished from one another in terms of their shape and the materials from which they are made. The shape determines whether the lens is converging or diverging. The material has a refractive index that determines the refractive properties of the lens. The horizontal axis of a lens is known as the principal axis. A converging (convex) lens directs incoming light inward toward the center axis of the beam path. Converging lenses are thicker across their middle and thinner at their upper and lower edges. When collimated 1 (parallel) light rays enter a converging lens, the light is focused to a point. The point where the light converges is called the focal point and the distance between the lens and the focal point is called focal length. A diverging (convex) lens directs incoming rays of light outward away from the axis of the beam path. Diverging lenses are thinner across their middle and thicker at their upper and lower edges. Figure 4 illustrates the behavior of converging and diverging lenses [6]. 
The focal length (f) of an optical system is a measure of how strongly the system converges or diverges light. For an optical system in air, it is the distance over which initially collimated rays are brought to a focus. A system with a shorter focal length has greater optical power than one with a long focal length; that is, it bends the rays more strongly, bringing them to a focus in a shorter distance. The focal length f is then given by where u is the distance between the light source and the lens, and v is the distance between the lens and the screen. Important definitions After illustrating the basic concepts of FSO, we return to the important definitions related to the laser power reduction due to atmospheric channel effects phenomena. These definitions are considered as the core principle of FSO transmission channel turbulence namely atmosphere, aerosol, absorption, scattering, and radiance etc. Absorption and scattering are related to the loss and redirection of the transmitted energy. The majority of these definitions will be discussed in detail in the case study of this chapter (section 4). An atmosphere is a layer of gases surrounding a planet or other material body material of sufficient mass that is held in place by the gravity of the body. An atmosphere is more likely to be retained if the gravity is high and the atmosphere's temperature is low. Earth atmospheric, which is mostly nitrogen, also contains oxygen used by most organism for respiration and carbon dioxide used by plants, algae and cyanobacteria for photosynthesis, also protects living organisms from genetic damage by solar ultraviolet radiation. Another definition of an atmosphere is the envelope of gases surrounding the earth or another planet. An aerosol is defined as a colloidal system of solid or liquid particles in a gas. An aerosol includes both the particles and the suspending gas, which is usually air. This term describes an aero-solution, clouds of microscopic particles in air. According to the literature, the size range of aerosol particles to be only from 0.1 to 1 μm another authors indicate that the size of aerosol is between 0.01 and 10 μm in radius. Another definition of aerosol is extremely-fine liquid droplets or solid particles that remain suspended in air as fog or smoke. Fog is a thick cloud of tiny water droplets suspended in the atmosphere at or near the earth's surface that obscures or restricts visibility (to a greater extent than mist; strictly, reducing visibility to below 1 km). Smoke is a visible suspension of carbon or other particles in air, typically one emitted from a burning substance. Haze is traditionally an atmospheric phenomenon where dust, smoke and other dry particles obscure the clarity of the sky. Dust is a fine powder made up of very small pieces of earth or sand. Absorption of the light is the decrease in intensity of optical radiation (light) as it passes through a material medium owing to its interaction with the medium. In the process of absorption, the energy of the light is converted to different forms of internal energy of the medium; it may be completely or partially re-emitted by the medium at frequencies other than the frequency of the absorbed radiation. Light scattering is a form of scattering in which light is the form of propagating energy which is scattered. Light scattering can be thought of as the deflection of a ray from a straight path, for example by irregularities in the propagation medium, particles, or in the interface between two media. 
Deviations from the law of reflection due to irregularities on a surface are also usually considered to be a form of scattering. When these are considered to be random and dense enough that their individual effects average out, this kind of scattered reflection is commonly referred to as diffuse reflection. Scattering has different types as Rayleigh, Mie, Tyndall, Brillion, and Raman Scattering. Radiance erasures of the quantity of radiation that passes through or is emitted from a surface and falls within a given solid angle in a specified direction. Radiance is also used to quantify emission of neutering and other particles. Radiance (in Watts): total amount of energy that flows the light source. Attenuation is the gradual loss in intensity of any kind of flux through a medium. Attenuation affects the propagation of waves and signals transmission media. Scintillation is a flash of light produced in a transparent material by the passage of a particle (an electron, an alpha particle an ion, or a high-energy photon). The process of scintillation is one of luminescence whereby light of a characteristic spectrum is emitted following the absorption of radiation. The emitted radiation is usually less energetic than that absorbed. Scintillation is an inherent molecular property in conjugated and aromatic organic molecules and arises from their electronic structures. Scintillation also occurs in many inorganic materials, including salts, gases, and liquids. Lasers and eye safety According to reference [5], certain high-power laser beams used for medical procedures can damage human skin, but the part of the human body most susceptible to lasers is the eye. Like sunlight, laser light travels in parallel rays. The human eye focuses such light to a point on the retina, the layer of cells that responds to light. Like staring directly into the sun, exposure to a laser beam of sufficient power can cause permanent eye injury. For that reason, potential eye hazards have attracted considerable attention from standards writers and regulators. The standards rely on parameters such as laser wavelength, average power over long intervals, peak power in a pulse, beam intensity, and proximity to the laser. Laser wavelength is important because only certain wavelengths-between about 400 nm and 1,550 nm-can penetrate the eye with enough intensity to damage the retina. The amount of power the eye can safely tolerate varies with wavelength. This is dominated by the absorption of light by water (the primary component in the eye) at different wavelengths. The vitreous fluid of the eye is transparent to wavelengths of 400-1,400 nm. Thus, the focusing capability of the eye causes approximately a 100,000-to-1 concentration of the power to be focused on a small spot of the retina. However, in the far infrared (1,400 nm and higher), such light is not transmitted by the vitreous fluid, so the power is less likely to be transferred to the retina. Although damage to the corneal surface is a possibility, the focusing capabilities of the eye do not lead to large magnification of power densities. Therefore, much greater power is required to cause damage. The relevance of this is that lasers deployed in FSO that utilize wavelengths greater than 1,400 nm are allowed to be approximately 100 times as powerful as FSO equipment operating at 850 nm and still be considered eye safe. 
This would be the "killer app" of FSO except that the photo diode receiver technologies suffer reduced sensitivity at greater than 1,400 nm, giving back a substantial portion of the gain. Also, lasers that operate at such wavelengths are more costly and less available. Nevertheless, at least one FSO manufacturer has overcome these obstacles and currently offers equipment deploying multiple 1,550 nm lasers. With respect to infrared radiation, the absorption coefficient at the front part of the eye is much higher for longer wavelength (> 1,400 nm) than for shorter wavelength. As such, damage from the ultraviolet radiation of sunlight is more likely than from long wavelength infrared. Eye response also differs within the range that penetrates the eyeball (400 nm-1,400 nm) because the eye has a natural aversion response that makes it turn away from a bright visible light, a response that is not triggered by an (invisible) infrared wavelength longer than 0.7 μm. Infrared light can also damage the surface of the eye, although the damage threshold is higher than that for ultraviolet light. High-power laser pulses pose dangers different from those of lower-power continuous beams. A single high-power pulse lasting less than a microsecond can cause permanent damage if it enters the eye. A low-power beam presents danger only for longer-term exposure. Distance reduces laser power density, thus decreasing the potential for eye hazards. Optical wireless transceiver design FSO contains three components: transmitter, free space transmitted channel line of sight, and receiver. Transmitter is considered as an optical source 1-laser diode (LD) or 2-light emitting diode (2-LED) to transmit of optical radiation through the atmosphere follows the Beer-Lamberts's law as indicated in subsection 3.6 Eq. 34. FSO link is demonstrated as in Fig. 5. The selection of a laser source for FSO applications depends on various factors. It is important that the transmission wavelength is correlated with one of the atmospheric windows. As noted earlier, good atmospheric windows are around 850 nm and 1550 nm in the shorter IR wavelength range. In the longer IR spectral range, some wavelength windows are present between 3-5 micrometers (especially 3.5-3.6 micrometers) and 8-14 micrometers [5]. However, the availability of suitable light sources in these longer wavelength ranges is pretty limited at the present moment. In addition, most sources need low temperature cooling, which limits their use in commercial telecommunication applications. Other factors that impact the use of a specific light source include the following: • Price and availability of commercial components Electrical input is a network traffic into pulses of invisible light representing 1`s and 0`s. The transmitter, which consists of two part main parts: an interface circuit and source driver circuit, converts the input signal to an optical signal suitable for transmission. The drive circuit of the transmitter transforms the electrical signal to an optical signal by varying the current follow through the light source. Transmitter function is to project the carefully aimed light pulses into the air. This optical light source can be of two types: A laser diode (LD). The information signal modulates the field generated by the optical source. The modulated optical field then propagates through a free-space path before arriving at the receiver. 
In the receiver side, transmitted data realizes inverse operations i.e., photo detector converts the optical signal back into an electrical form as indicated in previous figure. In other words, a receiver at the other end of the link collects the light using lenses and/or mirrors. Received signal converted back into fiber or cooper and connected to the network. Reverse direction data transported the same way (full duplex). We can see, anything that can be done in fiber can be done with FSO. Equation (5) illustrates the data rate of FSO system: Where P r is a received power, and η is a received power sensitivity of the receiver [photons/ bit]. Small angles -divergence angle and spot size between transmitter and receiver are presented in Fig. 6. is a divergence angle between transmitter and receiver FSO units. The geometric path loss for an FSO link depends on the beam-width of the optical transmitter, the path length (L), and the divergence angle (θ). Transmitter and receiver aperture diameters are quantifiable parameters, and are usually specified by manufacturer. Table ( Mathematical model of atmospheric turbulence The atmospheric attenuation is one of the challenges of the FSO channel, which may lead to signal loss and link failure. The atmosphere not only attenuates the light wave but also distorts and bends it. Transmitted power of the emitted signal is highly affected by scattering and turbulence phenomena. Attenuation is primarily the result of absorption and scattering by molecules and particles (aerosols) suspended in the atmosphere. Distortion, on the other hand, is caused by atmospheric turbulence due to index of refraction fluctuations. Attenuation affects the mean value of the received signal in an optical link whereas distortion results in variation of the signal around the mean. Aerosol Aerosols are particles suspended in the atmosphere with different concentrations. They have diverse nature, shape, and size. Aerosols can vary in distribution, constituents, and concentration. As a result, the interaction between aerosols and light can have a large dynamic, in terms of wavelength range of interest and magnitude of the atmospheric scattering itself. Because most of the aerosols are created at the earth's surface (e.g., desert dust particles, human-made industrial particulates, maritime droplets, etc.), the larger concentration of aerosols is in the boundary layer (a layer up to 2 km above the earth's surface). Above the boundary layer, aerosol concentration rapidly decreases. At higher elevations, due to atmospheric activities and the mixing action of winds, aerosol concentration becomes spatially uniform and more independent of the geographical location. Scattering is the main interaction between aerosols and a propagating beam. Because the sizes of the aerosol particles are comparable to the wavelength of interest in optical communications, Mie scattering theory is used to describe aerosol scattering [8]. Visibility Runway Visual Range (RVR) Visibility was defined originally for meteorological needs, as a quantity estimated by a human observer. It defined as (Kruse model) means of the length where an optical signal of 550 nm is reduced to 0.02 of its original value [10]. However, this estimation is influenced by many subjective and physical factors. The essential meteorological quantity, namely the transparency of the atmosphere, can be measured objectively and it is called the Runway Visual Range (RVR) or the meteorological optical range [11]. 
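As a brief aside (a hedged sketch, not a formula stated explicitly at this point in the chapter): the Kruse definition above, combined with the Beer-Lambert attenuation law introduced in the following paragraphs, fixes the scattering coefficient at the 550 nm reference wavelength. If V is the range over which the 550 nm signal falls to 2% of its initial value, then

\[
\exp(-\beta_{550}\,V) = 0.02 \;\;\Longrightarrow\;\; \beta_{550} = \frac{-\ln(0.02)}{V} \;\approx\; \frac{3.91}{V}\ \mathrm{km^{-1}}.
\]

This 3.91/V factor is the constant that reappears in the visibility-based expression for the scattering coefficient later in this section.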
Some values of atmospheric attenuation due to scattering based on visibility are presented in Table (3). Source: Table 3. Variation in atmospheric attenuation due scattering based on visibility (data obtained from [7,12]). When the length difference between the two optical paths varies, the energy passes through minima and maxima. The visibility V is defined by: The visibility depends on the degree of coherence of the source, on the length difference between the paths as well as on the location of the detector with respect to the source. The coherence between the various beams arriving at the detector also depends on the crossed media: for example the diffusing medium can reduce the coherence. For links referred to as "in direct sight" links, coherent sources can be used, provided that parasitic reflections do not interfere with the principal beam, inducing modulations of the detected signal [11]. Low visibility will decrease the effectiveness and availability of FSO systems, and it can occur during a specific time period within a year or at specific times of the day. Low visibility means the concentration and size of the particles are higher compared to average visibility. Thus, scattering and attenuation may be caused more in low visibility conditions [13]. Atmospheric attenuation Atmospheric attenuation is defined as the process whereby some or all of the electromagnetic wave energy is lost when traversing the atmosphere. Thus, atmosphere causes signal degradation and attenuation in a FSO system link in several ways, including absorption, scattering, and scintillation. All these effects are varying with time and depend on the current local conditions and weather. In general, the atmospheric attenuation is given by the following Beer's law equation [14]: where, τ is the atmospheric attenuation; β is the total attenuation coefficient and given as β = β abs β scat ; L is the distance between transmitter and receiver (unit: km); β abs is the molecular and aerosol absorption, this parameter value is considered as too small so, we can neglected; β scat is the molecular and aerosol scattering. Absorption Absorption is caused by the beam's photons colliding with various finely dispersed liquid and solid particles in the air such as water vapor, dust, ice, and organic molecules. The aerosols that have the most absorption potential at infrared wavelengths include water, O 2 , O 3 , and CO 2 Absorption has the effect of reducing link margin, distance and the availability of the link [15]. The absorption coefficient depends on the type of gas molecules, and on their concentration. Molecular absorption is a selective phenomenon which results in the spectral transmission of the atmosphere presenting transparent zones, called atmospheric transmission windows [11], shown in Fig. 7, which allows specific frequencies of light to pass through it. These windows occur at various wavelengths. The Atmospheric windows due to absorption are created by atmospheric gases, but neither nitrogen nor oxygen, which are two of the most abundant gases, contribute to absorption in the infrared part of the spectrum [7]. It is possible to calculate absorption coefficients from the concentration of the particle and the effective cross section such as [16,17]: An absorption lines at visible and near infrared wavelengths are narrow and generally well separated. Thus, absorption can generally be neglected at wavelength of interest for free space laser communication. 
Another reason for ignoring absorption effect is to select wavelengths that fall inside the transmittance windows in the absorption spectrum [18]. Scattering Scattering is defined as the dispersal of a beam of radiation into a range of directions as a result of physical interactions. When a particle intercepts an electromagnetic wave, part of the wave's energy is removed by the particle and re-radiated into a solid angle centered at it. The scattered light is polarized, and of the same wavelength as the incident wavelength, which means that there is no loss of energy to the particle [10]. There are three main types of scattering: (1) Rayleigh scattering, (2) Mie scattering, and (3) nonselective scattering. Figure 8 illustrates the patterns of Rayleigh, Mie and non-Selective scattering. The scattering effect depends on the characteristic size parameter x 0 , such as that x 0 = 2πr / λ, where, r is the size of the aerosol particle encountered during propagation [19]. If x 0 < < 1, the backward lobe becomes larger and the side lobes disappear as shown in Fig. 8 [20] and the scattering process is termed as Rayleigh scattering. If x 0 ≈ 1, the backward lobe is symmetrical with the forward lobe as shown in Fig. 8 and then it is Mie scattering. For x 0 > > 1, the particle presents a large forward lobe and small side lobes that start to appear as shown in Fig. 8 [20] and the scattering process is termed as non-selective scattering. The scattering process for different scattering particles present in the atmosphere is summarized in Table (4) [21]. It is possible to calculate the scattering coefficients from the concentration of the particles and the effective cross section such as [16]: Where: β scat : is either Rayleigh (molecular) β m or Mie (aerosols) β a scattering. Rayleigh (molecular) scattering Rayleigh scattering refers to scattering by molecular and atmospheric gases of sizes much less than the incident light wavelength. The Rayleigh scattering coefficient is given by [16]: Where: Rayleigh scattering cross section is inversely proportional to fourth power of the wavelength of incident beam (λ -4 ) as the following relationship: Where: n: is the index of refraction. The result is that Rayleigh scattering is negligible in the infrared waveband because Rayleigh scattering is primarily significant in the ultraviolet to visible wave range [10]. MIE (Aerosol) scattering Mie scattering occurs when the particle diameter is equal or larger than one-tenth the incident laser beam wavelength, see Table 4. Mie scattering is the main cause of attenuation at laser wavelength of interest for FSO communication at terrestrial altitude. Transmitted optical beams in free space are attenuated most by the fog and haze droplets mainly due to dominance of Mie scattering effect in the wavelength band of interest in FSO (0.5 μm -2 μm). This makes fog and haze a keys contributor to optical power/irradiance attenuation. The attenuation levels are too high and obviously are not desirable [22]. The attenuation due to Mie scattering can reach values of hundreds of dB/km [19,23] (with the highest contribution arising from fog). The Mie scattering coefficient expressed as follows [10]: β a = α a N a 1 / km Where: An aerosol's concentration, composition and dimension distribution vary temporally and spatially varying, so it is difficult to predict attenuation by aerosols. 
Although their concentration is closely related to the optical visibility, there is no single particle dimension distribution for a given visibility [24]. Due to the fact that the visibility is an easily obtainable parameter, either from airport or weather data, the scattering coefficient β a can be expressed according to visibility and wavelength by the following expression [11]: Where: V : is the visibility (Visual Range) km . λ: is the incident laser beam wavelength μm . i: is the size distribution of the scattering particles which typically varies from 0.7 to 1.6 corresponding to visibility conditions from poor to excellent. Since we are neglecting the absorption attenuation at wavelength of interest and Rayleigh scattering at terrestrial altitude and according to Eq. 8 and Eq. 11 then: The atmospheric attenuation τ is given as: The atmospheric attenuation in dB, τ can be calculated as follows: Rain Rain is formed by water vapor contained in the atmosphere. It consists of water droplets whose form and number are variable in time and space. Their form depends on their size: they are considered as spheres until a radius of 1 mm and beyond that as oblate spheroids: flattened ellipsoids of revolution [11]. Rainfall effects on FSO systems: Scattering due to rainfall is called non-selective scattering, this is because the radius of raindrops (100 -1000 μm) is significantly larger than the wavelength of typical FSO systems. The laser is able to pass through the raindrop particle, with less scattering effect occurring. The haze particles are very small and stay longer in the atmosphere, but the rain particles are very large and stay shorter in the atmosphere. This is the primary reason that attenuation via rain is less than haze [24]. An interesting point to note is that RF wireless technologies that use frequencies above approximately 10 GHz are adversely impacted by rain and little impacted by fog. This is because of the closer match of RF wavelengths to the radius of raindrops, both being larger than the moisture droplets in fog [14]. The rain scattering coefficient can be calculated using Stroke Law [25]: Where: a: is the radius of raindrop, (cm). N a : is the rain drop distribution, ( cm -3 ) . Q scat : is the scattering efficiency. The raindrop distribution N a can be calculated using equation following: Where: R: is the rainfall rate (cm/s), V a : is the limit speed precipitation. The rain attenuation can be calculated by using Beer's law as: τ = exp (-β rain scat L ) For more details about several weather conditions and the corresponding visibility at various wavelengths readers can refer to references [26,27]. Turbulence Clear air turbulence phenomena affect the propagation of optical beam by both spatial and temporal random fluctuations of refractive index due to temperature, pressure, and wind variations along the optical propagation path [28,29]. Atmospheric turbulence primary causes phase shifts of the propagating optical signals resulting in distortions in the wave front. These distortions, referred to as optical aberrations, also cause intensity distortions, referred to as scintillation. Moisture, aerosols, temperature and pressure changes produce refractive index variations in the air by causing random variations in density. These variations are referred to as eddies and have a lens effect on light passing through them. 
When a plane wave passes through these eddies, parts of it are refracted randomly causing a distorted wave front with the combined effects of variation of intensity across the wave front and warping of the isophase surface [30]. The refractive index can be described by the following relationship [31]: Where: P : is the atmospheric pressure in mbar . T : is the temperature in Kelvin K . If the size of the turbulence eddies are larger than the beam diameter, the whole laser beam bends, as shown in Fig. 9. If the sizes of the turbulence eddies are smaller than the beam diameter and so the laser beam bends, they become distorted as in Fig. 10. Small variations in the arrival time of various components of the beam wave front produce constructive and destructive interference and result in temporal fluctuations in the laser beam intensity at the receiver see Fig. 10. Refractive index structure Refractive index structure parameter C n 2 is the most significant parameter that determines the turbulence strength. Clearly, C n 2 depends on the geographical location, altitude, and time of day. Close to ground, there is the largest gradient of temperature associated with the largest values of atmospheric pressure (and air density). Therefore, one should expect larger values C n 2 at sea level. As the altitude increases, the temperature gradient decreases and so the air density with the result of smaller values of C n 2 [8]. In applications that envision a horizontal path even over a reasonably long distance, one can assume C n 2 to be practically constant. Typical value of C n 2 for a weak turbulence at ground level can be as little as 10 -17 m -2/3 , while for a strong turbulence it can be up to 10 -13 m -2/3 or larger. However, a number of parametric models have been formulated to describe the C n 2 profile and among those, one of the more used models is the Hufnagel-Valley [32] given by: Where: h : is the altitude in m]. v: is the wind speed at high altitude m / s . The most important variable in its change is the wind and altitude. Turbulence has three main effects ; scintillation, beam wander and beam spreading. Scintillation Scintillation may be the most noticeable one for FSO systems. Light traveling through scintillation will experience intensity fluctuations, even over relatively short propagation paths. The scintillation index, σ i 2 describes such intensity fluctuation as the normalized variance of the intensity fluctuations given by [8,14]: Where C n 2 is the refractive index structure, k = 2π / λ is the wave number (an expression suggests that longer wavelengths experience a smaller variance), and l is the link range (m). Where the Eq. 26 is valid for the condition of weak turbulence mathematically corresponding to σ i 2 < 1. Expressions of lognormal field amplitude variance depend on: the nature of the electromagnetic wave traveling in the turbulence and on the link geometry [8]. Beam spreading Beam spreading describes the broadening of the beam size at a target beyond the expected limit due to diffraction as the beam propagates in the turbulent atmosphere. Here, we describe the case of beam spreading for a Gaussian beam, at a distance l from the source, when the turbulence is present. Then one can write the irradiance of the beam averaged in time as [33]: Where: P o : is total beam power in W r : is the radial distance from the beam center The beam will experience a degradation in quality with a consequence that the average beam waist in time will be ω eff (l) > ω(l). 
To quantify the amount of beam spreading, the long-term average effective beam waist can be described as ω_eff(l)² = ω(l)² (1 + T), where ω(l) is the beam waist after a propagation distance l, written in terms of ω_o, the initial beam waist at l = 0, and T is the additional spreading of the beam caused by the turbulence. As with the other turbulence figures of merit, T depends on the strength of the turbulence and on the beam path. In particular, for a horizontal path one obtains an expression for T in terms of the parameter Λ, which is set by the beam geometry. The effective waist ω_eff(l) describes the variation of the beam irradiance averaged over the long term, and ω_eff(l)² likewise depends on the turbulence strength and beam path. Evidently, because ω_eff(l) > ω(l), the beam experiences a loss that at the beam center equals: L_BE = 20 log10(ω(l) / ω_eff(l)) (32)

Geometric Losses (GL)

The geometric path loss for an FSO link depends on the beam-width of the optical transmitter θ, its path length L and the area of the receiver aperture A_r. The transmitter power P_t is spread over an area of π(Lθ)²/4. Geometric loss is the ratio of the surface area of the receiver aperture to the surface area of the transmitter beam at the receiver. Since the transmitted beam spreads continuously with increasing range at a rate determined by the divergence, geometric loss depends primarily on the divergence as well as the range and can be determined by the formula stated in Eq. (33) [2], where d_2 is the receiver aperture diameter (unit: m); d_1 is the transmitter aperture diameter (unit: m); θ is the beam divergence (unit: mrad); and L is the link range (unit: m). Geometric path loss is present for all FSO links and must always be taken into consideration in the planning of any link. This loss is a fixed value for a specific FSO deployment scenario; it does not vary with time, unlike the loss due to rain attenuation, fog, haze or scintillation.

Total attenuation

Atmospheric attenuation of an FSO system is typically dominated by haze and fog and is also dependent on rain. The total attenuation is a combination of atmospheric attenuation and geometric loss. Total attenuation for an FSO system is actually very simple at a high level (leaving out optical efficiencies, detector noise, etc.) and is given by Eq. (34) [34], where P_t is the transmitted power (unit: mW); P_r is the received power (unit: mW); θ is the beam divergence (unit: mrad); and β is the total scattering coefficient (unit: km⁻¹). According to Eq. (34), the variables that can be controlled are the aperture size, the beam divergence and the link range. The scattering coefficient is uncontrollable in an outdoor environment. In real atmospheric situations, for availabilities of 99.9% or better, the system designer can choose to use high transmitter laser powers, large receiver apertures, small transmitter apertures and a small beam divergence. Another controllable variable is the link range, which must be kept short enough that atmospheric attenuation does not dominate the total attenuation [35]. A short numerical sketch of Eqs. (33) and (34) is given below.
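To make Eqs. (33) and (34) concrete, the following Python sketch combines a visibility-based (Kruse-type) scattering coefficient, the geometric loss from the aperture and divergence geometry, and the Beer-Lambert term into a single total-attenuation estimate. It is a minimal illustration under stated assumptions: the formulas are written in their commonly quoted forms rather than copied from the chapter's equations, and all numerical inputs are hypothetical, not the design values used later in this chapter.

import math

def size_distribution_exponent(V_km):
    # Kruse-type size-distribution exponent; the chapter only states that it
    # varies from about 0.7 to 1.6 with visibility, so this piecewise choice
    # is one common convention, not the chapter's exact definition.
    if V_km > 50.0:
        return 1.6
    if V_km > 6.0:
        return 1.3
    return 0.585 * V_km ** (1.0 / 3.0)

def scattering_coefficient(V_km, wavelength_nm):
    # Visibility-based scattering coefficient beta_scat in 1/km.
    q = size_distribution_exponent(V_km)
    return (3.91 / V_km) * (wavelength_nm / 550.0) ** (-q)

def geometric_loss_db(d1_m, d2_m, theta_mrad, L_km):
    # Beam diameter at the receiver over the receiver aperture diameter, in dB.
    beam_diameter_m = d1_m + theta_mrad * 1e-3 * L_km * 1e3
    return 20.0 * math.log10(beam_diameter_m / d2_m)

def atmospheric_loss_db(V_km, wavelength_nm, L_km):
    # Beer-Lambert term exp(-beta*L) expressed in dB (about 4.343 * beta * L).
    beta = scattering_coefficient(V_km, wavelength_nm)
    return 10.0 * beta * L_km * math.log10(math.e)

def total_attenuation_db(d1_m, d2_m, theta_mrad, L_km, V_km, wavelength_nm):
    return (geometric_loss_db(d1_m, d2_m, theta_mrad, L_km)
            + atmospheric_loss_db(V_km, wavelength_nm, L_km))

# Hypothetical example: 2.5 cm / 8 cm apertures, 1 mrad divergence,
# 1 km link, 5 km visibility, 1550 nm.
print(round(total_attenuation_db(0.025, 0.08, 1.0, 1.0, 5.0, 1550.0), 1), "dB")

As the chapter notes, only the aperture sizes, divergence and range in such a calculation are under the designer's control; the scattering coefficient follows the weather.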
The strength of scintillation can be measured in terms of the variance of the beam amplitude or irradiance, σ_i², given by Eq. (35); in its standard plane-wave (Rytov) form this is σ_i² = 1.23 C_n² k^(7/6) l^(11/6). Here, k = 2π/λ is the wave number (the expression suggests that longer wavelengths experience a smaller variance), and C_n² is the refractive index structure parameter. Expressions for the lognormal field amplitude variance depend on the nature of the electromagnetic wave traveling in the turbulence and on the link geometry. In this chapter, we do not take atmospheric turbulence into account, because its influence in the Yemeni climate is negligible; its effect is small compared with those of visibility and geometric loss. Therefore, we have taken into account only the total attenuation, which depends on visibility and geometric loss. An FSO communication system is influenced by atmospheric attenuation, which limits its performance and reliability; fog, haze, rainfall and scintillation all have a harmful effect on the FSO system. The majority of the scattering experienced by the laser beam is Mie scattering, which is due to the fog and haze aerosols present in the atmosphere and can be calculated from the visibility. FSO attenuation in thick fog can reach values of hundreds of dB/km. Thick fog reduces the visibility range to less than 50 m and can degrade the performance of an FSO link even over short distances. Rain scattering (non-selective scattering) is wavelength independent; it does not introduce significant attenuation in wireless infrared links and mainly affects microwave and radio systems that transmit energy at longer wavelengths. Turbulence has three main effects: scintillation, laser beam spreading and laser beam wander. Scintillation is due to variations in the refractive index of air; light traveling through it experiences intensity fluctuations. The geometric loss depends on the FSO component design, such as the beam divergence and the aperture diameters of both transmitter and receiver. The total attenuation depends on atmospheric attenuation and geometric loss. To reduce the total attenuation, the FSO system must be designed so that the effects of geometric loss and atmospheric attenuation are small. The following section explores the simulation results of geometric loss and total attenuation for the Yemeni climate.

Optical link budget

To calculate the FSO link budget, several parameters are taken into account, such as geometric loss, link margin, received power and bit error rate. The received power is lower than the transmitted power from the source: it equals the transmitted power minus the total loss. In the basic free-space channel, the optical field generated at the transmitter propagates only with an associated beam-spreading loss. For this system the performance can be determined directly from the power flow. The signal power received, P_Rx [W], depends on the transmit power P_Tx [W], the transmit and receive antenna gains G_Tx and G_Rx, and the total loss [36]: P_Rx = P_Tx + G_Tx + G_Rx - total loss (36), with all quantities expressed in decibel units. Table (5) shows the values of transmitted output power for diffuse and tracked topology (data obtained from [7]); a short numerical sketch of this budget is given below, before we return to the link-distance expression from that reference.
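A minimal numerical sketch of the decibel-domain budget in Eq. (36) follows; every value is invented for illustration and is not taken from Table 5.

# Illustrative FSO link budget in decibel units (all values are hypothetical).
p_tx_dbm     = 10.0   # transmit power, 10 dBm = 10 mW
g_tx_db      = 0.0    # transmit telescope gain folded into the design
g_rx_db      = 0.0    # receive telescope gain folded into the design
geometric_db = 12.0   # geometric loss for the assumed apertures and range
atmos_db     = 9.0    # atmospheric loss for the assumed visibility
margin_db    = 3.0    # link margin for pointing error, optics and aging

p_rx_dbm = p_tx_dbm + g_tx_db + g_rx_db - (geometric_db + atmos_db + margin_db)
print(p_rx_dbm, "dBm")   # -14 dBm with these assumed values

The link closes when the resulting received power stays above the receiver sensitivity required for the target bit rate and BER.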
In the indicated reference, they presented an expression to calculate the link distance L achievable from direct line propagation: Here, P t represent the optical output power from the transmitter (in mW), A r is the active area of the photodetector, T 1 is the transmittivity of the transmitter filter, T 2 is the transmittivity of the filter at the receiver, P rm is the optical power required (in mW) to obtain a specific carrierto-noise ratio at the receiver, and ∅ is the half angle of the energy related by optical source. From this expression, they calculate achievable distances (depending on the FOV), which in their case covered a range of between 10 and 20 m. BER and SNR Both SNR and BER are used to assess the quality of communication systems. BER performance depends on the average received power, the scintillation strength, and the receiver noise. With an appropriate design of aperture averaging, the received optical power could be increased and the effect of the scintillation can be dumped. With turbulence, the SNR is expressed as follows [37]: For FSO links with an on off keying modulation scheme in BER can be written as In our model, we have assumed that the surface area of the photo detector is large enough so that the effective SNR includes the beam spreading effect, thus the effective SNR is defined as The performance and reliability of FSO communication systems are affected and limited by atmospheric attenuation. It has a harmful effect by haze, rainfall, fog, and scintillation has a harmful effect of FSO system. The majority of the scattering occurred to the laser beam is due to the Mie scattering. This scattering is due to the fog and haze aerosols existed at the atmosphere. This scattering is calculated through visibility. FSO attenuation at thick fog can reach values of hundreds dB. Thick fog reduces the visibility range to less than 50 m, and it can affect on the performance of FSO link for distances as small. The rain scattering (non-selective scattering) is wavelength independent and it does not introduce a significant attenuation in wireless IR links, it affect mainly on microwave and radio systems that transmit energy at longer wavelengths. There are three effects on turbulence: scintillation, laser beam spreading and laser beam wander. Scintillation is due to variation in the refractive index structure of air, so if the light traveling through scintillation, it will experience intensity fluctuations. The Geometric loss depends on FSO components design such as beam divergence, aperture diameter of both transmitter and receiver. The total attenuation depends on atmospheric attenuation and Geometric loss. In order to reduce total attenuation, FSO system must be designed so that the effect of geometric loss and atmospheric attenuation is small. Practical part: Case study In this chapter, we will take Yemeni climate as a case study to study and analyze the practical part of FSO system by series of simulations obtained results. Geometric loss This part illustrates the effects of geometric loss on the performance of FSO system. We calculated the value of geometric loss using Eq. (33) assuming that the link range is 1 km and beam divergence is 1 mrad at two different designs, which are considered as particular design specifications shown in Table 6, due to particular implementation especially based on the existing product available in the industry [38,39]. Table 6. Diameters of transmitter and receiver aperture of an FSO system. 
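Before walking through the figures, the following Python sketch shows how a parameter sweep over Eq. (33) can be generated. The two aperture pairs below are placeholders chosen only to illustrate the sweep; the actual values of design 1 and design 2 are those listed in Table 6.

import math

def geometric_loss_db(d1_m, d2_m, theta_mrad, L_km):
    # Eq. (33)-style geometric loss: beam diameter at the receiver over
    # the receiver aperture diameter, expressed in dB.
    beam_m = d1_m + theta_mrad * 1e-3 * L_km * 1e3
    return 20.0 * math.log10(beam_m / d2_m)

# Placeholder designs (transmitter, receiver aperture in metres) -- NOT Table 6.
designs = {"design A": (0.025, 0.10), "design B": (0.05, 0.20)}

for name, (d1, d2) in designs.items():
    for L_km in (0.5, 1.0, 2.0, 5.0):
        gl = geometric_loss_db(d1, d2, theta_mrad=0.025, L_km=L_km)
        print(f"{name}: L = {L_km:.1f} km -> geometric loss = {gl:+.1f} dB")

The sweep reproduces the qualitative trends discussed next: the loss grows with range, divergence and transmitter aperture, and shrinks as the receiver aperture grows.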
There are a number of parameters that control geometric loss: the transmission range, the diameters of the transmitter and receiver apertures, and the laser beam divergence. These parameters also contribute to the design of an FSO system, so that it remains usable during bad weather conditions.

Figure 11. Geometric loss (dB) versus link range (km).
Figure 12. Geometric loss (dB) versus divergence angle (mrad).

Figure 11 shows the geometric loss versus link range using the values presented in Table 6 and a divergence angle of about 0.025 mrad. The link range spans 0.5 to 5.0 km. Geometric loss grows with link range: the longer the link, the larger the geometric loss. As demonstrated in Fig. 11, the geometric loss is 1.3 dB at 0.5 km for design 1 and -3.4 dB for design 2, while at a distance of 5 km the geometric loss reaches 8.2 dB for design 1 and 7.2 dB for design 2. Figure 12 illustrates the geometric loss versus the divergence angle, which ranges from 0.025 to 0.07 mrad. Geometric loss grows with divergence angle: as the divergence angle increases, so does the geometric loss. For a 0.025 mrad divergence angle, the geometric loss is about 1.93 dB for design 1 and -10.5 dB for design 2. For a 0.07 mrad divergence angle, the geometric loss is about 4.6 dB for design 1 and -5.6 dB for design 2. This means that by using a small laser beam divergence angle in FSO systems, the geometric loss is minimized. Figure 13 demonstrates the geometric loss versus the transmitter aperture diameter using the values presented in Table 1, a divergence angle of about 0.025 mrad and a link range of 1 km. The transmitter aperture diameter ranges from 2 to 22 cm. The figure shows that the geometric loss rises as the transmitter aperture diameter increases. For a transmitter aperture diameter of 2 cm, the geometric loss is about -7 dB for design 1 and -3.87 dB for design 2. For a transmitter aperture diameter of 20 cm, the geometric loss is about 7.7 dB for design 1 and 10.2 dB for design 2. This means that a small transmitter aperture diameter is suggested to minimize the geometric loss in FSO systems. Figure 14 indicates the geometric loss versus the receiver aperture diameter using the values presented in Table 7, a divergence angle of about 0.025 mrad and a link range of 1 km. When the receiver aperture diameter increases, the geometric loss decreases. For a receiver aperture diameter of 2 cm, the geometric loss is about 14.4 dB for design 1 and 9.5 dB for design 2. For a receiver aperture diameter of 20 cm, the geometric loss is about -5.6 dB for design 1 and -10.5 dB for design 2. That is to say, a large receiver aperture diameter should be used to reduce the geometric loss in FSO systems. The results of the geometric losses for the design parameters are presented in Table 7. We note that the geometric loss for small receiver aperture diameters is high compared with the larger ones, because the receiver aperture is then smaller than the transmitter aperture. As a result, the transmitter aperture diameter should be smaller than that at the receiver side.

Total attenuation

Total attenuation depends on the attenuation arising on hazy and rainy days and on the geometric loss.
The attenuation in hazy days depends on visibility, while during rainy days it would be determined by rainfall rate. Visibility range changes with the quantity and density of particles, such as fog, haze and dust attached to air. The higher the density of these particles is, the less visibility is and total attenuation increases. The density of these particles is not fixed. It keeps varying with time and place as well as rainfall. The quantity and density of these particles are unpredictable, and visibility and rainfall rate are also uncontrollable. Thus, they are all not part of FSO design. We can control the value of geometric loss, because it depends on fixed parameters such as transmitter diameter and receiver apertures, transmission range, and beam divergence. During the design of FSO system, geometric loss must be at minimum to reduce the effect of total attenuation on FSO system. In this part, we used design 2 as demonstrated in Table 6 to calculate total attenuation because this design geometric loss is less as described above in Section 4.1.1. Figure 15 shows total attenuation at low visibility. We used Eq. (8) to plot Fig. 15 Note that when visibility is 9.7 km, total attenuation are of 15.39, 15.28 and 14.7 dB at wavelengths of 780, 850 and 1550 nm, respectively. Based on the previously mentioned, we conclude that total attenuation at wavelength 1550 nm is less than that at wavelengths of 780 and 850 nm. Therefore, to reduce the effect of total attenuation during hazy days, we use the wavelength of 1550 nm. Figure 17 represents total attenuation versus link range. From this figure, we found that total attenuation directly proportions with link range. When link range is 0.5 km, total attenuation is 17.3, 16.9 and 14.6 dB at wavelengths of 780, 850 and 1550 nm, respectively. In addition, when link range is 5 km, total attenuation becomes 115.0, 111.8 and 88.4 dB at wavelengths of 780, 850 and 1550 nm, respectively. Therefore, to reduce the effect of total attenuation on FSO, the distance between the transmitter and receiver shall be small. Figure 18 shows the relationship between total attenuation and laser beam divergence for three wavelengths. With increasing the beam divergence, the total attenuations are increased for three cases as demonstrated in Fig. 18. That means when the beam divergence at 1 mrad the total attenuations 32, 31, and 26 for wavelengths 780, 850, and 1550 nm, respectively. While at beam divergence of 10 mrad, we noticed that the total attenuation was increased 51.1, 50.8, and 46 dB for three previously indicated wavelengths. Therefore, to reduce atmospheric attenuation, the beam divergence should be small in accordance with the previous results. Table 8 shows the results of total attenuation for design parameters at hazy days. Figure 19 shows the total attenuation versus rainfall rate. It can be seen obviously that the influence of attenuation on transmission of FSO systems is more prominent during heavy rainfall compared to moderate and light rainfall. Figure 20 indicates the total attenuation versus link range. The atmospheric attenuation is proportional to link range, which showed that when the link range increases, the total attenuation would increase as well. The results of total attenuation for design parameters at rainy days are presented in Table 9. Table 9. Results of total attenuation for design parameters at rainy days parameters. 
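For a quick cross-check on the rainy-day numbers, a widely cited empirical fit for rain attenuation in FSO links, often attributed to Carbonneau, can be used. It is not the Stokes-law derivation used above, and the rain-rate values below are illustrative only.

def rain_attenuation_db_per_km(rain_rate_mm_per_h):
    # Empirical (Carbonneau-type) fit: specific attenuation in dB/km
    # as a function of the rain rate in mm/h.
    return 1.076 * rain_rate_mm_per_h ** 0.67

for R in (2.5, 12.5, 25.0, 50.0):   # light to very heavy rain (illustrative)
    print(f"R = {R:5.1f} mm/h -> about {rain_attenuation_db_per_km(R):4.1f} dB/km")

Even at heavy rain rates the per-kilometre loss stays far below the hundreds of dB/km quoted for thick fog, which is consistent with the chapter's conclusion that rain is a secondary effect for FSO.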
Simulation results and discussion of haze effects on FSO in Sana'a, Aden, and Taiz cities

In this section, we study the effect of the atmospheric attenuation and the scattering coefficient on the performance of an FSO system in the environments of Sana'a, Aden and Taiz cities. Sana'a city: the low-visibility results for Sana'a are shown in Fig. 21. Aden city: the low-visibility results for Aden are shown in Fig. 23. Taiz city: the low-visibility results for Taiz are shown in Fig. 25. Table 10 shows the results of the scattering coefficient and atmospheric attenuation at low visibility for Sana'a, Aden and Taiz cities. The results show that, as the wavelength increases, the scattering coefficient and atmospheric attenuation decrease for the three cases studied in this chapter.

Optical link budget

After illustrating the geometric loss, total attenuation, and haze effects on FSO in Sana'a, Aden and Taiz cities, we return to the link budget of FSO systems. This section concentrates on received power versus low and average visibility, and versus link range. The results presented in Fig. 27 show the relationship between the received power at three different wavelengths and low visibility. As seen in Fig. 27, the received power increases as the visibility improves. The received power obtained at the 1550 nm wavelength is the best of the three. For example, the received power curve for the 1550 nm wavelength increases from -67 dBm at 0.6 km to -27 dBm at 5 km, whereas the received power is lower for the other two wavelengths of 780 and 850 nm. As shown in Fig. 28, the received power at 1550 nm is again the best of the three wavelengths, while the received powers at 780 and 850 nm are lower. Figure 29 shows the received power versus the link range. As the link range between transmitter and receiver increases, the received power decreases. At a distance of 0.5 km, the received power for the 1550 nm wavelength is -20.3 dBm, whereas it is about -21.7 dBm for the other two. At a distance of 5 km, the received power reaches -36 dBm for the 780 nm wavelength and -34.1 dBm for the 850 nm wavelength. For all three study cases, to improve the efficiency of FSO systems, the 1550 nm wavelength should be used and the distance between transmitter and receiver should be reduced.

Simulation results of BER and SNR

The data were taken from the Civil Aviation and Meteorology Authority and the Yemeni Meteorological Service, and the work includes the analysis of these real data. The purpose here is to discuss the relationships for calculating the variance, SNR, and BER over a range of parameters. We used the wavelengths of 850, 1000, and 1550 nm. Particular attention was given to the 1550 nm wavelength since it is commonly used as the third window of optical communication backbone links. Moreover, because it is significantly longer than visible wavelengths, the human retina in particular, and the components of the eye in general, are less sensitive to the 1550 nm wavelength; thus, this wavelength is appropriate for eye safety. The beam size expands with turbulence along the transmission range, as indicated in Fig. 33: the higher the turbulence, the greater the expansion of the beam size. Figure 34 shows the SNR versus a transmission range of 0 to 4500 m. As the link range between the transmitter and receiver increases, the SNR decreases. This means that increasing the link range degrades the transmission quality and efficiency of FSO systems.
At a low range of 200 m, the SNR is about 74 dB at 850 nm, 77 dB at 1000 nm, and 82 dB at 1550 nm, while at 4000 m it is about 18 dB at 850 nm, 21 dB at 1000 nm, and 26 dB at 1550 nm. Figure 35 shows the SNR versus the expansion of the beam size resulting from air turbulence. For a beam size of ω_eff(l) = 0.015 m, SNR = 64 dB and SNR_eff = 62 dB, whereas for ω_eff(l) = 0.33 m, SNR = 4.7 dB and SNR_eff = 3.4 dB. From these results, we conclude that when the beam expands, the loss of beam intensity increases; this decreases the SNR and therefore increases the BER, as indicated in Fig. 36. For a spot size of 0.015 m, the BER is about 10⁻¹¹⁵, and when the spot size of the beam grows to 0.33 m, the BER increases to approximately 10⁻⁵. From these results, we conclude that a narrow beam limits the effect of atmospheric turbulence on the received intensity. Figure 37 shows the BER versus the link range between transmitter and receiver and also represents the BER as a function of the irradiance variance. At 3500 m, the BER is 10⁻⁶ based on the SNR and 10⁻⁵ based on the SNR_eff; for an irradiance variance of 0.05, the BER is likewise 10⁻⁶ for the SNR and 10⁻⁵ for the SNR_eff. From the results obtained, we conclude that, to improve the performance of FSO transmission systems, it is recommended to shorten the link range between transmitter and receiver. Another improvement of the signal quality offered by FSO systems is the use of the 1550 nm wavelength: the SNR of FSO systems at 1550 nm is higher than that at 1000 and 850 nm. To reduce the atmospheric turbulence effects on FSO systems, we therefore suggest using the 1550 nm wavelength. Moreover, at 1550 nm the allowable transmit power is much higher than at shorter wavelengths (about 50 times higher than at 850 nm), which means the system can keep operating during heavier atmospheric attenuation, since the power at the source can be safely increased.
Summary
Recently, telecommunications and computer networking have been moving toward optical communication, because light is the fastest medium for transmitting huge amounts of information and because of its cost effectiveness, flexibility, quick deployment, and the promise of optical bandwidth. FSO is considered an alternative choice that can be employed as a reliable solution for broadband short-distance applications, and mobile operators are planning to use this technology for short-distance links. In this chapter, we briefly introduced the concepts of FSO technology and the mathematical approach behind it. In the practical part, we took the climate effects on the deployment of FSO in Yemeni territory as a case study. We studied in detail the total attenuation influencing FSO systems; in this case study it depends on two contributions: scattering loss and geometric loss. This work concentrated on the two different designs demonstrated in Table 6. The results showed that the geometric loss increases when the link range, divergence angle, and transmitter aperture increase, but decreases with an increasing receiver aperture diameter. The total attenuation also increases with increasing link distance, with decreasing visibility, and with decreasing wavelength. It was also shown that the effect of rainfall on the FSO system performance was so small that it can be neglected. In general, at low visibility the FSO performance was worse in Taiz than in Sana'a and Aden.
However, at average visibility, the FSO performance was effective in all three cities. We then concentrated on the scintillation effects on the performance of FSO links. The analysis was carried out for the variance, SNR, and BER in the environment of Yemen. Scintillation in the Yemeni environment is wavelength and distance dependent. The 1550 nm wavelength turned out to be of particular interest, since it is less sensitive to atmospheric turbulence and harmless to the human eye. The results indicate that the performance of the FSO system remains good during the worst conditions in Yemen. To improve the transmission efficiency of FSO systems, the 1550 nm wavelength must be used and the distance between transmitter and receiver must be reduced. To achieve a BER of 10⁻⁹ during air turbulence, the distance between transmitter and receiver should be 2600 m. Thus, the FSO system may be applied efficiently in Yemeni territory, even in the presence of air turbulence.
13. What about the climate effect on FSO performance?
14. In figure 5, what is meant by free space and how are its losses calculated?
16. Calculate the antenna gain of an FSO product if the wavelength is 1550 nm.
17. What is meant by the divergence angle?
18. How are FSO links connected to the network?
19. Is FSO characterized as cost effective, and why?
20. Is FSO only deployed on rooftops?
21. What are the advantages of using infrared communication instead of other line-of-sight (LOS) radio relay links?
22. What transmitted power is recommended for FSO products?
23. Could you design an FSO link and calculate the link budget?
24. Calculate the received power of the FSO link demonstrated in figure 6.
25. What is meant by focal length?
26. An observer is looking at a telecommunication tower 20 m high at a distance of 120 m, and the distance between the optical center of the eye lens and the fovea is 17 mm. a. Sketch the eye looking at the tower. b. Calculate the size of the retinal image of the tower (h) that falls primarily on the fovea.
16,220.4
2014-11-26T00:00:00.000
[ "Engineering", "Physics" ]
Limits of Principal Components Analysis for Producing a Common Trait Space: Implications for Inferring Selection, Contingency, and Chance in Evolution Background Comparing patterns of divergence among separate lineages or groups has posed an especially difficult challenge for biologists. Recently a new, conceptually simple methodology called the “ordered-axis plot” approach was introduced for the purpose of comparing patterns of diversity in a common morphospace. This technique involves a combination of principal components analysis (PCA) and linear regression. Given the common use of these statistics the potential for the widespread use of the ordered axis approach is high. However, there are a number of drawbacks to this approach, most notably that lineages with the greatest amount of variance will largely bias interpretations from analyses involving a common morphospace. Therefore, without meeting a set of a priori requirements regarding data structure the ordered-axis plot approach will likely produce misleading results. Methodology/Principal Findings Morphological data sets from cichlid fishes endemic to Lakes Tanganyika, Malawi, and Victoria were used to statistically demonstrate how separate groups can have differing contributions to a common morphospace produced by a PCA. Through a matrix superimposition of eigenvectors (scale-free trajectories of variation identified by PCA) we show that some groups contribute more to the trajectories of variation identified in a common morphospace. Furthermore, through a set of randomization tests we show that a common morphospace model partitions variation differently than group-specific models. Finally, we demonstrate how these limitations may influence an ordered-axis plot approach by performing a comparison on data sets with known alterations in covariance structure. Using these results we provide a set of criteria that must be met before a common morphospace can be reliably used. Conclusions/Significance Our results suggest that a common morphospace produced by PCA would not be useful for producing biologically meaningful results unless a restrictive set of criteria are met. We therefore suggest biologists be aware of the limitations of the ordered-axis plot approach before employing it on their own data, and possibly consider other, less restrictive methods for addressing the same question. Introduction Determining the relative contributions of natural selection, historical contingency, and chance events in evolutionary radiations has been a longstanding challenge in biology, especially from a quantitative perspective. In a recent article from PLoS One [1], Young et al. introduce a modified methodology of principal components analysis (PCA) combined with linear regression called 'ordered-axis plots' to test whether radiations of African rift lake cichlids display differences in diversity and patterns of convergence, or non-convergence centered around a common mean. Using this method a single PCA is first carried out on equally sized groups simultaneously in order to create a common trait space, secondly PC scores on each axis are ordered from highest to lowest for each group, and third ordered axes are plotted and tested for differences in slope (indicating differences in variance) using linear regression. The authors make a compelling case from their analysis that African cichlids have evolved along similar axes, and that diversity is age-ordered with lower diversity existing in the youngest radiation from Lake Victoria. 
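For readers unfamiliar with the procedure, a minimal Python sketch of the three steps, as we read them from [1], is given below. The simulated trait data, group names, and the choice to regress one group's ordered PC1 scores on another's are our own illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy import stats

rng = np.random.default_rng(42)

# Equal-sized trait samples for three hypothetical groups (e.g. lake assemblages).
groups = {name: rng.normal(scale=s, size=(50, 8))
          for name, s in [("Tanganyika", 1.5), ("Malawi", 1.2), ("Victoria", 0.8)]}

# Step 1: one PCA on all groups pooled -> a common trait space.
pooled = np.vstack(list(groups.values()))
scores = PCA(n_components=2).fit_transform(pooled)

# Steps 2-3: order each group's PC1 scores, then regress one ordered vector on
# another; a slope near 1 suggests similar variance, an intercept near 0 a shared mean.
ordered = {name: np.sort(scores[i * 50:(i + 1) * 50, 0])[::-1]
           for i, name in enumerate(groups)}
slope, intercept, r, p, se = stats.linregress(ordered["Tanganyika"], ordered["Victoria"])
print(f"slope={slope:.2f}, intercept={intercept:.2f}, p={p:.3g}")
```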
Although this study may appear methodologically appealing given the ease with which PCA and linear regression can be combined to produce the 'ordered-axis plot' approach, we feel it is important to highlight the major limitations this method introduces that can lead to inaccurate conclusions about patterns of evolutionary diversification. PCA is one of the more straightforward multivariate methods and is primarily used to reduce dimensionality in data sets by 'concentrating' variation into fewer uncorrelated variables. This process relies on identifying eigenvectors, the scale-free trajectories that describe the maximum covariance or correlations among variables. For evolutionary studies eigenvectors may identify primary trajectories of divergence. PCA is most efficient at reducing dimensionality when the original variables are highly correlated, allowing the majority of variation to be explained by just a few vectors [2,3]. This means that variables that possess higher degrees of both variance and associated covariance will have a greater influence over how PC axes (PCs) are determined. In other words, in a pooled analysis the major axis of divergence in a more variable group may 'swamp' the vectors present in other less variable groups, making it appear as though all groups are diverging the same way (Figure 1). This influence is further enhanced by the requirement of orthogonality (lack of correlation) among PC axes. PC1, for example, accounts for the greatest degree of variation, and will influence the direction of all subsequent PCs because they must be orthogonal to this first axis [2,3]. To alleviate this problem a PCA can be performed on a scale-free correlation matrix rather than a covariance matrix, but outliers could still have a strong influence in defining the direction of the first PC. In practice this means that a PCA applied to several groups simultaneously, as occurs in the 'ordered-axis plot' approach, may not accurately account for variation in groups displaying relatively lower magnitudes of covariance among traits (Figure 1). In turn, PCs created from this method may not accurately describe the major trajectories of evolution specific to less variable groups. Although having equal sample sizes may alleviate this issue somewhat, as is the case in Young et al. [1], variance and covariance are not a function of sample size.
Methods
Here we demonstrate the potentially confounding effects of these problems using our own geometric morphometric data set of cichlid craniofacial shape from each of the three African rift lake assemblages (Figure 2). To begin we performed a common translation, rotation, and scaling of size on our complete set of landmark coordinates [3]. Partial warp scores (shape variables), including uniform scores, were obtained from these aligned coordinates and were then used in a PCA of each lake assemblage separately, and a PCA on all lakes combined. The combined PCA represented a common morphospace for all cichlids similar to what was calculated in Young et al. [1].
Results and Discussion
We predicted that primary vectors of cichlid divergence identified by a PCA within lakes would differ from those identified by a PCA run on the combined dataset from all lakes (Figure 2). We therefore tested for differences in the variance of eigenvalues from different PCA models. Eigenvalues are a scalar value used to represent the amount of variation each eigenvector accounts for in a given PCA [2,3].
If covariation among traits is high, the first few PCs present large eigenvalues relative to later ones, and the variance of eigenvalues is high. If covariation is low, PCs have similar eigenvalues and variance among them is low [4,5]. We used a procedure that bootstrapped the differences in eigenvalue variance 1000 times by sampling with replacement from rows of our raw data [6], and found that Tanganyika displayed significantly higher variance in eigenvalues compared to the common morphospace (s² = 0.011 versus 0.008 respectively, p = 0.004; Figure 1).
Figure 1. In a common morphospace, major axes of morphological diversity may still differ among groups of interest. Ordered-axis plots may not be able to discriminate between patterns of morphological diversity along axes of a multidimensional morphospace because of their reliance upon principal components analysis and the inherent biases of this method. Here the aspects of diversity parallel to PC1 are highlighted with a red arrow for each group of interest. Note that the length of the most variable group (pink) is parallel to PC1 because it has the greatest influence over the determination of PC1 in this common morphospace. Other, less variable groups (blue, green) have less influence over the trajectory of PC1, but still possess variation that lies parallel to PC1. However, the greatest axis of variation within these less variable groups may lie along a vector that differs from PC1. Without knowing a priori whether axes of variation among distinct groups are similar, it is impossible to know the degree to which an ordered-axis plot approach will yield misleading results. doi:10.1371/journal.pone.0007957.g001
This suggested that the common morphospace model did not accurately reflect the patterns of trait covariation found in Lake Tanganyika (Figure 3A). Without this investigation we would only be able to assume that variation in cichlid traits was spread similarly across PCs in each of the three lakes. There were no significant differences in the spread of eigenvalue variance in comparisons of both Malawi and Victoria to the common morphospace (Figure 3B,C). In addition to testing for incongruent eigenvalue variances, we were interested in determining whether the vectors of divergence identified in common morphospace accurately reflected lake-specific directions of cichlid divergence. To determine whether the primary directions of divergence differed between the common morphospace and each lake we extracted the first 5 eigenvectors from each of our PCA models for use in Procrustes matrix superimpositions [7,8]. Eigenvectors are orthogonal and summarize information about data covariation independent of scale, and so were useful for comparisons of vector direction among the PCA models describing cichlid evolution. Procrustes matrix superimpositions are a method of matrix correlation that allows for tests of association using raw untransformed data [7]. The concordance of two eigenvector matrices (i.e. lake-specific vectors versus the common morphospace vectors) can then be determined and tested based on a goodness-of-fit measure. The sum of the squared residuals between eigenvector matrices provides a goodness-of-fit statistic (m12) that ranges between 0 and 1, and identifies the optimal superimposition that can be used as a metric of concordance. Small values of m12 correspond to small residual variation and, hence, a high concordance of matrices.
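The sketch below illustrates, on toy data, the two comparisons just described: a bootstrap of eigenvalue-variance differences and a Procrustes superimposition of eigenvector matrices. The simulated groups and variable names are ours, and scipy's Procrustes disparity is used only as a stand-in for the m12 statistic computed in the original analysis.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)

def eigen(data):
    """Eigenvalues and eigenvectors of the covariance matrix (a covariance PCA)."""
    vals, vecs = np.linalg.eigh(np.cov(data, rowvar=False))
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

# Toy shape data: one highly variable group and one less variable group.
group_a = rng.normal(size=(60, 6)) @ np.diag([3.0, 1.5, 1, 1, 1, 1])
group_b = rng.normal(size=(60, 6))
pooled = np.vstack([group_a, group_b])

# 1) Bootstrap the difference in eigenvalue variance (group model vs. pooled model).
obs_diff = np.var(eigen(group_a)[0]) - np.var(eigen(pooled)[0])
boot = []
for _ in range(1000):
    ga = group_a[rng.integers(0, len(group_a), len(group_a))]
    po = pooled[rng.integers(0, len(pooled), len(pooled))]
    boot.append(np.var(eigen(ga)[0]) - np.var(eigen(po)[0]))
print("observed diff:", obs_diff, " 95% CI:", np.percentile(boot, [2.5, 97.5]))

# 2) Concordance of the first 5 eigenvectors via Procrustes superimposition.
_, _, disparity = procrustes(eigen(group_a)[1][:, :5], eigen(pooled)[1][:, :5])
print("Procrustes disparity (low = concordant):", disparity)
```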
Our tests of association using 1000 bootstrapped replicates revealed that the eigenvectors of the Victoria PCA model were not significantly associated with the common morphospace PCA (p = 0.072, m12 = 0.796). While eigenvectors in Tanganyika and Malawi were significantly associated with the common PCA (p < 0.01, m12 = 0.6371; and p < 0.01, m12 = 0.5476, respectively), high m12 values suggested that the concordance was not strong. Taken together these results suggested that Malawi and Tanganyika had a greater influence over the calculation of the common morphospace than Victoria, and that the common morphospace also had vectors that largely did not align with the vectors identified independently in each lake. This problem was exacerbated when we extended our analysis to include scale with our eigenvectors by investigating potential associations between the first 5 PCs of the common morphospace, and the first 5 lake specific PCs. Both Victoria and Tanganyika had no association between PC axes (p = 1.0, m12 = 0.991; p = 0.081, m12 = 0.874 respectively), while the m12 value increased in Malawi (p < 0.01, m12 = 0.624). Therefore, in this analysis our common morphospace did not correspond well to the major axes of variation identified in the different lake assemblages. To explicitly demonstrate how these biases could affect an ordered-axis plots analysis we tested its performance on data with known alterations in covariance structure. We first removed the effect of PC1 from the original raw landmark data used to create the above common cichlid morphospace using a multiple regression in the program Standard 6 [9]. Thus, variation identified by PC1, which accounted for more than 23.7% of the variance in our original PCA, was now largely absent from the data set. We next used both this PC1 standardized data set and the original data as groups for comparison in an ordered axis plot approach [1]. This would be akin to comparing biological groups that differ in both their levels of variation, and their primary direction of variation, which should bias any analysis in a common trait space. However, the ordered-axis plot approach revealed a regression slope of 0.98 on the new PC1, and an intercept of 0.02 (slope of 1 indicates equal levels of variance between groups, intercept of 0 indicates a common trajectory of divergence), indicating that the major axes of variation, and levels of variation between these data sets were extremely similar. This result is striking because we knowingly removed the major axis of variation from one of the data sets. Any interpretation of biological processes such as historical contingency or selection derived from this type of analysis would therefore be highly questionable. Although these results highlight the potential problems of interpreting data from a common PCA on multiple groups we do not feel they have been especially detrimental to the results of Young et al. [1]. In fact their main conclusion of a common axis of divergence is supported by our own data (analysis not shown). However, in their analysis of lake-specific morphospace, where the angles of PCs were compared between lakes, differences did exist between Tanganyika and Victoria for total body shape on PC1 (i.e., M max ), which makes any interpretation from the ordered-axis plots approach (i.e., combined morphospace) questionable for total body shape in these two assemblages (see Figure 3. in Young et al., [1]). Furthermore, Young et al.
[1] provide no comparison of the common PCA model to the lake specific models, making it difficult to know if their common morphospace accurately reflects the major axes of divergence in each lake. Lastly, the calculations of slope and intercept found in table 1 of Young et al. [1] would benefit greatly from the generation of confidence intervals from a resampling procedure to determine whether their values differ from random. In their present form the values used to generate rankings in table 1 do not indicate any statistical significance that allows us to reliably determine whether evolutionary trajectories, or variation, differ among cichlid assemblages. The ordered-axis plot method is conceptually appealing and methodologically straightforward, but we feel that it will only be of limited use to biologists interested in understanding the repeatability of evolutionary radiations if their data meet the following criteria: 1. Sample sizes among groups are equal [1]. 2. The direction of evolution (covariance) is the same in all groups. It is this second criterion that is especially important for producing accurate results from an ordered-axis plot approach, because, as we have shown, particular groups can bias a common morphospace (Figure 1). There are several methods for testing whether the direction of evolution is statistically similar across groups, including the methods we have used here, common principal components analysis [10,11], comparisons of PCA subspace [1] or other methods of trajectory comparison [12]. Our findings suggest the application of ordered-axis plots is only useful for confirming, not discovering, common trajectories of evolution. It is a method that is probably more useful for testing differences in the mean and variances of groups along specific, constrained axes of morphospace. It is worth pointing out however, that several traditional tests like ANOVA and F-tests for homogeneity of variance already exist, and are well suited to this task, as shown by their use in Young et al. [1]. In addition, while it may be of interest to look at divergence on specific axes in some studies, morphometric methods have existed for several years that allow for tests of differences in means and variance in a wholly unconstrained shape space [3,13,14]. Given the myriad of time-tested methods that are available for examining patterns of divergence, and the limitations of the ordered-axis plot approach, we urge biologists to be thoughtful when considering this technique.
3,530.8
2009-11-23T00:00:00.000
[ "Mathematics" ]
Study of the Effect of the A206/1.0 wt. % γAl2O3 Nanocomposite Content on the Portevin-Le Chatelier Phenomenon in Al/0.5 wt. % Mg Alloys
Abstract: The Portevin-Le Chatelier (PLC) phenomenon, or dynamic strain aging, in Al-0.5 wt. % Mg alloys was investigated at different strain rates. This research also examined the effect of γAl2O3 nanoparticles on the PLC phenomenon. A nanocomposite made of A206/1.0 wt. % γAl2O3 was manufactured for this purpose and then added to an Al-0.5 wt. % Mg melt to obtain ingots of Al-0.5 wt. % Mg-20 wt. % A206/1.0 wt. % γAl2O3 and Al-0.5 wt. % Mg-10 wt. % A206/1.0 wt. % γAl2O3 with a 6 mm diameter. Cold deformation allowed manufacturing 1 mm diameter wires from the 6 mm diameter ingots. A 300 °C solution treatment, followed by rapid cooling in ice water, made it possible to retain the Mg atoms in solid solution. The tensile tests performed on the wires revealed the PLC phenomenon in the plastic zone of the tensile stress vs. strain curves. The phenomenon was quantified using MatLab™ and statistical analysis. The results demonstrate how the alumina nanoparticles can diminish the serration amplitude of the PLC phenomenon.
Introduction
Commercial Al-Mg alloys are among the most commonly used metallic materials due to their low density and superior mechanical properties. Nonetheless, manufacturing parts made of these alloys by plastic deformation may be affected by the occurrence of the Portevin-Le Chatelier (PLC) phenomenon [1]. This problem could hinder the utilization of the said alloys in specific applications, as the PLC serration occurs upon forming a part at room temperature. Such a serration effect presents itself as rough marks on the finished surface and can be characterized through stress-strain tensile curves. The PLC phenomenon can result in structural problems and affect the alloy's final mechanical properties. Therefore, the structure or part can fail under service loads, whereas the combination of these effects with environmental conditions can make the part susceptible to corrosion [2]. Usually, the serrations in the PLC phenomenon are sorted as types A, B, and C. Typically, type A bands are generated at high strain rates, while types B and C are observed at medium and low strain rates, respectively. The PLC phenomenon is also affected by the material surface, the texture, and the sample geometry [1][2][3]. More specifically, the PLC effect generates superficial and structural problems in aluminum alloys. PLC can also cause part failure under service loads, as aforementioned [4]. Among other variables, PLC research has focused on evaluating the effects of cross-sectional area reduction and of the amount of cold work on the ensuing strain hardening. The authors acknowledge the wealth of information available on the PLC phenomenon, but opted, for the sake of brevity, to focus on those variables pertinent to the goal of the present investigation. In the said literature, it was found that the flow instability rises proportionally to the cross-sectional area of the samples [5]. In cylindrical specimens, Zhang found that the bandwidth has a linear relationship with the diameter of the samples, with the proportionality constant being between 0.50 and 0.67 [6].
The literature reports that the addition of homogeneously distributed particles to the samples modifies the conditions required for the appearance of the Portevin-Le Chatelier phenomenon [7], since the movement of the mobile dislocations is hindered by the particles [8]. Further, the influence of the added particles depends on their quantity and their distribution within the test specimens [7]. Zhao has reported that grain refinement in Al-Mg alloys delays the appearance of the PLC effect but amplifies the serrated flow [9]. Similar results (i.e., an increase in critical strain) were obtained with the surface nano-crystallization of 5182 Al alloys using a mechanical surface wear treatment [10]. Additionally, Lebedkina obtained results opposite to those reported by Zhao, noting that extreme grain refinement can cause suppression of the PLC phenomenon [11]. Further, the temperature of the sample also affects the serration type, critical strain, yield strength, and ultimate strength [12]. Xu reports that the PLC effect was only present within the 223-323 K temperature range, for tensile tests performed at temperatures from 173 to 333 K. This author observed that the critical strain decreased with rising temperature between 223 and 310 K, whereas for tensile tests performed between 310 and 323 K, higher temperatures led to higher critical strains. This behavior was reported in 5456 Al alloys [13]. The experimental works reported, as well as the ensuing numerical modeling, focused on evaluating the different variables that could affect the PLC phenomenon or eliminate it. In that respect, we argue that there is still room for more exploration of effective means to control the PLC serrations. Therefore, to advance the understanding of the phenomenon, the present research intends to reduce or remove the amplitude of the said serrations upon plastic deformation of Al-Mg-based alloys via the insertion of γAl2O3 nanoparticles and thermal treatments of the inoculated alloys. To facilitate the mechanical testing of the material, the test pieces were cold-drawn wires.
Materials and Methods
The first stage of the wire production was manufacturing the master nanocomposites, made of an A206 alloy matrix with 1 wt. % γAl2O3 nanoparticles [14,15]. The master A206/1.0 wt. % γAl2O3 nanocomposites were used to inoculate an Al-0.5 wt. % Mg melt to obtain a composite treated with γAl2O3 nanoparticles. The last stage was manufacturing the wires by cold rolling and drawing, followed by their corresponding characterization. To ensure the nanoparticle addition to the alloy, an A206/1.0 wt. % γAl2O3 nanocomposite was fabricated by melting the A206 alloy at 630 °C in an Ar atmosphere. An axial impeller at 500 rpm generated a vortex into which the γAl2O3 nanoparticles were added. Then, we raised the stirring speed to 1200 rpm for 40 s. To enhance the dispersion of the nanoparticles, the tip of a niobium (C-103) ultrasonic probe was inserted into the melt, generating a 20 kHz ultrasonic vibration (Sonicator 3000, Misonix Inc., Farmingdale, NY, USA). After that, the melt was cast into the molds. The complete ultrasonic processing procedure is described in prior research [14,15].
Manufacturing Procedure of the Al-Mg-A206/γAl2O3 Wires
The A206/1.0 wt. % γAl2O3 nanocomposites were added to an Al-0.5 wt. % Mg melt to prepare Al-0.5 wt. % Mg-20 wt. % A206/1.0 wt. % γAl2O3 and Al-0.5 wt. % Mg-10 wt. % A206/1.0 wt. % γAl2O3 samples by dilution.
Preparing an Al-0.5 wt. % Mg-10 wt. % A206 and an Al-0.5 wt. % Mg-20 wt. % A206 alloy made it possible to differentiate between the effect of the γAl2O3 nanoparticles and that of the amount of A206 alloy in the samples. To set up the material, pure aluminum (99.5%), an Al-25 wt. % Mg master alloy, and the A206/1.0 wt. % γAl2O3 nanocomposite were melted at 760 °C while the melt was mechanically stirred. The melt was poured into a cylindrical mold with a 6 mm diameter. The resulting ingots underwent a 400 °C full annealing for 5 h to soften the material before cold work. First, the ingots were cold deformed to render 3 mm diameter wires. To prepare the final specimens, i.e., 1 mm diameter wires, another 400 °C full annealing for 5 h was required. Finally, to guarantee a circular cross-sectional area with a diameter of 1 mm, the wires were cold-drawn. A similar procedure was used in our previous research [14]. Finally, the wires underwent a 300 °C solution treatment for 30 min, followed by ice-water quenching. Afterward, each wire sample was sectioned into three parts with a 250 mm working length to perform tensile tests at strain rates of 0.500 mm/min, 0.250 mm/min, and 0.125 mm/min in an Instron® (model 5944, Norwood, MA, USA) low-force universal testing machine, following the ASTM B557 standard [16]. Wire specimens were cut and polished to observe their microstructure in an Epiphot 200 optical microscope (Nikon, Melville, NY, USA) at each manufacturing stage.
Results
The heat treatments allowed cold forming the as-cast ingots in stages to obtain the wires. As aforementioned, these wires underwent tensile tests, and the results were evaluated using MatLab™ (MathWorks, Natick, MA, USA). The following sections show a detailed analysis of the said results.
Optical Micrographs
The microphotographs in Figure 1 show the microstructure of the wires during the manufacturing process. Naturally, through the process, the grains are elongated due to the cold rolling and the cross-sectional area reduction.
Ultimate Tensile Strength and Critical Strain
The tensile tests were performed on specimens obtained from each of the three sections at different strain rates. The strain rates used were 0.500 mm/min, 0.250 mm/min, and 0.125 mm/min, in our Instron® (model 5944) low-force universal testing machine. The critical strain was measured on the stress-strain curve and is defined as the strain value at which the PLC phenomenon starts occurring. Figures 2-4 present the measured critical strain and the ultimate tensile strength (UTS) at different strain rates. Higher critical strains at higher strain rates are referred to as 'normal' behavior, while the opposite is referred to as inverse behavior [17]. No significant effect of the nanoparticles on the recorded UTS and critical strain values is apparent. On the other hand, the addition of the A206 alloy did affect the UTS and critical strain, which is a possible consequence of the presence of copper, the primary alloying element in the A206 alloy [18]. Similarly, higher UTS values occurred as the copper content increased.
Analysis of Portevin-Le Chatelier Effect with Fast Fourier Transform Using MatLab™
Figure 5 displays a tensile curve with the PLC phenomenon in one of the studied alloys after removing the initial part, i.e., the portion of the curve up to the proof stress for 0.2% plastic strain (YS, offset = 0.2%), which was obtained [19] using MatLab™. For convenience in the subsequent analysis, we used the stress vs. time data instead of the stress vs. strain curve.
The frequency content of the tensile curve was analyzed using a fast Fourier transform (FFT) so that the frequency of each component (the PLC signal and the environmental noise or vibrations introduced by the equipment) could be computed. To this purpose, a group of pure aluminum samples was manufactured and underwent tensile tests under the same conditions as the wires bearing the PLC effect. This FFT analysis of the data pertaining to the tensile curve (stress vs. strain) of the pure aluminum wires revealed peaks at frequencies below 0.2 Hz, 0.1 Hz, and 0.07 Hz in the tests carried out at 0.500 mm/min (3.33·10⁻⁵ s⁻¹, i.e., the strain rate divided by the sample length), 0.250 mm/min (1.66·10⁻⁵ s⁻¹), and 0.125 mm/min (8.33·10⁻⁶ s⁻¹), respectively. As mentioned, these frequencies were associated with the environmental noise or vibrations of the equipment (artifacts). Figure 6 presents the FFT of the plastic region of the tensile stress-time curve of the aluminum wires tested at 0.500 mm/min.
FFT evaluation of the tensile curves pertaining to the Al-Mg-A206/1 wt. % γAl2O3 composite samples revealed peaks at frequencies higher than 0.2, 0.1, and 0.07 Hz in the tests carried out at 0.500 mm/min (3.33·10⁻⁵ s⁻¹), 0.250 mm/min (1.66·10⁻⁵ s⁻¹), and 0.125 mm/min (8.33·10⁻⁶ s⁻¹), respectively (Figure 7). We attributed these peaks to the Portevin-Le Chatelier phenomenon. These high frequencies due to the PLC phenomenon were then set equal to zero to obtain the curve without the PLC effect (filtered curves). Figure 8 shows the original and filtered curves. Likewise, the curve resulting from the subtraction of the original and filtered curves is shown in Figure 9. Subsequently, we computed the absolute value of the resulting signal, which is depicted in Figure 10. We integrated this signal (i.e., Figure 10) to determine the area under the curve; the result was then divided by the strain to normalize the stress vector and obtain the amplitude of the PLC phenomenon. The results are shown in Section 3.5. A similar analysis was used in previous investigations [20].
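The filtering workflow just described can be sketched in a few lines. The version below uses Python/NumPy rather than the authors' MatLab™ code, and the cutoff frequency, function name, and synthetic signal are illustrative assumptions only.

```python
import numpy as np

def plc_amplitude(time_s, stress_mpa, strain, cutoff_hz):
    """Isolate the serrations above `cutoff_hz` and return their strain-normalized amplitude."""
    dt = time_s[1] - time_s[0]
    freqs = np.fft.rfftfreq(len(stress_mpa), d=dt)
    spectrum = np.fft.rfft(stress_mpa)
    spectrum[freqs > cutoff_hz] = 0.0              # zero the PLC band -> filtered curve (Fig. 8)
    filtered = np.fft.irfft(spectrum, n=len(stress_mpa))
    residual = np.abs(stress_mpa - filtered)       # rectified serration signal (Figs. 9-10)
    area = np.trapz(residual, time_s)              # area under the rectified signal
    return area / (strain[-1] - strain[0])         # normalize by the strain covered

# Synthetic example: slowly rising stress plus a 1 Hz serration, sampled at 10 Hz.
t = np.arange(0, 600, 0.1)
strain = 3.3e-5 * t
stress = 120 + 300 * strain + 3 * np.sin(2 * np.pi * 1.0 * t)
print(f"normalized PLC amplitude: {plc_amplitude(t, stress, strain, cutoff_hz=0.2):.1f}")
```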
Amplitude of the PLC Signal Using Fast Fourier Transform
Using the code shown in Section 3.3, one can determine the PLC signal amplitude at different strain rates. The PLC phenomenon was present in all the studied alloys (i.e., after solution treatment and quenching). The 20% A206/1 wt. % γAl2O3 specimens showed a smaller serration effect at all strain rates. Figures 11-13 show the signal amplitude as a function of the nanocomposite composition. A significant strain rate effect is apparent in the signal amplitude: in effect, the amplitude was smaller at a 0.500 mm/min strain rate. On the other hand, the control sample that has only the A206 alloy added (i.e., alloying copper) displays a different behavior when compared with the wires containing nanoparticles; in effect, as one raises the amount of A206 alloy, a heightened PLC signal amplitude comes forth.
Energy Released during the PLC Phenomenon
To determine the energy released as the PLC phenomenon took place, we registered the maximum peaks of the PLC signal using MatLab™. Then, a linear regression line was fitted to the maximum peaks, according to Figure 14. Subsequently, the difference between the area under the blue line and the area under the red line (stress-strain curve) was computed.
The resulting (shaded) area corresponds to the energy released as a result of the PLC phenomenon. Figures 15-17 present the energy released during the PLC phenomenon in the wires as a function of the A206 and A206/1 wt. % γAl2O3 content and the strain rate. The 20% A206/1 wt. % γAl2O3 specimens showed a smaller energy released during the PLC phenomenon at strain rates of 0.250 mm/min and 0.500 mm/min. Also, a significant decrease of the energy released during the PLC phenomenon is apparent at a high strain rate (i.e., 0.500 mm/min). Moreover, as the added amount of the A206 alloy rises in the control sample, a heightened energy released during the PLC phenomenon comes forth at strain rates of 0.250 mm/min and 0.500 mm/min. Figure 17. Energy released during the PLC phenomenon at a 0.500 mm/min strain rate.
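The following Python sketch illustrates one way to reproduce the energy estimate just described (peak detection, a least-squares line through the peak stresses, and the area between that line and the measured curve). The data array, peak-detection settings, and function name are ours and only stand in for the MatLab™ routine used in the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def plc_energy(strain, stress):
    """Area between a line fitted to the serration peaks and the measured stress-strain curve."""
    peaks, _ = find_peaks(stress)                    # indices of local stress maxima
    slope, intercept = np.polyfit(strain[peaks], stress[peaks], 1)
    envelope = slope * strain + intercept            # the 'blue' peak line of Figure 14
    # energy per unit volume released by the serrations (MJ/m^3 if stress is in MPa)
    return np.trapz(envelope - stress, strain)

# Synthetic serrated curve, for illustration only.
strain = np.linspace(0.02, 0.20, 2000)
stress = 150 + 400 * strain - 5 * np.sin(2 * np.pi * 60 * strain)
print(f"released energy ~ {plc_energy(strain, stress):.3f} MJ/m^3")
```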
Statistical Analysis of Al-Mg-A206/γAl2O3 and Al-Mg-A206 Samples
To validate the effect of the γAl2O3 nanoparticle content on the PLC phenomenon, we completed a statistical analysis using the amplitude of each peak of the serrated PLC signal (stress-strain curve). For each peak, we determined the maximum and minimum values using MatLab™; after subtracting the minimum peaks from the maximum peaks, we obtained the amplitude of the PLC signal. This statistical data analysis was performed using Minitab™ (State College, PA, USA). The amplitude results show a left-skewed distribution (Figure 18). To obtain a normal distribution, the output was transformed (Figure 19) using the power function in Equation (1), where TA represents the transformed amplitude and A is the computed amplitude. Minitab™ computed the transformation parameter λ that normalized the response by iteration; it was estimated as 0.27, 0.30, and 0.20 for the wires tested at 0.125 mm/min, 0.250 mm/min, and 0.500 mm/min, respectively.
TA = A^λ (1)
Figure 20 depicts the transformed amplitude for the Al-0.5 wt. % Mg-A206/γAl2O3 and Al-0.5 wt. % Mg-A206 wires at different strain rates. In this analysis, the data show a behavior similar to that of the prior analyses. Figure 20 corresponds to the signal's transformed amplitude, where a significant effect of the strain rate on the PLC amplitude is apparent in all samples. Finally, the samples containing 20% nanocomposite produced a smaller transformed amplitude at strain rates of 0.250 mm/min and 0.500 mm/min.
Discussion
Figures 2-4 show the critical strain and the UTS for all the samples at different strain rates. The Al-0.5 wt. % Mg-20% A206 and Al-0.5 wt. % Mg-10% A206/1 wt. % γAl2O3 samples had higher critical strains at a 0.250 mm/min strain rate with respect to the value registered at 0.125 mm/min, and the critical strain decreases at faster strain rates (0.500 mm/min). The Al-0.5 wt. % Mg-10% A206 sample shows a critical strain with an inverse behavior at low strain rates, followed by normal behavior at high strain rates (0.500 mm/min). The Al-0.5 wt. % Mg-20% A206/1 wt. % γAl2O3 sample shows a critical strain with normal behavior. This inverse behavior is explained in detail by Jarfors et al. [21]. Balik et al. suggest that increasing the initial density of mobile dislocations might turn an inverse behavior into a normal behavior or vice versa; this theory can explain the behavior of our wires, because all our samples were cold-rolled to obtain wires with a 1 mm diameter. Similar results were published by Chihab [17]. Figures 11-13 show the Portevin-Le Chatelier signal amplitude of the wires as a function of the A206 and A206/1 wt. % γAl2O3 content and the strain rate. The 20% A206/1 wt. % γAl2O3 specimens displayed less serration at all strain rates. In addition, larger amounts of the A206 alloy (i.e., with alloying copper) amplify the amplitude of the PLC signal. Therefore, we can affirm that the addition of the alumina nanoparticles decreased the PLC effect, whereas copper (unlike the nanoparticles) augments the PLC amplitude. Furthermore, there is an effect of the strain rate on the signal amplitude: the higher the strain rate, the smaller the amplitude.
In effect, the 0.500 mm/min strain rate results rendered a smaller amplitude. Figures 15-17 present the energy released by the Portevin-Le Chatelier phenomenon occurring in the Al-0.5 wt. % Mg wires as a function of the A206 and A206/1 wt. % γAl2O3 additions and the tensile strain rate. The released energy behaves like the signal amplitude analyzed above. The A206 alloy addition to the specimens expanded the PLC signal amplitude. One should recall that the primary alloying element of A206 alloys is copper; thus, the presence of copper in the samples can explain the increase in the signal amplitude. In effect, the PLC phenomenon, in this case resulting from the combined effect of Cu and Mg solute atoms, made it easier to pin dislocations before the dynamic precipitation (abrupt load drop). Accordingly, the literature reports that the PLC phenomenon is also present in aluminum-copper alloys [2,22]. All things considered, this research presents an alternative to assess the Portevin-Le Chatelier phenomenon quantitatively using computational and statistical tools. Notwithstanding, we must underscore that, with our proposed FFT method, the Fourier spectra could differ for different types of serrations (i.e., A, B, and C). Those differences call for a detailed investigation that would allow discriminating the dominant type of serration in the stress-strain curve. The amplitude of the PLC phenomenon can be somewhat controlled in the Al-0.5 wt. % Mg samples with the addition of the A206/1 wt. % γAl2O3 nanocomposite. The PLC phenomenon affects aluminum alloys used in soldering techniques, so such control of the PLC phenomenon can have applications in the aerospace and automotive industries, among others. Therefore, this work presents an alternative to control the phenomenon in Al-Mg alloys joined by deformation and heating of the elements to be joined.
Conclusions
The present manuscript adds to a growing corpus of research on the Portevin-Le Chatelier phenomenon in Al-Mg alloys. By treating the said alloys with γAl2O3 nanoparticles, the ensuing data analysis leads to the following conclusions:
• The PLC phenomenon is present in all the solution-treated Al-Mg wires.
• MatLab™ codes developed for the present research permitted the quantification of the PLC signal using the fast Fourier transform, which helped shed light on the nanoparticles' effect on the stress serration upon tensile testing.
• Because of the non-normality of the residuals upon statistical evaluation of the results, a power transformation of the amplitude (exponent λ) was required to further the analysis.
• The wires treated with the 20 wt. % A206/1 wt. % γAl2O3 nanocomposite presented the highest reduction in the serration amplitude upon the tensile stress-strain curves.
• The critical strain is affected by the amount of A206 added as well as by the presence of the Al2O3 nanoparticles.
The feasibility of controlling the serration in the PLC phenomenon of Al-Mg alloys using alumina nanoparticles opens the door to future work on cold-work manufacturing of parts made of the said material.
7,925.4
2021-06-21T00:00:00.000
[ "Materials Science" ]
Effects of the weighting matrix on dynamic manipulability of robots
Dynamic manipulability of robots is a well-known tool to analyze, measure and predict a robot's performance in executing different tasks. This tool provides a graphical representation and a set of metrics as outcomes of a mapping from joint torques to the acceleration space of any point of interest of a robot such as the end-effector or the center of mass. In this paper, we show that the weighting matrix, which is included in the aforementioned mapping, plays a crucial role in the results of the dynamic manipulability analysis. Therefore, finding proper values for this matrix is the key to achieving reliable results. This paper studies the importance of the weighting matrix for dynamic manipulability of robots, which is overlooked in the literature, and suggests two physically meaningful choices for that matrix. We also explain three different metrics, which can be extracted from the graphical representations (i.e. ellipsoids) of the dynamic manipulability analysis. The application of these metrics in measuring a robot's physical ability to accelerate its end-effector in various desired directions is discussed via two illustrative examples.
Introduction
To build a high-performance robot, design is probably the most important process, which hugely influences the robot's performance. Designing a robot (i.e. determining the values of its design parameters such as mass and inertia distributions, dimensions, etc.) presets the limits of its abilities or, in other words, its capabilities to perform certain tasks. If a robot is not well designed, no matter how advanced its controller is, it could end up with poor performance (Leavitt et al. 2004). On the other hand, if the design is "perfect", a larger range of feasible options is available in the control space, which makes it easier for the controller to achieve a desired task with higher performance. Also, in the case of redundant robots, a certain task is achievable via various configurations in which the physical abilities of the robot are different (Ajoudani et al. 2017). Therefore, in order to improve the robot's performance in different tasks and exploit its maximum abilities, it is desirable to be able to compare different configurations of a robot and possibly to find the optimal one (e.g. in terms of torque/energy efficiency). This is completely intuitive, since humans always try to exploit the redundancy in their limbs, and also the environmental contacts, to improve their performance while minimizing their efforts in executing various tasks. For example, the usual configuration of the human arm while using a screwdriver to tighten a screw differs from the configuration used while holding a mug. As already mentioned, finding (i) proper values for the design parameters, and (ii) the best configuration for a robot in performing a certain task are the two important elements in making high-performance robots and/or improving the performance of existing robots. Thus, it is beneficial to develop a unified and general metric which enables us to measure the physical abilities of various robots in different configurations and different contact conditions. For this application, there exists a very famous metric in the robotics community which is called manipulability. The concept of manipulability for robots was first introduced by Yoshikawa (1985a) in the 1980s.
He defined the manipulability ellipsoid as the result of mapping the Euclidean norm of the joint velocities (i.e. q̇^T q̇) to the end-effector velocity space. By using the task space Jacobian (i.e. J), he also proposed a manipulability metric for robots as w = √det(J J^T), which represents the volume of the corresponding manipulability ellipsoid. The main issue with this measure is that multiplying J, which is a velocity mapping function, and J^T, which is a force mapping function, is physically meaningless. In other words, in a general case, a robot may have different joint types (e.g. revolute and prismatic) and therefore different velocity and force units in the joints, which makes the Jacobian have columns with different units. This issue was first identified by Doty et al. (1995). They proposed using a weighting matrix in order to unify the units. However, even after that, many researchers used (Chiu 1987; Gravagne and Walker 2001; Guilamo et al. 2006; Jacquier-Bret et al. 2012; Lee 1989, 1997; Leven and Hutchinson 2003; Melchiorri 1993; Vahrenkamp et al. 2012; Valsamos and Aspragathos 2009) or suggested (Chiacchio et al. 1991; Koeppe and Yoshikawa 1997) the same problematic metric for the manipulability of robots. Yoshikawa (1985b) also introduced the dynamic manipulability metric and the dynamic manipulability ellipsoid as extensions to his previous works on robot manipulability. He defined the dynamic manipulability metric as w_d = √det(J (M^T M)^-1 J^T), where M is the joint-space inertia matrix, and the dynamic manipulability ellipsoid as the result of mapping the unit norm of joint torques to the operational acceleration space. Here, (M^T M)^-1 can be regarded as a weighting matrix, which obviously solves the main issue with the first manipulability metric. However, the physical interpretation of this metric still remains unclear. In other words, it is not quite obvious what the relationship is between w_d and the feasible or achievable operational space accelerations due to the actual torque limits in the joints. Although Yoshikawa (1985b) and, later on, some other researchers (Chiacchio 2000; Kurazume and Hasegawa 2006; Rosenstein and Grupen 2002; Tanaka et al. 2006; Yamamoto and Yun 1999) tried to include the effects of maximum joint torques in the dynamic manipulability metric by normalizing the joint torques, their proposed normalizations are not done properly and therefore the results do not represent the physical abilities of a robot in producing operational space accelerations. The issue with their suggested normalization will be discussed in more detail in Sect. 3. Over the last two or three decades, many studies have been done on robot manipulability. Also, many researchers have used manipulability metrics/ellipsoids in order to design more efficient robots or to find better and more efficient configurations for robots to perform certain tasks (Bagheri et al. 2015; Bowling and Khatib 2005; Guilamo et al. 2006; Kashiri and Tsagarakis 2015; Tanaka et al. 2006; Tonneau et al. 2014, 2016; Zhang et al. 2013). However, almost all of these studies have overlooked the effects of not using a weighting matrix (or of using an inappropriate one). In this paper, we focus on the weighting matrix for dynamic manipulability calculations and study its importance and influence on the dynamic manipulability analysis. We also show that, by using this analysis, we can decompose the effects of the gravity and the robot's velocity from the effects of the robot's configuration and inertial parameters on the acceleration of a point of interest (i.e. the operational space acceleration).
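To make the two metrics just discussed concrete, here is a minimal Python sketch (not from the paper) that evaluates Yoshikawa's kinematic manipulability w = √det(J J^T) and the dynamic variant with (M^T M)^-1 as weighting matrix for a two-link planar arm; the link lengths, configuration and inertia matrix are illustrative assumptions.

import numpy as np

def planar_2r_jacobian(q, l1=1.0, l2=1.0):
    """End-effector Jacobian of a 2R planar arm (link lengths are illustrative)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(J):
    """Yoshikawa's (kinematic) manipulability measure w = sqrt(det(J J^T))."""
    return np.sqrt(np.linalg.det(J @ J.T))

def dynamic_manipulability(J, M):
    """Dynamic manipulability with (M^T M)^{-1} acting as the weighting matrix."""
    A = J @ np.linalg.inv(M.T @ M) @ J.T   # manipulability matrix
    return np.sqrt(np.linalg.det(A))

q = np.array([0.4, 1.1])                   # an arbitrary configuration
M = np.array([[2.5, 0.6], [0.6, 0.8]])     # an illustrative joint-space inertia matrix
J = planar_2r_jacobian(q)
print(manipulability(J), dynamic_manipulability(J, M))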
Therefore, the outcome of the dynamic manipulability analysis will be a configuration-based (i.e. velocity-independent) metric/ellipsoid which depends only on the physical properties of a robot and its configuration. Hence, we claim that, by selecting proper values for the weighting matrix, dynamic manipulability can provide a powerful tool to analyse and measure a robot's physical abilities to perform a task. This paper is an extended and generalized version of our previous study on dynamic manipulability of the center of mass (CoM) (Azad et al. 2017). The main contributions over our previous work are (i) generalizing the idea of the weighting matrix for dynamic manipulability to any point of interest (not only the CoM), (ii) investigating the relationship between the dynamic manipulability and the Gauss' principle of least constraints by suggesting a proper weighting matrix, (iii) describing the relationship between the dynamic manipulability metrics and operational space control, and (iv) discussing the applications of the dynamic manipulability metrics based on the suggested choices of weighting matrices. We first derive the dynamic manipulability equations for the operational space of a robot. To this aim, we use general motion equations in which the robot is assumed to have a floating base with multiple contacts with the environment. Thus, the effects of under-actuation due to the floating base and of kinematic constraints due to the contacts will be included in the calculations. As a result of our dynamic manipulability analysis, we obtain an ellipsoid which graphically shows the operational space accelerations due to the weighted unit norm of torques at the actuated joints. This is applicable to all types of robot manipulators as well as to legged (floating base) robots with different contact conditions. The setting of the weights is up to the user and is supposed to be done based on the application. Two physically meaningful choices for the weights are introduced in this paper and their physical interpretations are discussed. We also discuss different manipulability metrics which can be computed using the equation of the manipulability ellipsoid. We investigate the application of those metrics in comparing various robot configurations and finding an optimal one in terms of the physical abilities of the robot to achieve a desired task. Dynamic manipulability Considering a floating base robot with multiple contacts with the environment, the inverse dynamics equation will be M q̈ + h = B τ + J_c^T f_c (1), where M is the n × n joint-space inertia matrix, h is the n-dimensional vector of centrifugal, Coriolis and gravity forces, B is the n × k selection matrix of the actuated joints, τ is the k-dimensional vector of joint torques, J_c is the l × n Jacobian matrix of the constraints and f_c is the l-dimensional vector of constraint forces (and/or moments). Here, we assume that the kinematic constraints are bilateral. This is a reasonable assumption if there is no slipping or loss of contact. In this case, we can write the joint accelerations as q̈ = J_q τ + q̈_vg, where N_c^M, the null-space projection matrix of J_c, is used to eliminate the constraint forces, J_q is the resulting mapping from the actuated joint torques to the joint accelerations, and q̈_vg collects the velocity and gravity dependent terms. Similarly, we can write the operational space acceleration in the form p̈ = J_p τ + p̈_vg, where J_p is the mapping from joint torques to the operational space acceleration, p̈_vg is the velocity and gravity dependent part of p̈, and J is the Jacobian of the point p in the operational space of the robot, which implies ṗ = J q̇. Available torques at the joints are always limited due to saturation, which directly affects the accessible joint space and operational space accelerations.
To investigate these effects, first we define limits on the joint torques as τ^T W_τ τ ≤ 1 (14), which is a unit weighted norm of the actuated joint torques with W_τ a k × k weighting matrix. To find out the effects on p̈, we invert (11) as τ = J_p^# (p̈ − p̈_vg) + N_p τ_0 (15), where τ_0 is a vector of arbitrary joint torques, N_p = I − J_p^# J_p is the projection matrix onto the null-space of J_p, and J_p^# = W_τ^-1 J_p^T (J_p W_τ^-1 J_p^T)^-1 (16) is a generalized inverse of J_p. By replacing τ from (15) into (14), we will have (p̈ − p̈_vg)^T (J_p W_τ^-1 J_p^T)^-1 (p̈ − p̈_vg) ≤ 1 (17). The details of the derivations can be found in "Appendix I". The inequality in (17) defines an ellipsoid in the operational acceleration space which is called the dynamic manipulability ellipsoid. The center of this ellipsoid is at p̈_vg and its size and shape are determined by the eigenvectors and eigenvalues of the matrix J_p W_τ^-1 J_p^T. As can be seen, this matrix is a function of the weighting matrix W_τ and also of J_p, which depends on the robot's configuration and inertial parameters. Due to the strong influence of the weighting matrix on the dynamic manipulability ellipsoid, it is quite important to define W_τ properly in order to obtain a correct and physically meaningful mapping from the bounded joint torques to the operational space accelerations. This can be helpful in order to study the effects of limited joint torques on the operational space accelerations. Note that, if the weighting matrix is not defined properly, the outcome ellipsoid will be confusing and ambiguous rather than beneficial and useful. Weighting matrix In this section, we study the effects of the weighting matrix on the dynamic manipulability ellipsoid and propose two reasonable and physically meaningful choices for this matrix. The first one is called bounded joint torques and incorporates the saturation limits at the joints; the second one is called bounded joint accelerations and assumes limits on the joint accelerations. The latter is also related to the Gauss' principle of least constraints, which will be discussed further in this section. First choice: bounded joint torques The dynamic manipulability ellipsoid is defined to map the available joint torques to the operational acceleration space. In order to include all available joint torques in the initial bounding inequality in (14), we introduce a weighting matrix as W_τ = (1/k) diag(τ_1max^-2, . . . , τ_kmax^-2) (18), where τ_imax is the saturation limit at the i-th joint and the function diag(v) builds a diagonal matrix out of the vector v. Note that, if we replace W_τ from (18) into (14), we will have (1/k) Σ_i (τ_i/τ_imax)^2 ≤ 1 (19), which holds whenever |τ_i| ≤ τ_imax for each i, and therefore it accommodates all possible combinations of joint torques. This is different from the torque normalization which is mentioned in the literature (Ajoudani et al. 2017; Chiacchio 2000; Gu et al. 2015; Rosenstein and Grupen 2002). To the best of the authors' knowledge, none of the previous studies considered the number of actuators (i.e. k) in the weighting matrix, which makes their ellipsoids incorrect estimations of the feasible area. Figure 1 shows dynamic manipulability ellipses for a planar robot in six different configurations. The robot consists of five links which are connected via revolute joints. The first and last links are assumed to be passively in contact with the ground (to mimic a planar quadruped robot). The lengths and masses of the links are fixed for this example; the weighting matrix in (18) is used for the calculations, where the number of actuators is 4 and the maximum torque at the actuators connected to the middle link is assumed to be twice the maximum torque at the other two actuators.
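A minimal numerical sketch of the bounded-joint-torques construction described above (assuming the 1/k-scaled diagonal weighting of (18); the torque-to-acceleration mapping J_p and the torque limits below are made-up numbers, not the planar five-link example of Fig. 1):

import numpy as np

def bounded_torque_weighting(tau_max):
    """First choice of weighting matrix: W_tau = (1/k) diag(1/tau_max_i^2).
    The 1/k factor is included so the ellipsoid tau^T W_tau tau <= 1 contains
    every torque combination with |tau_i| <= tau_max_i."""
    k = len(tau_max)
    return np.diag(1.0 / (k * np.asarray(tau_max) ** 2))

def ellipsoid_axes(J_p, W_tau):
    """Semi-axes (lengths and directions) of the dynamic manipulability ellipsoid,
    obtained from the eigendecomposition of J_p W_tau^{-1} J_p^T."""
    A = J_p @ np.linalg.inv(W_tau) @ J_p.T
    eigval, eigvec = np.linalg.eigh(A)
    return np.sqrt(eigval), eigvec   # semi-axis lengths and their directions

# Illustrative numbers only: a 2x4 torque-to-acceleration mapping and torque limits.
J_p = np.array([[0.8, -0.3, 0.5, 0.1],
                [0.2,  0.9, -0.4, 0.6]])
tau_max = [2.0, 2.0, 4.0, 4.0]
lengths, directions = ellipsoid_axes(J_p, bounded_torque_weighting(tau_max))
print(lengths)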
The velocity and gravity are set to zero since their only effect would be to change the center point of the ellipses. The shaded polygons in Fig. 1 represent the exact areas in the acceleration space of the point of interest (i.e. the center point of the middle link) which are accessible due to the limited torques at the joints in the six different configurations. These areas are computed using (11), numerically. As can be seen in the plots, the polygons are always completely enclosed in the ellipses, which implies that the dynamic manipulability ellipses, with the suggested weighting matrix in (18), are reasonable approximations of the exact feasible areas. These ellipses also show graphically which accelerations are feasible in the operational space, given the limits on the joint torques, and along which directions it is easier to accelerate the point of interest. Note that the choice of this point depends on the desired task. For example, for a balancing task, the CoM can be considered as the point of interest (Azad et al. 2017), whereas for a manipulation task, it makes more sense to choose the end-effector as the point of interest. It is worth mentioning that the main purpose of the plots in Fig. 1 is to show the accuracy of the approximation of the polygons by the ellipses. Nevertheless, one can also compare the robot configurations in terms of the feasible operational space accelerations with the same amount of available torque at the joints. As can be seen in this figure, the ellipses (and also the polygons) in the left column are larger than their corresponding ones in the right column, which implies that by changing the angle from 90° to 120°, the range of available accelerations at the point of interest is extended. Second choice: bounded joint accelerations To propose our second suggestion for the weighting matrix, first we assume limits on the joint accelerations as a unit weighted norm centered at q̈_vg. This limit can be written as (q̈ − q̈_vg)^T W_q (q̈ − q̈_vg) ≤ 1 (20), where W_q is a positive definite weighting matrix in the joint acceleration space. This matrix can be used to unify the units and/or prioritize the importance of the joint accelerations. By substituting (q̈ − q̈_vg) from (7) into (20), we will have τ^T J_q^T W_q J_q τ ≤ 1 (21), which implies that choosing the weighting matrix as W_τ = J_q^T W_q J_q (22) converts the inequality in (21) to the one in (14). Thus, the ellipsoid in (17) will show the boundaries on the operational space accelerations due to the limited joint accelerations. This is true only if W_τ in (22) is positive definite or, in other words, if J_q is full column rank. Observe that, in general, J_q could be rank deficient due to the kinematic constraints. This happens when contact forces cancel out the effects of joint torques and result in zero motion at the joints (i.e. q̈ = 0 for some τ ≠ 0). Mathematically, it means that a linear combination of the columns of J_q becomes zero, which implies that J_q is rank deficient. This violates the positive definiteness assumption on W_τ and invalidates the results in (17). In this case, we define a new positive definite weighting matrix as W_rq = J_qc^T W_q J_qc (23), where J_qc is a full column rank matrix obtained from the singular value decomposition of J_q. This is explained in "Appendix II". As a result of this decomposition we will have J_q = J_qc J_qr (24), where J_qr is a full row rank matrix. Plugging (24) back into (21) yields (τ_q^r)^T W_rq τ_q^r ≤ 1 (25), where τ_q^r = J_qr τ is regarded as a reduced vector of the joint torques.
The relationship between this vector and the operational space accelerations can be acquired from (11) and (12). Therefore, the outcome ellipsoid in (17) will be (p̈ − p̈_vg)^T (J J_qc W_rq^-1 J_qc^T J^T)^-1 (p̈ − p̈_vg) ≤ 1 (27). This ellipsoid helps in studying the effects of bounded joint accelerations on the operational space accelerations by assuming virtual limits on the joint accelerations. Relation to the Gauss' principle of least constraints The Gauss' principle of least constraints says that a constrained system always minimizes the inertia-weighted norm of the difference between its acceleration and what the acceleration would have been if there were no constraints (Fan et al. 2005; Lötstedt 1982). In general, a robot's motion tasks can be regarded as virtual kinematic constraints which are enforced by the control torques. Thus, to calculate the unconstrained robot's acceleration (i.e. q̈_u), both f_c and τ in (1) should be set to zero. So, we will have q̈_u = −M^-1 h (28). Therefore, the difference between q̈_u and the robot's acceleration in (7) follows.

Fig. 2 The intersection areas between the colored ellipses and the black ones are proper approximations of the corresponding colored areas. Colored ellipses are dynamic manipulability ellipses with the bounded joint accelerations and W_q = M. Blue, yellow and red ellipses are related to different norms (1, 2 and 3, respectively) of the inequality in (30). The corresponding colored polygons show the feasible task space accelerations due to the torque limits and subject to (30) (Color figure online)

It is proved in "Appendix III" of this paper that the inertia-weighted norm of this difference is always greater than the left hand side of (21) if W_q is set to M. So, one can conclude the inequality in (30). It implies that, by setting W_q = M, the ellipsoid in (27) represents the mapping in the task acceleration space of the function that is minimized in constrained systems according to the Gauss' principle. Note that, in the special case where the robot is fully actuated and there are no constraint forces, we will have J_qc = J_q = M^-1. Therefore, in this case, setting W_q = M will be equivalent to setting W_τ = M^-1 according to (22). The dynamic manipulability ellipsoid for this special case (with the above mentioned setting for the weighting matrix) will be the same as the generalized inertia ellipsoid which is introduced in Asada (1983). Figure 2 repeats the graphs in Fig. 1, including new colored ellipses and areas. The blue, yellow and red ellipses show dynamic manipulability ellipses which are calculated using (27), where the joint weighting matrix W_q is set to M, (1/4)M and (1/9)M, respectively. Note that the factor of M in W_q actually determines the norm of the inequality in (30). Obviously, this norm is 1, 2 and 3 for the blue, yellow and red ellipses, respectively. The colored polygons in the plots represent the corresponding exact feasible areas, which are the results of mapping the joint accelerations in (30) to the task acceleration space given the torque saturation limits. These areas are obtained by evaluating (11) numerically subject to the inequality in the left hand side of (30) and also the torque limits.
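The bounded-joint-accelerations choice, including the SVD-based reduction used when J_q is rank deficient, can be sketched as follows in Python (the matrices are illustrative; this is only a plausible reading of (22)-(25), not the authors' code):

import numpy as np

def accel_bounded_weighting(J_q, W_q, tol=1e-10):
    """Second choice of weighting matrix: W_tau = J_q^T W_q J_q.
    If J_q is rank deficient, fall back to a reduced weighting built from the
    full-column-rank factor of its singular value decomposition."""
    W_tau = J_q.T @ W_q @ J_q
    U, s, Vt = np.linalg.svd(J_q)
    rank = int(np.sum(s > tol))
    if rank == J_q.shape[1]:
        return W_tau, None
    J_qc = U[:, :rank] * s[:rank]          # full column rank factor
    J_qr = Vt[:rank, :]                    # full row rank factor (J_q = J_qc @ J_qr)
    W_rq = J_qc.T @ W_q @ J_qc             # reduced, positive definite weighting
    return W_rq, J_qr

# Illustrative example with a rank-deficient mapping from torques to accelerations.
J_q = np.array([[1.0, 2.0, 3.0],
                [2.0, 4.0, 6.0],
                [0.5, 1.0, 1.5]])
W_q = np.eye(3)                            # e.g. W_q = M for the Gauss-principle case
W, J_qr = accel_bounded_weighting(J_q, W_q)
print(W.shape, None if J_qr is None else J_qr.shape)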
The reason is that in these two plots, there are relatively large gaps between the feasible areas due to the torque limits only (i.e. gray polygons) and the dynamic manipulability ellipse with bounded joint torques (i.e. black ellipse) which directly affects the estimation of the colored areas. This is inevitable in some configurations for robots with under-actuation and/or kinematic constraints due to the rank deficiency of J q . As it can be seen in Fig. 2, the colored ellipses for each configuration have the same shape but different sizes. The shapes are the same since they are mapping the same equation (30), and the sizes are different since the values of the norm in this equation are different. The axis of the larger radius of the colored ellipses shows the direction in the task acceleration space in which lower inertia-weighted norm of (q −q u ) is achievable. Hence, it is ideal to have the larger radii of both black and colored ellipses in a same direction to provide larger intersection area between them. In that case, larger part of the feasible area (i.e. the gray area which is estimated by black ellipse) would be covered by the colored areas implying that more points in the operational acceleration space will be achievable by lower inertia-weighted norm of (q −q u ). In other words, although it is beneficial to have larger ellipsoids of both types (i.e. bounded joint torques and bounded joint accelerations with W q = M), it is also desirable to have both ellipsoids in a same direction to maximize the intersection area between them. Manipulability metrics We define the manipulability matrix as the matrix that determines the size and shape of the manipulability ellipsoid. Thus, if we write both manipulability ellipsoid inequalities in (17) and (27) as then A will be the manipulability matrix which is A = J p W −1 τ J T p for (17) and A = JJ q c W −1 r q J T q c J T for (27). As mentioned earlier in Sect. 1, the square root of the determinant of the manipulability matrix (i.e. w = √ det(A)) is defined as a manipulability metric in most of the studies in the literature (Lee 1997;Vahrenkamp et al. 2012;Yoshikawa 1985bYoshikawa , 1991. This metric represents the volume of the manipulability ellipsoid and shows the ability to accelerate the point of interest in all directions in general. Most of the times, we want to measure the ability to accelerate the robot in a certain direction. To this aim, some studies (Chiu 1987;Koeppe and Yoshikawa 1997;Lee and Lee 1988; Lee 1989) proposed the length of the manipulability ellipsoid in the desired direction as a suitable metric. This length is actually the distance between the center point and the intersection of the desired direction and surface of the ellipsoid. As an example for a 2D case, this length is shown by d in Fig. 4, where the desired direction is denoted by u. To calculate d, since the intersection point is on the surface of the ellipsoid, we replace (p −p vg ) with d u |u| in the equality form of (31). Therefore, which implies that Another useful measure would be the orthogonal projection of the ellipsoid in the desired direction which is shown by s in Fig. 4 for an example of a 2D case. This projection indicates the maximum acceleration of the point of interest in the direction u, though achieving that acceleration may result in some accelerations in other directions, as well. To calculate s, we use the method and equations which are described in Pope (2008). 
To do so, we first rewrite the ellipsoid inequality in (31) to conform with the form that is mentioned in Pope (2008). Since A is a symmetric matrix, its eigendecomposition results in A = QΛQ^T, where Q is an orthogonal matrix and Λ is a diagonal matrix of the eigenvalues of A. Note that A^-1 = QΛ^-1 Q^T and A^-1/2 = QΛ^-1/2 Q^T. So, we can rewrite (31) as ‖A^-1/2 (p̈ − p̈_vg)‖ ≤ 1 (34). According to Pope (2008), for an ellipsoid with the form of (34), s can be calculated via s = √(u^T A u) / |u| (35). For the details of the calculations readers are referred to Pope (2008). Applications of manipulability metrics In this section, we explain the application of the manipulability metrics through two examples. In these examples, we (i) compare different robot configurations (in Sect. 5.1), and (ii) find an optimal configuration (in Sect. 5.2) for a robot to accelerate its end-effector in desired directions. To this aim, the proper metric is the length of the manipulability ellipsoid, which is d in (33). The robot is assumed to be a three-degrees-of-freedom RRR planar robot. Each link of this robot has unit mass and unit length with its CoM at the middle point. Example I: Comparing robot configurations In this example, we consider six different configurations of the planar robot and plot the bounded joint torques ellipses using (17), and the bounded joint accelerations ellipses using (27), for the end-effector of that robot. These ellipses are shown in Fig. 5 by black and gray colors, respectively. For the bounded joint torques ellipses we assume that the torque limits are the same for all joints (i.e. τ_max = 0.5), and for the bounded joint accelerations ellipses we set W_q = M to conform to the Gauss' principle of least constraints. We also calculate the lengths of the ellipses for three desired directions using (33). The desired directions are (i) the horizontal, (ii) 45° to the horizontal, and (iii) the vertical, which are shown by vectors in the plots in Fig. 5. The values of these lengths are reported in Tables 1 and 2 under the columns d_1 for the bounded joint torques ellipses and d_2 for the bounded joint accelerations ellipses. In Tables 1 and 2, ‖τ‖ and ‖τ‖_M^-1 = (τ^T M^-1 τ)^(1/2) are respectively the norms and the inverse inertia-weighted norms of the minimum joint torques which are required to accelerate the robot's end-effector by one unit in the desired directions at each configuration. The minimum joint torques are calculated by using (15), assuming that τ_0 = 0 and also p̈_vg = 0 (i.e. velocity and gravity are set to zero). Note that, for these calculations, J_p^# needs to be computed via (16), which depends on the weighting matrix W_τ. In order to be able to compare the norms of the minimum joint torques with the relevant manipulability metrics (i.e. d_1 and d_2), W_τ in (16) is assumed to be the identity for the torques in Table 1, and M^-1 for the torques in Table 2. It is worth mentioning that these two settings for W_τ are the most common ones in operational space control frameworks (Peters and Schaal 2008). As can be seen in both Tables 1 and 2, wherever the norm or the weighted norm of the joint torques is higher, the corresponding manipulability metric is lower, and vice versa. In other words, the norms or weighted norms of the torques are inversely related to the corresponding manipulability metrics d_1 or d_2. It implies that maximizing the manipulability metrics is the dual problem of minimizing the (weighted) norm of the joint torques.
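The three metrics discussed above — the volume measure w, the directional length d of (33), and the orthogonal projection s of (35) — can be computed directly from a manipulability matrix A. A small self-contained Python sketch (with an arbitrary 2 × 2 A and direction u, chosen only for illustration) is:

import numpy as np

def volume_metric(A):
    """w = sqrt(det(A)): proportional to the volume of the manipulability ellipsoid."""
    return np.sqrt(np.linalg.det(A))

def directional_length(A, u):
    """d: distance from the ellipsoid center to its surface along direction u."""
    u = np.asarray(u, dtype=float)
    return np.linalg.norm(u) / np.sqrt(u @ np.linalg.solve(A, u))

def orthogonal_projection(A, u):
    """s: half-width of the orthogonal projection of the ellipsoid onto direction u."""
    u = np.asarray(u, dtype=float)
    return np.sqrt(u @ A @ u) / np.linalg.norm(u)

# Illustrative manipulability matrix and desired direction.
A = np.array([[4.0, 1.0],
              [1.0, 2.0]])
u = np.array([1.0, 1.0])
print(volume_metric(A), directional_length(A, u), orthogonal_projection(A, u))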
Therefore, one can optimize the relevant dynamic manipulability metric in order to maximize the robot's performance or efficiency in a certain task. This will be described in the next example. Another advantage of using the dynamic manipulability analysis is that it provides a graphical representation of the mapping from the joint torques to the operational acceleration space, which can help in better understanding the problem, especially if it is a planar one. For example, comparing the plots in each row of Fig. 5, one can conclude that the left-hand-side ones refer to better (more efficient) configurations for accelerating the robot's end-effector in the desired directions. This is because both the black and gray ellipses in the left column plots (odd numbers) are extended in the same direction as the desired ones, whereas in the right column plots (even numbers) at least one of the ellipses is not extended in the desired direction. This conclusion agrees with the values mentioned in the diagonal components of Tables 1 and 2, since the norm or weighted norm of the joint torques is lower in the odd-numbered plots compared to the corresponding even-numbered ones. Example II: Optimizing the robot configuration In the second example, we find optimal configurations for the robot in order to minimize the norm and the inverse inertia-weighted norm of the joint torques. The task is to accelerate the robot's end-effector in the direction of 60° to the horizontal while the position of the end-effector is at p = (0.5, 1.5). This is a typical redundancy resolution problem in operational space control.

Fig. 6 Two optimal configurations for a planar RRR robot (right column) and the corresponding dynamic manipulability ellipses (left column). The black and gray ellipses are the bounded joint torques and bounded joint accelerations ellipses, respectively. For the former, the torque saturation limits at the joints are assumed to be the same; for the latter, W_q = M to conform to the Gauss' principle

Figure 6 shows the bounded joint torques ellipses (black) and the bounded joint accelerations ellipses (gray, conforming to the Gauss' principle) for the robot in two optimal configurations. These configurations, which are shown in the right column of Fig. 6, are the outcomes of an optimization algorithm. This algorithm maximizes the length of the black and gray ellipses in the desired direction for the bottom and top plots, respectively. The desired direction is shown by vectors in the plots. The optimization problem has the form of maximizing d over the joint angles q, subject to q_l ≤ q ≤ q_u, where q_l and q_u are the lower and upper limits of the joints. Note that d is calculated using (33) and depends on q via the A matrix. According to Fig. 6, depending on the objective function, which is maximizing the length of either the black or the gray ellipse in the desired direction, the optimal configuration of the robot is different. The values of the optimal lengths of the black and gray ellipses are reported in Table 3 under the columns d_1 and d_2, respectively. The norm and the inverse inertia-weighted norm of the required joint torques in the optimal configurations are also reported in the table (optimal values are given in bold). As can be seen in this table, the norm of the joint torques is lower in the bottom plot compared to the top one, whereas the inverse inertia-weighted norm of the joint torques in the top plot is lower compared to the bottom one. This agrees with the values of d_1 and d_2, which are the corresponding metrics.
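A rough Python sketch of the kind of configuration optimization used in Example II follows. It maximizes the directional length d for a 3R planar arm; for brevity the torque-to-acceleration mapping is replaced by the kinematic Jacobian with an identity weighting, and the end-effector position constraint is handled with a quadratic penalty, so this is an illustrative stand-in rather than the authors' algorithm.

import numpy as np
from scipy.optimize import minimize

L = np.array([1.0, 1.0, 1.0])                      # unit link lengths

def fk(q):
    """End-effector position of a 3R planar arm."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def jacobian(q):
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(angles[i:]))
    return J

def directional_length(q, u, W_tau):
    """Length of the manipulability ellipse along u, with J used as a stand-in for J_p."""
    J = jacobian(q)
    A = J @ np.linalg.inv(W_tau) @ J.T
    return np.linalg.norm(u) / np.sqrt(u @ np.linalg.solve(A, u))

u = np.array([np.cos(np.pi / 3), np.sin(np.pi / 3)])    # 60 degrees to the horizontal
p_des = np.array([0.5, 1.5])
W_tau = np.eye(3)                                       # placeholder weighting matrix

def objective(q):
    # Maximize d while softly enforcing the end-effector position constraint.
    return -directional_length(q, u, W_tau) + 100.0 * np.sum((fk(q) - p_des) ** 2)

res = minimize(objective, x0=np.array([0.3, 0.8, 0.5]),
               bounds=[(-np.pi, np.pi)] * 3)
print(res.x, fk(res.x))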
Note that the inverse inertia-weighted norm of the joint torques represents the inertia-weighted norm of (q̈ − q̈_u) for this robot. It implies that in the top plot the inertia-weighted norm of the joint accelerations is lower although the norm of the joint torques is higher. Therefore, by using the dynamic manipulability analysis, we can optimize a robot's configuration in terms of torque and/or acceleration efficiency. It is worth mentioning that, in this particular example, even the norm of the joint accelerations is lower in the top plot compared to the bottom one. The values of the joint accelerations required to accelerate the end-effector in the desired direction are q̈_bottom = (0.53, −1.14, 1.99)^T for the bottom plot and q̈_top = (0.12, 0.23, −1.23)^T for the top one. So, the norms of the joint accelerations are 2.36 and 1.26, respectively. Conclusion We revisited the concept of dynamic manipulability analysis for robots and derived the corresponding equations for floating base robots with multiple contacts with the environment. The outcomes of this analysis are a manipulability ellipsoid, which depends on a weighting matrix, and different manipulability metrics, which are extracted from the ellipsoid. We described the importance of the weighting matrix which is included in the equations and claimed that, by using a proper weighting matrix, dynamic manipulability can be a useful tool to study, analyse and measure the physical abilities of robots in different tasks. We suggested two physically meaningful options for the weighting matrix and explained their applications in comparing different robot configurations and finding an optimal one using two illustrative examples. The dynamic manipulability analysis can be performed for any point of interest of a robot according to the desired task. Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
7,779.8
0001-01-01T00:00:00.000
[ "Computer Science" ]
Measurement uncertainty relations for position and momentum: Relative entropy formulation Heisenberg's uncertainty principle has recently led to general measurement uncertainty relations for quantum systems: incompatible observables can be measured jointly or in sequence only with some unavoidable approximation, which can be quantified in various ways. The relative entropy is the natural theoretical quantifier of the information loss when a `true' probability distribution is replaced by an approximating one. In this paper, we provide a lower bound for the amount of information that is lost by replacing the distributions of the sharp position and momentum observables, as they could be obtained with two separate experiments, by the marginals of any smeared joint measurement. The bound is obtained by introducing an entropic error function, and optimizing it over a suitable class of covariant approximate joint measurements. We fully work out two cases of target observables: (1) $n$-dimensional position and momentum vectors; (2) two components of position and momentum along different directions. In (1), we connect the quantum bound to the dimension $n$; in (2), going from parallel to orthogonal directions, we show the transition from highly incompatible observables to compatible ones. For simplicity, we develop the theory only for Gaussian states and measurements. Introduction Uncertainty relations for position and momentum [40] have always been deeply related to the foundations of Quantum Mechanics. For several decades, their axiomatization has been of 'preparation' type: an inviolable lower bound for the widths of the position and momentum distributions, holding in any quantum state. Such kinds of uncertainty relations, which are now known as preparation uncertainty relations (PURs), have been later extended to arbitrary sets of n ≥ 2 observables [44-46,59]. All PURs trace back to the celebrated Robertson's formulation [58] of Heisenberg's uncertainty principle: for any two observables, represented by self-adjoint operators A and B, the product of the variances of A and B is bounded from below by the expectation value of their commutator; in formulae, Var_ρ(A) Var_ρ(B) ≥ (1/4)|Tr{ρ[A, B]}|^2, where Var_ρ is the variance of an observable measured in any system state ρ. In the case of position Q and momentum P, this inequality gives Heisenberg's relation Var_ρ(Q) Var_ρ(P) ≥ ℏ^2/4. About 30 years after Heisenberg and Robertson's formulation, Hirschman attempted a first statement of position and momentum uncertainties in terms of informational quantities. This led him to a formulation of PURs based on Shannon entropy [41]; his bound was later refined [12,14], and extended to discrete observables [50]. Also other entropic quantities have been used [35]. We refer to [31,63] for an extensive review on entropic PURs. However, Heisenberg's original intent [40] was more focused on the unavoidable disturbance that a measurement of position produces on a subsequent measurement of momentum [21,25,26,53-56,65]. Trying to give a better understanding of his idea, new formulations were more recently introduced, based on a 'measurement' interpretation of uncertainty, rather than giving bounds on the probability distributions of the target observables.
Indeed, with the modern development of the quantum theory of measurement and the introduction of positive operator valued measures and instruments [1,20,23,34,39,44], it became possible to deal with approximate measurements of incompatible observables and to formulate measurement uncertainty relations (MURs) for position and momentum, as well as for more general observables. The MURs quantify the degree of approximation (or inaccuracy and disturbance) made by replacing the original incompatible observables with a joint approximate measurement of them. A very rich literature on this topic has flourished in the last 20 years, and various kinds of MURs have been proposed, based on distances between probability distributions, noise quantifications, conditional entropy, etc. [19, 21, 22, 24-26, 31, 32, 38, 53-56, 65, 66]. In this paper, we develop a new information-theoretical formulation of MURs for position and momentum, using the notion of the relative entropy (or Kullback-Leibler divergence) of two probabilities. The relative entropy S(p‖q) is an informational quantity which is precisely tailored to quantify the amount of information that is lost by using an approximating probability q in place of the target one p. Although classical and quantum relative entropies have already been used in the evaluation of the performances of quantum measurements [1, 6-11, 18, 19, 32, 51], their first application to MURs is very recent [2]. In [2], only MURs for discrete observables were considered. The present work is a first attempt to extend that information-theoretical approach to the continuous setting. This extension is not trivial and reveals peculiar problems that are not present in the discrete case. However, the nice properties of the relative entropy, such as its scale invariance, allow for a satisfactory formulation of the entropic MURs also for position and momentum. We deal with position and momentum in two possible scenarios. Firstly, we consider the case of n-dimensional position and momentum, since it allows us to treat either scalar particles, or vector ones, or even multi-particle systems. This is the natural level of generality, and our treatment extends to it without difficulty. Then, we consider a couple made up of one position and one momentum component along two different directions of the n-space. In this case, we can see how our theory behaves when one moves with continuity from a highly incompatible case (parallel components) to a compatible case (orthogonal ones). The continuous case needs much care when dealing with arbitrary quantum states and approximating observables. Indeed, it is difficult to evaluate or even bound the relative entropy if some assumption is not made on the probability distributions. In order to overcome these technicalities and focus on the quantum content of MURs, in this paper we consider only the case of Gaussian preparation states and Gaussian measurement apparatuses [16,36,45,46,49,59,62]. Moreover, we identify the class of the approximate joint measurements with the class of the joint POVMs satisfying the same symmetry properties as their target position and momentum observables [20,44]. We are supported in this assumption by the fact that, in the discrete case [2], symmetry covariant measurements turn out to be the best approximations without any hypothesis (see also [24-26, 65, 66] for a similar appearance of covariance within MURs for different uncertainty measures). We now sketch the main results of the paper.
In the vector case, we consider approximate joint measurements M of the position Q ≡ (Q_1, . . . , Q_n) and the momentum P ≡ (P_1, . . . , P_n). We find the following entropic MUR (Theorem 21, Remark 14): for every choice of two positive thresholds ε_1, ε_2, with ε_1 ε_2 ≥ ℏ^2/4, there exists a Gaussian state ρ with position variance matrix A_ρ ≥ ε_1 𝟙 and momentum variance matrix B_ρ ≥ ε_2 𝟙 such that S(Q_ρ ‖ M_1,ρ) + S(P_ρ ‖ M_2,ρ) ≥ n (log e) ln(1 + ℏ/(2√(ε_1 ε_2))) (1) for all Gaussian approximate joint measurements M of Q and P. Here Q_ρ and P_ρ are the distributions of position and momentum in the state ρ, and M_ρ is the distribution of M in the state ρ, with marginals M_1,ρ and M_2,ρ; the two marginals turn out to be noisy versions of Q_ρ and P_ρ. The lower bound is strictly positive and it grows linearly with the dimension n. The thresholds ε_1 and ε_2 are peculiar to the continuous case and they have a classical explanation: the relative entropy S(p‖q) → +∞ if the variance of p vanishes faster than the variance of q, so that, given M, it is trivial to find a state ρ enjoying (1) if arbitrarily small variances are allowed. What is relevant in our result is that the total loss of information S(Q_ρ ‖ M_1,ρ) + S(P_ρ ‖ M_2,ρ) exceeds the lower bound even if we forbid target distributions with small variances. The MUR (1) shows that there is no Gaussian joint measurement which can approximate arbitrarily well both Q and P. The lower bound (1) is a consequence of the incompatibility between Q and P and, indeed, it vanishes in the classical limit ℏ → 0. Both the relative entropies and the lower bound in (1) are scale invariant. Moreover, for fixed ε_1 and ε_2, we prove the existence and uniqueness of an optimal approximate joint measurement, and we fully characterize it. In the scalar case, we consider approximate joint measurements M of the position Q_u = u · Q along the direction u and the momentum P_v = v · P along the direction v, where u · v = cos α. We find two different entropic MURs. The first entropic MUR in the scalar case is similar to the vector case (Theorem 17, Remark 11). The second one is (Theorem 15): S(Q_u,ρ ‖ M_1,ρ) + S(P_v,ρ ‖ M_2,ρ) ≥ c_ρ(α), with c_ρ(α) = (log e) ln(1 + ℏ |cos α| / (2√(Var(Q_u,ρ) Var(P_v,ρ)))), for all Gaussian states ρ and all Gaussian joint approximate measurements M of Q_u and P_v. This lower bound holds for every Gaussian state ρ without constraints on the position and momentum variances Var(Q_u,ρ) and Var(P_v,ρ); it is strictly positive unless u and v are orthogonal, but it is state dependent. Again, the relative entropies and the lower bound are scale invariant. The paper is organized as follows. In Section 2, we introduce our target position and momentum observables, we discuss their general properties and define some related quantities (spectral measures, mean vectors and variance matrices, PURs for second order quantum moments, Weyl operators, Gaussian states). Section 3 is devoted to the definitions and main properties of the relative and differential (Shannon) entropies. Section 4 is a review of the entropic PURs in the continuous case [12,14,41], with a particular focus on their lack of scale invariance. This is a flaw due to the very definition of differential entropy, and one of the reasons that led us to introduce relative entropy based MURs. In Section 5 we construct the covariant observables which will be used as approximate joint measurements of the position and momentum target observables. Finally, in Section 6 the main results on MURs that we sketched above are presented in detail.
Some conclusions are discussed in Section 7. Target observables and states Let us start with the usual position and momentum operators Q ≡ (Q_1, . . . , Q_n) and P ≡ (P_1, . . . , P_n), which satisfy the canonical commutation rules [Q_i, P_j] = iℏ δ_ij, [Q_i, Q_j] = [P_i, P_j] = 0. Each of the vector operators has n components; it could be the case of a single particle in one or more dimensions (n = 1, 2, 3), or several scalar or vector particles, or the quadratures of n modes of the electromagnetic field. We assume the Hilbert space H to be irreducible for the algebra generated by the canonical operators Q and P. An observable of the quantum system H is identified with a positive operator valued measure (POVM); in the paper, we shall consider observables with outcomes in R^k endowed with its Borel σ-algebra B(R^k). The use of POVMs to represent observables in quantum theory is standard and the definition can be found in many textbooks [20,23,34,37]; the alternative name "non-orthogonal resolutions of the identity" is also used [44-46]. Following [20,23,38,46], a sharp observable is an observable represented by a projection valued measure (pvm); it is standard to identify a sharp observable on the outcome space R^k with the k self-adjoint operators corresponding to it by the spectral theorem. Two observables are jointly measurable or compatible if there exists a POVM having them as marginals. Because of the non-vanishing commutators, each couple Q_i, P_i, as well as the vectors Q, P, are not jointly measurable. We denote by T(H) the trace class operators on H, by S ⊂ T(H) the subset of the statistical operators (or states, preparations), and by L(H) the space of the linear bounded operators. Position and momentum Our target observables will be either n-dimensional position and momentum (vector case) or position and momentum along two different directions of R^n (scalar case). The second case allows to give an example ranging with continuity from maximally incompatible observables to compatible ones. Vector observables As target observables we take Q and P as in (3), and we denote by Q(A), P(B), A, B ∈ B(R^n), their pvm's. Then, the distributions in the state ρ ∈ S of a sharp position and a sharp momentum measurement (denoted by Q_ρ and P_ρ) are absolutely continuous with respect to the Lebesgue measure; we denote by f(•|ρ) and g(•|ρ) their probability densities, so that Q_ρ(A) = ∫_A f(x|ρ) dx and P_ρ(B) = ∫_B g(p|ρ) dp, ∀A, B ∈ B(R^n). In the Dirac notation, if |x⟩ and |p⟩ are the improper position and momentum eigenvectors, these densities take the expressions f(x|ρ) = ⟨x|ρ|x⟩ and g(p|ρ) = ⟨p|ρ|p⟩, respectively. The mean vectors and the variance matrices of these distributions will be given in (7) and (8). Scalar observables As target observables we take the position along a given direction u and the momentum along another given direction v: Q_u = u · Q and P_v = v · P. In this case we have [Q_u, P_v] = iℏ cos α, so that Q_u and P_v are not jointly measurable, unless the directions u and v are orthogonal. Their pvm's are denoted by Q_u and P_v, their distributions in a state ρ by Q_u,ρ and P_v,ρ, and their corresponding probability densities by f_u(•|ρ) and g_v(•|ρ), so that Q_u,ρ(A) = ∫_A f_u(x|ρ) dx and P_v,ρ(B) = ∫_B g_v(p|ρ) dp, ∀A, B ∈ B(R). Of course, the densities in the scalar case are marginals of the densities in the vector case. Means and variances will be given in (11). Quantum moments.
Let S 2 be the set of states for which the second moments of position and momentum are finite: Then, the mean vector and the variance matrix of the position Q in the state ρ ∈ S 2 are while for the momentum P we have For ρ ∈ S 2 it is possible to introduce also the mixed 'quantum covariances' Since there is no joint measurement for the position Q and momentum P , the quantum covariances C ρ ij are not covariances of a joint distribution, and thus they do not have a classical probabilistic interpretation. By means of the moments above, we construct the three real n × n matrices A ρ , B ρ , C ρ , the 2ndimensional vector µ ρ and the symmetric 2n × 2n matrix V ρ , with We say V ρ is the quantum variance matrix of position and momentum in the state ρ. In [59] dimensionless canonical operators are considered, but apart from this, our matrix V ρ corresponds to their "noise matrix in real form"; the name "variance matrix" is also used [49,60]. In a similar way, we can introduce all the moments related to the position Q u and momentum P v introduced in (6). For ρ ∈ S 2 , the means and variances are respectively Similarly to (9), we have also the 'quantum covariance' u · C ρ v ≡ v · (C ρ ) T u. Then, we collect the two means in a single vector and we introduce the variance matrix: be a real symmetric 2n × 2n block matrix with the same dimensions of a quantum variance matrix. Define In this case we have: V ≥ 0, A > 0, B > 0, and The inequalities (14) for V ± tell us exactly when a (positive semi-definite) real matrix V is the quantum variance matrix of position and momentum in a state ρ. Moreover, they are the multidimensional version of the usual uncertainty principle expressed through the variances [44,46,59], hence they represent a form of PURs. The block matrix Ω in the definition of V ± is useful to compress formulae involving position and momentum; moreover, it makes simpler to compare our equations with their frequent dimensionless versions (with = 1) in the literature [36,49]. By using the real block vector αu ′ βv ′ , with arbitrary α, β ∈ R and given u ′ , v ′ ∈ R n , the semipositivity (14) implies which in turn implies A ≥ 0, B ≥ 0 and (15). Then, by choosing u ′ = v ′ = u i , where u 1 , . . . , u n are the eigenvectors of A (since A is a real symmetric matrix, u i ∈ R n for all i), one gets the strict positivity of all the eigenvalues of A; analogously, one gets B > 0. Inequality (15) for u ′ = u and v ′ = v becomes the uncertainty ruleà la Robertson [58] for the observables in (6) (a position component and a momentum component spanning an arbitrary angle α): Inequality (16) is equivalent to Since V ± are block matrices, their positive semi-definiteness can be studied by means of the Schur complements [27,47,57]. However, as V ± are complex block matrices with a very peculiar structure, special results hold for them. Before summarizing the properties of V ± in the next proposition, we need a simple auxiliary algebraic lemma. In this case we have Moreover, we have also the following properties for the various determinants: By interchanging A with B and C with C T in (18)-(22) equivalent results are obtained. Proof. Since we already know that V + ≥ 0 implies the invertibility of A, the equivalence between (14) and (18) with A > 0 follows from [47, Theor. 1.12 p. 34] (see also [57,Theor. 11.6] or [27,Lemma 3.2]). In (19), the first inequality follows by summing up the two inequalities in (18). The last two ones are immediate by the positivity of A −1 . 
The equality in (20) is Schur's formula for the determinant of block matrices [47, Theor. 1.1 p. 19]. Then, the first inequality is immediate by the lemma above and the trivial relation B ≥ B − C T A −1 C; the second one follows from (19): The equality det V = 2 2n is equivalent to det B − C T A −1 C = det 2 4 A −1 ; since the latter two determinants are evaluated on ordered positive matrices by (19), they coincide if and only if the respective arguments are equal (Lemma 2); this shows the equivalence in (21). Then, by (18), the self-adjoint matrix i 2 A −1 C − C T A −1 is both positive semi-definite and negative semi-definite; hence it is null, that is, (19), Lemma 2 then implies C T A −1 C = 0 and so C = 0. By (18) and (19), every time three matrices A, B, C define the quantum variance matrix of a state ρ, the same holds for A, B, C = 0. This fact can be used to characterize when two positive matrices A and B are the diagonal blocks of some quantum variance matrix, or two positive numbers c Q and c P are the position and momentum variances of a quantum state along the two directions u and v. A −1 . Two real numbers c Q > 0 and c P > 0, having the dimension of the square of a length and momentum, respectively, are such that c Q = Var(Q u,ρ ) and c P = Var(P v,ρ ) for some state ρ if and only if Proof. For A and B, the necessity follows from (19). The sufficiency comes from (18) by choosing For c Q and c P , the necessity follows from (15). The sufficiency comes from (18) with V ρ = A 0 0 B and for example the following choices of A and B: -if cos α = ±1, we take A = c Q ½ and B = c P ½; where A ′ and B ′ are any two scalar multiples of the orthogonal projection onto {u, v} ⊥ satisfying where A ′ and B ′ are as in the previous item. In the last two cases, we chose A and B in such a way that B = cQ cP (cos α) 2 A −1 when restricted to the linear span of {u, v}. Weyl operators and Gaussian states In the following, we shall introduce Gaussian states, Gaussian observables and covariant observables on the phase-space. In all these instances, the Weyl operators are involved; here we recall their definition and some properties (see e.g. [45,Sect. 5.2] or [46,Sect. 12.2], where, however, the definition differs from ours in that the Weyl operators are composed with the map Ω −1 of (13)). Definition 1. The Weyl operators are the unitary operators defined by The Weyl operators (23) satisfy the composition rule in particular, this implies the commutation relation These commutation relations imply the translation property due to this property, the Weyl operators are also known as displacement operators. With a slight abuse of notation, we shall sometimes use the identification where x p is a block column vector belonging to the phase-space R n × R n ≡ R 2n ; here, the first block x is a position and the second block p is a momentum. By means of the Weyl operators, it is possible to define the characteristic function of any trace-class operator. Definition 2. For any operator ρ ∈ T(H), its characteristic function is the complex valued function ρ : Note that k is the inverse of a length and l is the inverse of a momentum, so that w is a block vector living in the space R 2n ≡ R n × R n regarded as the dual of the phase-space. Instead of the characteristic function, sometimes the so called Weyl transform Tr {W (x, p)ρ} is introduced [45,49]. By [45,Prop. 5.3.2, Theor. 
5.3.3], we have ρ(w) ∈ L 2 (R 2n ) and the following trace formula holds: ∀ρ, σ ∈ T(H), Moreover, the following inversion formula ensures that the characteristic function ρ completely characterizes the state ρ [45, Coroll. 5.3.5]: The last two integrals are defined in the weak operator topology. Finally, for ρ ∈ S 2 , the moments (7)-(10) can be expressed as in [45,Sect. 5.4]: for a vector µ ρ ∈ R 2n and a real 2n × 2n matrix V ρ such that V ρ + ≥ 0. The condition V ρ + ≥ 0 is necessary and sufficient in order that the function (31) defines the characteristic function of a quantum state [45, Theor. 5.5.1], [46,Theor. 12.17]. Therefore, Gaussian states are exactly the states whose characteristic function is the exponential of a second order polynomial [45, Eq. We shall denote by G the set of the Gaussian states; we have G ⊂ S 2 ⊂ S. By (30), the vectors a ρ , b ρ G and the matrices A ρ , B ρ , C ρ characterizing a Gaussian state ρ are just its first and second order quantum moments introduced in (7)-(9). By (31), the corresponding distributions of position and momentum are Gaussian, namely 2n if and only if ρ is pure. , then the equivalence (22) gives B ρ = 2 4 (A ρ ) −1 , so that the variance matrices A ρ and B ρ have a common eigenbasis u 1 , . . . , u n . Thus, all the corresponding couples of position Q ui and momentum P ui have minimum uncertainties: Var(Q ui ) Var(P ui ) = 2 4 . Therefore, if we consider the factorization of the Hilbert space H = H 1 ⊗ · · · ⊗ H n corresponding to the basis u 1 , . . . , u n , all the partial traces of the state ρ on each factor H i are minimum uncertainty states. Since for n = 1 the minimum uncertainty states are pure and Gaussian, the state ρ is a pure product Gaussian state. The converse is immediate. Relative and differential entropies In this paper, we will be concerned with entropic quantities of classical type [17,33,61]. We express them in 'bits', that is we use the base-2 logarithms: log a ≡ log 2 a. We deal only with probabilities on the measurable space R n , B(R n ) which admit densities with respect to the Lebesgue measure. So, we define the relative entropy and differential entropy only for such probabilities; moreover, we list only the general properties used in the following. Relative entropy or Kullback-Leibler divergence The fundamental quantity is the relative entropy, also called information divergence, discrimination information, Kullback-Leibler divergence or information or distance or discrepancy. The relative entropy of a probability p with respect to a probability q is defined for any couple of probabilities p, q on the same probability space. Given two probabilities p and q on (R n , B(R n )) with densities f and g, respectively, the relative entropy of p with respect to q is The value +∞ is allowed for S(p q); the usual convention 0 log(0/0) = 0 is understood. The relative entropy (33) is the amount of information that is lost when q is used to approximate p [17, p. 51]. Of course, if x is dimensioned, then the densities f and g have the same dimension (that is, the inverse of x), and the argument of the logarithm is dimensionless, as it must be. As S(p q) is scale invariant, it quantifies a relative error for the use of q as an approximation of p, not an absolute one. Let us employ the relative entropy to evaluate the effect of an additive Gaussian noise ν ∼ N (b; β 2 ) on an independent Gaussian random variable X. 
If X ∼ N(a; α^2), then X + ν ∼ N(a + b; α^2 + β^2), and the relative entropy of the true distribution of X with respect to its disturbed version X + ν is S(X ‖ X + ν) = (log e)/2 [ ln(1 + β^2/α^2) + (b^2 − β^2)/(α^2 + β^2) ]. This expression vanishes if the noise becomes negligible with respect to the true distribution, that is if β^2/α^2 → 0 and b^2/α^2 → 0. On the other hand, S(X ‖ X + ν) diverges if the noise becomes too strong with respect to the true distribution, or, in other words, if the true distribution becomes too peaked with respect to the noise, that is, β^2/α^2 → +∞ or b^2/α^2 → +∞. Differential entropy The differential entropy of an absolutely continuous random vector X with a probability density f is H(X) = −∫ f(x) log f(x) dx. This quantity is commonly used in the literature, even if it lacks many of the nice properties of the Shannon entropy for discrete random variables. For example, H(X) is not scale invariant, and it can be negative [33, p. 244]. Since the density f enters the logarithm argument, the definition of H(X) is meaningful only when f is dimensionless, which is the same as X being dimensionless. Note that, if X is dimensioned and c > 0 is a real parameter making X̃ = cX a dimensionless random variable, then H(X̃) = H(cX) = H(X) + n log c. In the following, we shall consider the differential entropy only for dimensionless random vectors X. Proposition 8. (i) If the absolutely continuous random vector X has variance matrix A, then H(X) ≤ (1/2) log[(2πe)^n det A]. The equality holds iff X is Gaussian with variance matrix A and arbitrary mean vector a. (ii) If X = (X_1, . . . , X_n) is an absolutely continuous random vector, then H(X) ≤ Σ_i H(X_i). The equality holds iff the components X_1, . . . , X_n are independent. Remark 1. In property (i) we have used a well-known matrix identity, which follows by diagonalization. Remark 2. Property (i) yields that the differential entropy of a Gaussian random variable X ∼ N(a; α^2) is H(X) = (1/2) log(2πe α^2), which is an increasing function of the variance α^2, and thus it is a measure of the uncertainty of X. Note that H(X) ≥ 0 iff α^2 ≥ 1/(2πe). Entropic PURs for position and momentum The idea of having an entropic formulation of the PURs for position and momentum goes back to [12,14,41]. However, we have just seen that, due to the presence of the logarithm, the Shannon differential entropy needs dimensionless probability densities. So, this leads us to introduce dimensionless versions of position and momentum. Let λ > 0 be a dimensionless parameter and κ a second parameter with the dimension of a mass times a frequency. Then, we introduce the dimensionless versions Q̃ and P̃ of position and momentum, rescaled by means of κ, ℏ and λ. We use a unique dimensional constant κ, in order to respect the rotation symmetry and not to distinguish different particles. Anyway, there is no natural link between the parameter multiplying Q and the parameter multiplying P; this is the reason for introducing λ. As we see from the commutation rules, the constant λ plays the role of a dimensionless version of ℏ; in the literature on PURs, often λ = 1 is used [12,14,31]. Vector observables Let Q̃ and P̃ be the pvm's of Q̃ and P̃; then, Q̃_ρ and P̃_ρ are their probability distributions in the state ρ. The total preparation uncertainty is quantified by the sum of the two differential entropies H(Q̃_ρ) + H(P̃_ρ). For ρ ∈ G, by Proposition 8 we get the expression (36) for this sum in terms of det A_ρ and det B_ρ. In the case of product states of minimum uncertainty, we have (det A_ρ)(det B_ρ) = (ℏ^2/4)^n; then, by taking (20) into account, we get the lower bound (37). Thus, the bound (37) arises from quantum relations between Q and P; indeed, there would be no lower bound for (36) if we could take both det A_ρ and det B_ρ arbitrarily small.
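A quick numerical check of the additive-Gaussian-noise example above (the specific numbers are arbitrary; entropies are in bits):

import numpy as np

def kl_gaussian(mean_p, var_p, mean_q, var_q):
    """Relative entropy S(p || q) in bits for two one-dimensional Gaussians."""
    nats = 0.5 * (np.log(var_q / var_p) + (var_p + (mean_p - mean_q) ** 2) / var_q - 1.0)
    return nats / np.log(2.0)

# Additive noise example: X ~ N(a, alpha^2), nu ~ N(b, beta^2), X + nu ~ N(a+b, alpha^2+beta^2).
a, alpha2 = 0.0, 1.0
b, beta2 = 0.3, 0.5
print(kl_gaussian(a, alpha2, a + b, alpha2 + beta2))
# The loss of information vanishes as the noise becomes negligible...
print(kl_gaussian(a, alpha2, a + 1e-4, alpha2 + 1e-6))
# ...and diverges as the noise dominates the true distribution.
print(kl_gaussian(a, alpha2, a + b, alpha2 + 1e6))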
By item (ii) of Proposition 8, the differential entropy for the distribution of a random vector is smaller than the sum of the entropies of its marginals; however, the final bound (37) is a tight bound for both By the results of [12,14], the same bound (37) is obtained even if the minimization is done over all the states, not only the Gaussian ones. The uncertainty result (37) depends on λ, this being a consequence of the lack of scale invariance of the differential entropy; note that the bound is positive if and only if λ > 1/(πe). Sometimes in the literature the parameter appears in the argument of the logarithm [19,32]; this fact has to be interpreted as the appearance of a parameter with the numerical value of , but without dimensions. In this sense the formulation (37) is consistent with both the cases with λ = 1 or λ = . Sometimes the smaller bound ln 2π appears in place of log πe [50]; this is connected to a state dependent formulation of the entropic PUR [31, Sect. V.B]. Scalar observables The dimensionless versions of the scalar observables introduced in (6) are We denote by Q u,ρ and P v,ρ the associated distributions in the state ρ. For ρ ∈ S 2 , the respective means and variances are with Var( Q u,ρ ) Var( P v,ρ ) ≥ λ |cos α| /2. As in the vector case, the total preparation uncertainty is quantified by the sum of the two differential entropies H( Q u,ρ ) + H( P v,ρ ). For ρ ∈ G, Proposition 8 gives Then, we have the lower bound which depends on λ, but not on κ. Of course, because of (39), for Gaussian states a lower bound for the sum H( Q u,ρ ) + H( P v,ρ ) is equivalent to a lower bound for the product Var( Q u,ρ ) Var( P v,ρ ). By a slight generalization of the results of [12,14], the bound (40) is obtained also when the minimization is done over all the states. Let us note that the bound in (40) is positive for |λ cos α| > 1/(πe), and it goes to −∞ for α → π/2, which is the case of compatible Q u,ρ and P v,ρ . In the case α = 0, the bound (40) is the same as (37) for n = 1. Approximate joint measurements of position and momentum In order to deal with MURs for position and momentum observables, we have to introduce the class of approximate joint measurements of position and momentum, whose marginals we will compare with the respective sharp observables. As done in [21,28,44,45], it is natural to characterize such a class by requiring suitable properties of covariance under the group of space translations and velocity boosts: namely, by approximate joint measurement of position and momentum we will mean any POVM on the product space of the position and momentum outcomes sharing the same covariance properties of the two target sharp observables. As we have already discussed, two approximation problems will be of our concern: the approximation of the position and momentum vectors (vector case, with outcomes in the phase-space R n × R n ), and the approximation of one position and one momentum component along two arbitrary directions (scalar case, with oucomes in R × R). In order to treat the two cases altogether, we consider POVMs with outcomes in R m × R m ≡ R 2m , which we call bi-observables; they correspond to a measurement of m position components and m momentum components. The specific covariance requirements will be given in the Definitions 5,6,7. 
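For a single mode and a Gaussian state, the entropic bound can be verified by brute force: with H(Q̃) + H(P̃) = (1/2) log₂[(2πe)² Var(Q̃) Var(P̃)] and the Heisenberg constraint Var(Q̃) Var(P̃) ≥ λ²/4, the sum never falls below log₂(πeλ), with equality on minimum uncertainty states. The following sketch (an illustration under this reconstruction of the one-mode case, with λ = 1) samples admissible variance pairs and checks the bound.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0                                   # dimensionless version of hbar; lambda = 1 is common
bound = np.log2(np.pi * np.e * lam)         # positive iff lam > 1/(pi*e)

def entropy_sum(var_q, var_p):
    """H(Q) + H(P) in bits for Gaussian marginals with the given variances."""
    return 0.5 * np.log2(2 * np.pi * np.e * var_q) + 0.5 * np.log2(2 * np.pi * np.e * var_p)

var_q = 10.0 ** rng.uniform(-3, 3, size=100000)                     # arbitrary squeezing
var_p = lam ** 2 / (4 * var_q) * 10.0 ** rng.uniform(0, 3, size=100000)  # Var_Q Var_P >= lam^2/4
gap = entropy_sum(var_q, var_p) - bound

print("lower bound log2(pi*e*lam)            :", bound)
print("smallest H(Q)+H(P) - bound over samples:", gap.min())        # ~0, never negative
print("minimum uncertainty case Var_Q=Var_P=lam/2:", entropy_sum(lam / 2, lam / 2))
```

The smallest sampled gap stays nonnegative and approaches zero only for variance pairs saturating the uncertainty relation, which is the content of the equality discussion above.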
In studying the properties of probability measures on R k , a very useful notion is that of the characteristic function, that is, the Fourier cotransform of the measure at hand; the analogous quantity for POVMs turns out to have the same relevance. Different names have been used in the literature to refer to the characteristic function of POVMs, or, more generally, quantum instruments, such as characteristic operator or operator characteristic function [1, 3-6, 42-44, 49]. As a variant, also the symplectic Fourier transform quite often appears [46,Sect. 12.4.3]. The characteristic function has been used, for instance, to study the quantum analogues of the infinite-divisible distributions [3][4][5][6]43,44] and measurements of Gaussian type [42,46,49]. Here, we are interested only in the latter application, as our approximating bi-observables will typically be Gaussian. Since we deal with bi-observables, we limit our definition of the characteristic function only to POVMs on R m × R m , which have the same number of variables of position and momentum type. Being measures, POVMs can be used to construct integrals, whose theory is presented e.g. in [23, Sect Here, the dimensions of the vector variables k and l are the inverses of a length and momentum, respectively, as in the definition of the characteristic function of a state (27). This definition is given so that Tr M(k, l)ρ is the usual characteristic function of the probability distribution M ρ on R 2m . Covariant vector observables In terms of the pvm's (4), the translation property (25) is equivalent to the symmetry properties and they are taken as the transformation property defining the following class of POVMs on R 2n [20,23,28,49,64]. We denote by C the set of all the covariant phase-space observables. C The interpretation of covariant phase-space observables as approximate joint measurements of position and momentum is based on the fact that their marginal POVMs have the same symmetry properties of Q and P, respectively. Although Q and P are not jointly measurable, the following well-known result says that there are plenty of covariant phase-space observables [30,48], [45,Theor. 4.8.3]. In (43) below, we use the parity operator Π on H, which is such that Proposition 9. The covariant phase-space observables are in one-to-one correspondence with the states on H, so that we have the identification S ∼ C; such a correspondence σ ↔ M σ is given by The characteristic function (41) of a measurement M σ ∈ C has a very simple structure in terms of the characteristic function (27) of the corresponding state σ ∈ S. Proposition 10. The characteristic function of M σ ∈ C is given by and the characteristic function of the probability M σ ρ is In (44) we have used the identification (26). The characteristic function of a state is introduced in (27). In terms of probability densities, measuring M σ on the state ρ yields the density function h σ (x, p|ρ) = Tr{M σ (x, p)ρ}. Then, by (45), the densities of the marginals M σ 1,ρ and M σ 2 ρ are the convolutions where f and g are the sharp densities introduced in (5). By the arbitrariness of the state ρ, the marginal POVMs of M σ turn out to be the convolutions (or 'smearings') Let us remark that the distribution of the approximate position observable M σ 1 in a state ρ is the distribution of the sum of two independent random vectors: the first one is distributed as the sharp position Q in the state ρ, the second one is distributed as the sharp position Q in the state σ. 
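The convolution structure of the marginals can be illustrated numerically: the position density obtained by measuring the covariant observable built from σ on the state ρ is the convolution of the two sharp position densities, so in the Gaussian case means and variances simply add. The sketch below (arbitrary Gaussian densities, not numbers from the paper) convolves the two densities on a grid and compares the result with the expected Gaussian.

```python
import numpy as np

def gauss(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

x = np.linspace(-20.0, 20.0, 8001)          # odd number of points keeps the grids aligned
dx = x[1] - x[0]
f_rho = gauss(x, mean=1.0, var=0.5)         # sharp Q-density of the state rho (illustrative)
f_sigma = gauss(x, mean=-0.3, var=0.2)      # Q-density of the 'noise' state sigma (illustrative)

# Position marginal of the covariant observable: convolution of the two sharp densities.
h = np.convolve(f_rho, f_sigma, mode="same") * dx

expected = gauss(x, mean=1.0 - 0.3, var=0.5 + 0.2)   # means and variances add
print("max |convolution - expected Gaussian| :", np.max(np.abs(h - expected)))

mean_h = np.sum(x * h) * dx
var_h = np.sum((x - mean_h) ** 2 * h) * dx
print("mean and variance of smeared marginal :", mean_h, var_h)   # ~0.7 and ~0.7
```

The smeared marginal matches the broader Gaussian up to discretization error, which is the 'sharp distribution plus independent noise' picture described next.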
In this sense, the approximate position M σ 1 looks like a sharp position plus an independent noise given by σ. Of course, a similar fact holds for the momentum. However, this statement about the distributions can not be extended to a statement involving the observables. Indeed, since Q and P are incompatible, nobody can jointly observe M σ , Q and P, so that the convolutions (46) do not correspond to sums of random vectors that actually exist when measuring M σ . Covariant scalar observables Now we focus on the class of approximate joint measurements of the observables Q u and P v representing position and momentum along two possibly different directions u and v (see Section 2.1.2). As in the case of covariant phase-space observables, this class is defined in terms of the symmetries of its elements: we require them to transform as if they were joint measurements of Q u and P v . Recall that Q u and P v denote the spectral measures of Q u , P v . Due to the commutation relation (24), the following covariance relations hold for all A, B ∈ B(R) and x, p ∈ R n . We employ covariance to define our class of approximate joint measurements of Q u and P v . We denote by C u,v the class of such bi-observables. C u,v So, our approximate joint measurements of Q u and P v will be all the bi-observables in the class C u,v . It is useful to work with a little more generality, and merge Definitions 5 and 6 into a single notion of covariance. Thus, approximate joint observables of Q u and P v are just J-covariant observables on R 2 for the choice of the 2 × 2n matrix On the other hand, covariant phase-space observables constitute the class of ½ 2n -covariant observables on R 2n , where ½ 2n is the identity map of R 2n . Gaussian measurements When dealing with Gaussian states, the following class of bi-observables quite naturally arises. for two vectors a M , b M ∈ R m , a real 2m × 2n matrix J M and a real symmetric 2m × 2m matrix V M satisfying the condition In this definition, the vector a M has the dimension of a length, and b M of a momentum; similarly, the matrices J M , V M decompose into blocks of different dimensions. The condition (49) is necessary and sufficient in order that the function (48) defines the characteristic function of a POVM. For unbiased Gaussian measurements, i.e., Gaussian bi-observables with a M = b M = 0, the previous definition coincides with the one of [46,Section 12.4.3]. It is also a particular case of the more general definition of Gaussian observables on arbitrary (not necessarily symplectic) linear spaces that is given in [36,49]. We refer to [46,49] for the proof that Eq. (48) is actually the characteristic function of a POVM. Measuring the Gaussian observable M on the Gaussian state ρ yields the probability distribution M ρ whose characteristic function is hence the output distribution is Gaussian, Covariant Gaussian observables For Gaussian bi-observables, J-covariance has a very easy characterization. Proof. For x, p ∈ R n , we let M ′ and M ′′ be the two POVMs on R 2m given by By the commutation relations (24) for the Weyl operators, we immediately get we have also Since M(k, l) = 0 for all k, l, by comparing the last two expressions we see that M ′ = M ′′ if and only if which in turn is equivalent to J M = J. Vector observables Let us point out the structure of the Gaussian approximate joint measurements of Q and P. Proposition 12. A bi-observable M σ ∈ C is Gaussian if and only if the state σ is Gaussian. 
In this case, the covariant bi-observable M σ is Gaussian with parameters Proof. By comparing (31), (44) and (48), and using the fact that W (x 1 , p 2 ) ∝ W (x 2 , p 2 ) if and only if x 1 = x 2 and p 1 = p 2 , we have the first statement. Then, for σ ∈ G, we see immediately that M σ is a Gaussian observable with the above parameters. We call C G the class of the Gaussian covariant phase-space observables. By (50), observing M σ on C G a Gaussian state ρ ∈ G yields the normal probability distribution M σ When a σ = 0 and b σ = 0, we have an unbiased measurement. Scalar observables We now study the Gaussian approximate joint measurements of the target observables Q u and P u defined in (6). Proposition 13. A Gaussian bi-observable M with parameters where J is given by (47). In this case, the condition (49) is equivalent to Proof. The first statement follows from Proposition 11. Then, the matrix inequality (49) reads which is equivalent to (52). We write C G u,v for the class of the Gaussian (u, v)-covariant phase-space observables. An observ-C G u,v able M ∈ C G u,v is thus characterized by the couple (µ M , V M ). From (50) with J M = J given by (47), we get that measuring M ∈ C G u,v on a Gaussian state ρ yields the probability distribution M ρ = N µ ρ u,v + µ M ; V ρ u,v + V M . Its marginals with respect to the first and second entry are, respectively, Example 2. Let us construct an example of an approximate joint measurement of Q u and P v , by using a noisy measurement of position along u followed by a sharp measurement of momentum along v. Let ∆ be a positive real number yielding the precision of the position measurement, and consider the POVM M on R 2 given by The characteristic function of M is Then, M ∈ C G u,v with parameters a M = 0, b M = 0, V M = 0 and J M = J given by (47). Note that M can be regarded as the limit case of the observables of the previous example when cos α = 0 and ∆ ↓ 0. Entropic MURs for position and momentum In the case of two discrete target observables, in [2] we found an entropic bound for the precision of their approximate joint measurements, which we named entropic incompatibility degree. Its definition followed a three steps procedure. Firstly, we introduced an error function: when the system is in a given state ρ, such a function quantifies the total amount of information that is lost by approximating the target observables by means of the marginals of a bi-observable; the error function is nothing else than the sum of the two relative entropies of the respective distributions. Then, we considered the worst possible case by maximizing the error function over ρ, thus obtaining an entropic divergence quantifying the approximation error in a state independent way. Finally, we got our index of the incompatibility of the two target observables by minimizing the entropic divergence over all bi-observables. In particular, when symmetries are present, we showed that the minimum is attained at some covariant bi-observables. So, the covariance followed as a byproduct of the optimization procedure, and was not a priori imposed upon the class of approximating bi-observables. As we shall see, the extension of the previous procedure to position and momentum target observables is not straightforward, and peculiar problems of the continuous case arise. In order to overcome them, in this paper we shall fully analyse only a case in which explicit computations can be done: Gaussian preparations, and Gaussian bi-observables, which we a priori assume to be covariant. 
We conjecture that the final result should be independent of these simplifications, as we shall discuss in Section 7. As we said in Section 5, by "approximate joint measurement" we mean "a bi-observable with the 'right' covariance properties". Scalar observables Given the directions u and v, the target observables are Q u and P v in (6) with pvm's Q u and P v . For ρ ∈ G with parameters (µ ρ , V ρ ) given in (12), the target distributions Q u,ρ and P v,ρ are normal with means and variances (11). An approximate joint measurements of Q u and P v is given by a covariant bi-observable M ∈ C u,v ; then, we denote its marginals with respect to the first and second entry by M 1 and M 2 , respectively. For a Gaussian covariant bi-observable M ∈ C G u,v with parameters (µ M , V M ), the distribution of M in a Gaussian state ρ is normal, , so that its marginal distributions M 1,ρ and M 2,ρ are normal with means u · a ρ + a M and v · b ρ + b M and variances Let us recall that |u| = 1, |v| = 1, u · v = cos α, and that by (16) and (52), we have Error function The relative entropy is the amount of information that is lost when an approximating distribution is used in place of a target one. For this reason, we use it to give an informational quantification of the error made in approximating the distributions of sharp position and momentum by means of the marginals of a joint covariant observable. Definition 9. Given the preparation ρ ∈ S and the covariant bi-observable M ∈ C u,v , the error function for the scalar case is the sum of the two relative entropies: The relative entropy is invariant under a change of the unit of measurement, so that the error function is scale invariant, too; indeed, it quantifies a relative error, not an absolute one. In the Gaussian case the error function can be explicitly computed. where and s : [0, +∞) → [0, +∞) is the following C ∞ strictly increasing function with s(0) = 0: Proof. The statement follows by a straightforward combination of (32), (34), (53) and (56). Note that the error function does not depend on the mixed covariances u · C ρ v and V M 12 . Note also that, if we select a possible approximation M, then the error function S(ρ, M) decreases for states ρ with increasing sharp variances Var (Q u,ρ ) and Var (P v,ρ ): the loss of information decreases when the sharp distributions make the approximation error negligible. Finally, note that This means that, apart from the term ∆(ρ, M) due to the bias, our error function S(ρ, M) only depends on the two ratios "variance of the approximating distribution over variance of the target distribution". Thus, in order to optimize the error function, one has to optimize these two ratios. We use formula (57) to firstly give a state dependent MUR, and then, following the scheme of [2], a state independent MUR. A lower bound for the error function can be found by minimizing it over all possible approximate joint measurements of Q u and P v . First of all, let us remark that this minimization makes sense because we consider only (u, v)-covariant bi-observables: if we minimized over all possible bi-observables, then the minimum would be trivially zero for every given preparation ρ. Indeed, the trivial bi-observable M(A × When minimizing the error function over all (u, v)-covariant bi-observables, both the minimum and the best measurement attaining it are state dependent. When α = ±π/2, the two target observables are compatible, so that their joint measurement trivially exists (see Example 3) and we get inf M∈Cu,v S(ρ, M) = 0. 
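The scalar minimization carried out in the theorem below can also be reproduced numerically. The sketch assumes the following reconstruction of the quantities whose explicit formulas are not reproduced in the extracted text: for an unbiased Gaussian covariant bi-observable with noise variances V11, V22 obeying V11 V22 ≥ (ħ|cos α|/2)², each relative entropy equals (1/2) s(noise variance / target variance) log₂e with s(x) = ln(1+x) − x/(1+x), and z_ρ = (ħ|cos α|/2)/√(Var(Q_u,ρ) Var(P_v,ρ)). All numerical values are illustrative.

```python
import numpy as np

LOG2E = np.log2(np.e)
HBAR = 1.0                                   # units with hbar = 1

def s(x):
    """s(x) = ln(1+x) - x/(1+x): smooth, strictly increasing, s(0) = 0."""
    return np.log1p(x) - x / (1.0 + x)

def error_function_bits(v11, v22, var_q, var_p):
    """Unbiased Gaussian error function: sum of the two relative entropies, in bits.
    Each term is (1/2) s(noise variance / target variance) log2(e)."""
    return 0.5 * (s(v11 / var_q) + s(v22 / var_p)) * LOG2E

def c_rho(alpha, var_q, var_p):
    """State dependent bound c_rho(alpha) = s(z_rho) log2(e)."""
    z = HBAR * abs(np.cos(alpha)) / (2.0 * np.sqrt(var_q * var_p))
    return s(z) * LOG2E

alpha, var_q, var_p = 0.3, 0.8, 0.9              # illustrative preparation and angle
kappa = (HBAR * abs(np.cos(alpha)) / 2.0) ** 2   # noise constraint V11 * V22 >= kappa

# s is increasing, so the infimum is attained on the boundary V11 * V22 = kappa;
# scan V11 along the boundary and compare with the closed-form bound.
v11 = np.geomspace(1e-3, 1e3, 200001)
vals = error_function_bits(v11, kappa / v11, var_q, var_p)
z = HBAR * abs(np.cos(alpha)) / (2.0 * np.sqrt(var_q * var_p))

print("numerical minimum of the error function:", vals.min())
print("closed-form bound c_rho(alpha)         :", c_rho(alpha, var_q, var_p))
print("optimal noise variances (numerical)    :", v11[vals.argmin()], kappa / v11[vals.argmin()])
print("expected optimum z*Var(Q), z*Var(P)    :", z * var_q, z * var_p)
```

Under these assumptions the numerical minimum coincides with s(z_ρ) log₂e and is attained at noise variances proportional to the target variances, in agreement with the uniqueness statement of the theorem.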
In order to have explicit results for any angle α, we consider only the Gaussian case. Theorem 15 (State dependent MUR, scalar observables). For every ρ ∈ G and M ∈ C G u,v , where the lower bound is with The lower bound is tight and the optimal measurement is unique: c ρ (α) = S(ρ, M * ), for a unique M * ∈ C G u,v ; such a Gaussian (u, v)-covariant bi-observable is characterized by Proof. As already discussed, the case cos α = 0 is trivial. If cos α = 0, we have to minimize the error function (57) over M. First of all we can eliminate the positive term ∆(ρ, M) by taking an unbiased measurement. Then, since s is an increasing function, by the second condition in (55) we can also take This implies V M * 12 = 0 by (52). In this case the error function (57) reduces to Var (Q u,ρ ) , with z ρ given by (61); by the first of (55), we have z ρ ∈ (0, 1]. Now, we can minimize the error function with respect to x by studying its first derivative: Having x > 0, we immediately get that x = z ρ gives the unique minimum. Thus S(ρ, M) ≥ S(ρ, M * ) = s(z ρ ) log e = log(1 + z ρ ) − z ρ 1 + z ρ log e, and which conclude the proof. Remark 3. The minimum information loss c ρ (α) depends on both the preparation ρ and the angle α. When α = ±π/2, that is when the target observables are not compatible, c ρ (α) is strictly grater than zero. This is a peculiar quantum effect: given ρ, u and v, there is no Gaussian approximate joint measurement of Q u and P v that can approximate them arbitrarily well. On the other side, in the limit α → ±π/2, the lower bound c ρ (α) goes to zero; so, the case of commuting target observables is approached with continuity. Remark 4. The lower bound c ρ (α) goes to zero also in the classical limit → 0. This holds for every angle α and every Gaussian state ρ. Remark 5. Another case in which c ρ (α) → 0 is the limit of large uncertainty states, that is, if we let the product Var (Q u,ρ ) Var (P v,ρ ) → ∞: our entropic MUR disappears because, roughly speaking, the variance of (at least) one of the two target observables goes to infinity, its relative entropy vanishes by itself, and an optimal covariant bi-observable M * has to take care of (at most) only the other target observable. Remark 6. Actually, something similar to the previous remark happens also at the macroscopic limit, and does not require the measuring instrument to be an optimal one; indeed, unbiasedness is enough in this case. This happens because the error function S(ρ, M) quantifies a relative error; even if the measurement approximation M is fixed, such an error can be reduced by suitably changing the preparation ρ. Indeed, if we consider the position and momentum of a macroscopic particle, for instance the center of mass of many particles, it is natural that its state has much larger position and momentum uncertainties than the intrinsic uncertainties of the measuring instrument; that is, Var(Qu,ρ) ≪ 1 and Var(Pv,ρ) ≪ 1, implying that the error function (57) is negligible. In practice, this is a classical case: the preparation has large position and momentum uncertainties and the measuring instrument is relatively good. In this situation we do not see the difference between the joint measurement of position and momentum and their separate sharp observations. Remark 7. The optimal approximating joint measurement M * ∈ C G u,v is unique; by (62) it depends on the preparation ρ one is considering, as well as on the directions u and v. A realization of M * is the measuring procedure of Example 2. Remark 9. 
For cos α = 0, we get inf M∈C G u,v S(ρ, M) = s(z ρ ) log e, where z ρ is defined by (61). As z ρ ranges in the interval (0, 1], the quantity inf M∈C G u,v S(ρ, M) takes all the values in the interval 0, 1 − log e 2 , so that sup In order to get this result, we needed cos α = 0; however, the final result does not depend on α. Therefore, in the sup ρ inf M -approach of (63), the continuity from quantum to classical is lost. Now we want to find an entropic quantification of the error made in observing M ∈ C u,v as an approximation of Q u and P v in an arbitrary state ρ. The procedure of [2], already suggested in [25, Sect. VI.C] for a different error function, is to consider the worst case by maximizing the error function over all the states. However, in the continuous framework this is not possible for the error function (56); indeed, from (57) we get sup ρ∈G S(ρ, M) = +∞ even if we restrict to unbiased covariant bi-observables. Anyway, the reason for S(ρ, M) to diverge is classical: it depends only on the continuous nature of Q u and P v , without any relation to their (quantum) incompatibility. Indeed, as we noted in Section 3.1, if an instrument measuring a random variable X ∼ N (a; α 2 ) adds an independent noise ν ∼ N (b; β 2 ), thus producing an output X + ν ∼ N (a + b; α 2 + β 2 ), then the relative entropy S(X X + ν) diverges for α 2 → 0; this is what happens if we fix the noise and we allow for arbitrarily peaked preparations. Thus, the sum S(Q u,ρ M 1,ρ ) + S(P v,ρ M 2,ρ ) diverges if, fixed M, we let Var(Q u,ρ ) or Var(P v,ρ ) go to 0. The difference between the classical and quantum frameworks emerges if we bound from below the variances of the sharp position and momentum observables. Indeed, in the classical framework we have inf b,β 2 sup α 2 ≥ǫ S(X X + ν) = 0 for every ǫ > 0; the same holds for the sum of two relative entropies if no relation exists between the two noises. On the contrary, in the quantum framework the entropic MURs appear due to the relation between the position and momentum errors occurring in any approximate joint measurement. In order to avoid that S(ρ, M) → +∞ due to merely classical effects, we thus introduce the following subset of the Gaussian states: and we evaluate the error made in approximating Q u and P v with the marginals of a (u, v)-covariant bi-observable by maximizing the error function over all these states. For Gaussian M, depending on the choice of the thresholds ǫ 1 and ǫ 2 , the divergence D G ǫ (Q u , P v M) can be easily computed or at least bounded. Proof. By Proposition 4, maximizing the error function over the states in G u,v ǫ is the same as maximizing (57) with (54) over the parameters Var (Q u,ρ ) and Var (P v,ρ ) satisfying (55) and (64). (cos α) 2 , the thresholds themselves satisfy Heisenberg uncertainty relation, and so equality (66) follows from the expression (57) and the fact the functions s(x), s(y), ∆(ρ, M) are decreasing in Var (Q u,ρ ) and Var (P v,ρ ). Entropic incompatibility degree of Q u and P v The last step is to optimize the state independent ǫ-entropic divergence (65) over all the approximate joint measurements of Q u and P v . This is done in the next definition. Again, depending on the choice of the thresholds ǫ 1 and ǫ 2 , the entropic incompatibility degree c G inc (Q u , P v ; ǫ) can be easily computed or at least bounded. 
(cos α) 2 , the incompatibility degree c G inc (Q u , P v ; ǫ) is given by The infimum in (68) is attained and the optimal measurement is unique, in the sense that for a unique M ǫ ∈ C G u,v ; such a bi-observable is characterized by The latter bound is where the state ρ ǫ (u, v) is defined in item (ii) of Theorem 16 and M ǫ is the bi-observable in C G u,v such that Proof. (i) In the case ǫ 1 ǫ 2 ≥ 2 4 (cos α) 2 , due to (66), the proof is the same as that of Theorem 15 with the replacements Var (Q u,ρ ) → ǫ 1 and Var (P v,ρ ) → ǫ 2 . Remark 11 (State independent MUR, scalar observables). By means of the above results, we can formulate a state independent entropic MUR for the position Q u and the momentum P v in the following way. Chosen two positive thresholds ǫ 1 and ǫ 2 , there exists a preparation ρ ǫ (u, v) ∈ G u,v ǫ (introduced in Theorem 16) such that, for all Gaussian approximate joint measurements M of Q u and P v , we have The inequality follows by (66) and (69) What is relevant is that, for every approximate joint measurement M, the total information loss S(ρ, M) does exceed the lower bound (75) even if the set of states G u,v ǫ forbids preparations ρ with too peaked target distributions. Indeed, without the thresholds ǫ 1 , ǫ 2 , it would be trivial to exceed the lower bound (75), as we noted in Section 6.1.2. We also remark that, chosen ǫ 1 and ǫ 2 , we found a single state ρ ǫ (u, v) in G u,v ǫ that satisfies (75) for every M, so that ρ ǫ (u, v) is a 'bad' state for all Gaussian approximate joint measurements of position and momentum. When ǫ 1 ǫ 2 ≥ 2 4 (cos α) 2 , the optimal approximate joint measurement M ǫ is unique in the class of Gaussian (u, v)-covariant bi-observables; it depends only on the class of preparations G u,v ǫ : it is the best measurement for the worst choice of the preparation in the class G u,v ǫ . Remark 12. The entropic incompatibility degree c G inc (Q u , P v ; ǫ) is strictly positive for cos α = 0 (incompatible target observables) and it goes to zero in the limits α → ±π/2 (compatible observables), → 0 (classical limit), and ǫ 1 ǫ 2 → ∞ (large uncertainty states). Remark 13. The scale invariance of the relative entropy extends to the error function S(ρ, M), hence to the divergence D G ǫ (Q u , P v M) and the entropic incompatibility degree c G inc (Q u , P v ; ǫ), as well as the entropic MUR (75). Vector observables Now the target observables are Q and P given in (3), with pvm's Q and P; the approximating bi-observables are the covariant phase-space observables C of Definition 5. Each bi-observable M ∈ C is of the form M = M σ for some σ ∈ S, where M σ is given by (43). C G is the subset of the Gaussian bi-observables in C, and M σ ∈ C G if and only if σ is a Gaussian state. We proceed to define the analogues of the scalar quantities introduced in Sections 6.1.1, 6.1.2, 6.1.3. In order to do it, in the next proposition we recall some known results on matrices. Error function Definition 12. Given the preparation ρ ∈ S and the covariant phase-space observable M σ , with σ ∈ S, the error function for the vector case is the sum of the two relative entropies: As in the scalar case, the error function is scale invariant, it quantifies a relative error, and we always have S(ρ, M σ ) > 0 because position and momentum are incompatible. 
Indeed, since the marginals of a bi-observable M σ ∈ C turn out to be convolutions of the respective sharp observables Q and P with some probability densities on R n , Q ρ = M σ 1,ρ and P ρ = M σ 2,ρ for all states ρ; this is an easy consequence, for instance, of [15,Problem 26.1,p. 362]. In the Gaussian case the error function can be explicitly computed. Proposition 19 (Error function for the vector Gaussian case). For ρ, σ ∈ G, the error function has the two equivalent expressions: where the function s is defined in (58), and Proof. First of all, recall that A direct application of (34) yields We can transform this equation by using ln det (½ + E ρ,σ ) = Tr {ln (½ + E ρ,σ )} , This gives In the same way a similar expression is obtained for S(P ρ M σ 2,ρ ) and (77a) is proved. On the other hand, by using and the analogous expressions involving B ρ and R ρ,σ , one gets (77b). State dependent lower bound In principle, a state dependent lower bound for the error function could be found by analogy with Theorem 15, by taking again the infimum over all joint covariant measurements, that is inf σ S(ρ, M σ ). By considering only Gaussian states ρ and measurements M σ , from (18), (77a), (78a) the infimum over σ ∈ G can be reduced to an infimum over the matrices A σ : The above equality follows since the monotonicity of s (Proposition 18) implies that the trace term in (77a) attains its minimum when B σ = 2 4 (A ρ ) −1 . However, it remains an open problem to explicitly compute the infimum over the matrices A σ when the preparation ρ is arbitrary. Nevertheless, the computations can be done at least for a preparation ρ * of minimum uncertainty (Proposition 6). Indeed, by (22) we get Now we can diagonalize E ρ,σ and minimize over its eigenvalues; since s(x) + s(x −1 ) attains its minimum value at x = 1, this procedure gives E ρ,σ = ½. So, by denoting by σ * the state giving the minimum, we For an arbitrary ρ ∈ G, we can use the last formula to deduce an upper bound for inf σ∈G S(ρ, M σ ). Indeed, if ρ * is a minimum uncertainty state with A ρ * = A ρ , then B ρ ≥ 2 4 (A ρ ) −1 = B ρ * by (19), and, using again the state σ * of (79), we find The second inequality in the last formula follows from (77b), (78b) and the monotonicity of s (Prop. 18). Entropic divergence of Q, P from M σ In order to define a state independent measure of the error made in regarding the marginals of M σ as approximations of Q and P, we can proceed along the lines of the scalar case in Section 6.1.2. To this end, we introduce the following vector analogue of the Gaussian states defined in (64): In the vector case, Definition 10 then reads as follows. As in the scalar case, when M σ is Gaussian, depending on the choice of the product ǫ 1 ǫ 2 , we can compute the divergence D G ǫ (Q, P M σ ) or at least bound it from below. Theorem 20. Let the bi-observable M σ ∈ C G be fixed. , the divergence D G ǫ (Q, P M σ ) is given by where ρ ǫ is any Gaussian state with A ρǫ = ǫ 1 ½ and B ρǫ = ǫ 2 ½. Remark 15. For n = 1, the vector lower bound in (92) reduces to the scalar lower bound found in (75) for two parallel directions u and v; for n ≥ 1, the bound linearly grows with n. By (58) the function s is increasing and in a neighborhood of zero it behaves as s(x) ≃ x 2 /2; in the present case δ 1 /ǫ 1 ≪ 1 and δ 2 /ǫ 2 ≪ 1 and, so, we have that the error function is negligible. This is practically a 'classical' case: the preparation has 'large' position and momentum uncertainties and the measuring instrument is 'relatively good'. 
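The eigenvalue argument used above, namely that x ↦ s(x) + s(x⁻¹) attains its minimum at x = 1 so that the optimal E_ρ,σ is the identity, is easy to verify numerically. Taking s(x) = ln(1+x) − x/(1+x) (our reconstruction of Eq. (58)) and thresholds at the minimum uncertainty value ε₁ε₂ = ħ²/4, the resulting vector bound grows linearly with the number of modes n, as stated in Remark 15; the sketch below is only this elementary check.

```python
import numpy as np

LOG2E = np.log2(np.e)

def s(x):
    """s(x) = ln(1+x) - x/(1+x), our reconstruction of Eq. (58)."""
    return np.log1p(x) - x / (1.0 + x)

# x -> s(x) + s(1/x) attains its minimum at x = 1 (optimal E_{rho,sigma} = identity).
x = np.geomspace(1e-4, 1e4, 100001)
g = s(x) + s(1.0 / x)
print("argmin of s(x) + s(1/x):", x[np.argmin(g)])           # ~1
print("minimum value          :", g.min(), "= 2 s(1) =", 2 * s(1.0))

# Per-mode bound in bits and its linear growth with n (thresholds eps1*eps2 = hbar^2/4).
per_mode = s(1.0) * LOG2E                                     # = 1 - log2(e)/2 ~ 0.279 bits
for n in (1, 2, 3):
    print(f"n = {n}: n * s(1) * log2(e) = {n * per_mode:.4f} bits")
```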
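For n = 1 this per-mode value reproduces the scalar bound of (75) with parallel directions u and v, consistently with Remark 15.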
In this situation we do not see the difference between the joint measurement of position and momentum and their separate sharp distributions. Of course the bound (92) continues to hold, but it is also negligible since ǫ 1 ǫ 2 ≫ 2 /4. Remark 18. The scale invariance of the relative entropy extends also in the vector case to the error function S(ρ, M σ ), the divergence D G ǫ (Q, P M σ ) and the entropic incompatibility degree c G inc (Q, P; ǫ), as well as the entropic MUR (92). Indeed, let us consider the dimensionless versions of position and momentum (35) and their associated projection valued measures Q, P introduced in Section 4. Accordingly, we rescale the joint measurement M σ of (43) in the same way, obtaining the POVM M σ (B) = B M σ ( x, p)d xd p, Here, both the vector variables x and p, as well as the components of the Borel set B, are dimensionless. By the scale invariance of the relative entropy, the error function takes the same value as in the dimensioned case: S( Q ρ M σ 1,ρ ) + S( P ρ M σ 2,ρ ) = S(Q ρ M σ 1,ρ ) + S(P ρ M σ 2,ρ ). Conclusions We have extended the relative entropy formulation of MURs given in [2] from the case of discrete incompatible observables to a particular instance of continuous target observables, namely the position and momentum vectors, or two components of them along two possibly non parallel directions. The entropic MURs we found share the nice property of being scale invariant and well-behaved in the classical and macroscopic limits. Moreover, in the scalar case, when the angle spanned by the position and momentum components goes to ±π/2, the entropic bound correctly reflects their increasing compatibility by approaching zero with continuity. Although our results are limited to the case of Gaussian preparation states and covariant Gaussian approximate joint measurements, we conjecture that the bounds we found still hold for arbitrary states and general (not necessarily covariant or Gaussian) bi-observables. Let us see with some more detail how this should work in the case when the target observables are the vectors Q and P . The most general procedure should be to consider the error function S(Q ρ M 1,ρ ) + S(P ρ M 2,ρ ) for an arbitrary POVM M on R n × R n and any state ρ ∈ S. First of all, we need states for which neither the position nor the momentum dispersion are too small; the obvious generalization of the test states (81) is Then, the most general definitions of the entropic divergence and incompatibility degree are: It may happen that Q ρ is not absolutely continuous with respect to M 1,ρ , or P ρ with respect to M 2,ρ ; in this case, the error function and the entropic divergence take the value +∞ by definition. So, we can restrict to bi-observables that are (weakly) absolutely continuous with respect to the Lebesgue measure. However, the true difficulty is that, even with this assumption, here we are not able to estimate (94), hence (95). It could be that the symmetrization techniques used in [25,65] can be extended to the present setting, and one can reduce the evaluation of the entropic incompatibility index to optimizing over all covariant biobservables. Indeed, in the present paper we a priori selected only covariant approximating measurements; we would like to understand if, among all approximating measurements, the relative entropy approach selects covariant bi-observables by itself. However, even if M is covariant, there remains the problem that we do not know how to evaluate (94) if ρ and M are not Gaussian. 
It is reasonable to expect that some continuity and convexity arguments should apply, and the bounds in Theorem 21 could be extended to the general case by taking dense convex combinations. Also the techniques used for the PURs in [12,14] could be of help in order to extend what we did with Gaussian states to arbitrary states. This leads us to conjecture: c inc (Q, P; ǫ) = c G inc (Q, P; ǫ). Conjecture (96) is also supported since the uniqueness of the optimal approximating bi-observable in Theorem 21.(i) is reminiscent of what happens in the discrete case of two Fourier conjugated mutually unbiased bases (MUBs); indeed, in the latter case, the optimal bi-observable is actually unique among all the bi-observables, not only the covariant ones [2,Theor. 5]. Similar considerations obviously apply also to the case of scalar target observables. We leave a more deep investigation of equality (96) to future work. As a final consideration, one could be interested in finding error/disturbance bounds involving sequential measurements of position and momentum, rather than considering all their possible approximate joint measurements. As sequential measurements are a proper subset of the set of all the bi-observables, optimizing only over them should lead to bounds that are greater than c inc . This is the reason for which in [2] an error/disturbance entropic bound, denoted by c ed and dinstinct from c inc , was introduced. However, it was also proved that the equality c inc = c ed holds when one of the target observables is discrete and sharp. Now, in the present paper, only sharp target observables are involved; although the argument of [2] can not be extended to the continuous setting, the optimal approximating joint observables we found in Theorems 17.(i) and 21.(i) actually are sequential measurements. Indeed, the optimal bi-observable in Theorem 17.(i) is one of the POVMs described in Examples 2 and 3 (see (74)); all these bi-observables have a (trivial) sequential implementation in terms of an unsharp measurement of Q u followed by sharp P v . On the other hand, in the vector case, it was shown in [29, Corollary 1] that all covariant phase-space observables can be obtained as a sequential measurement of an unsharp version of the position Q followed by the sharp measurement of the momentum P. Therefore, c inc = c ed also for target position and momentum observables, in both the scalar and vector case.
16,760.8
2017-05-28T00:00:00.000
[ "Physics" ]
Longitudinal vibrations of rods . Elastic structures of buildings and machines, the dynamic behavior mathematical model of which is the problem of longitudinal vibrations of rods, are widespread in modern technology. In this regard, the study of issues related to the longitudinal vibrations of the rods is also an urgent problem currently. In terms of solving such problems, we considered the problem formulation of longitudinal (free and forced) vibration of rods, obtained the spectra of natural frequencies  n and own forms  n (x) vibration, u(x, t) – the function of cross-sections displacement in the longitudinal direction of the rod has Introduction Let us consider all these points in more detail. Let us draw a design scheme (Fig. 1) and consider natural motions. It should be noted that they are described in detail in the following literature sources [1][2][3][4][5]. Fig. 1. Homogeneous rod and selected element dx Let us consider a homogeneous rod (Fig. 1) with a total mass m =   A (material density, Across-sectional area) vibrating in the longitudinal direction [6,7]. In this case, we admit the use of the flat sections and the d'Alembert principle hypothesis. Longitudinal sections' displacements are characterized by the function u(x, t). The relative deformation is  = x u   . Then, according to Hooke's law, it is possible to be written as: Let us consider the equilibrium of the selected element dx. The inertia force of (d'Alembert force) is equal to dx t u m dI 2 2    All forces applied to the selected element to the main axis of the rod are projected in the following way: and taking into account (1), we have: After some transformations, we finally obtain the equation of transverse vibration of a homogeneous rod of constant cross section, which is a homogeneous differential equation in partial derivatives of hyperbolic type. Note that the equation (2) is identical to the classical equation of a single-span homogeneous string natural motion. Further, the boundary conditions are added to the problem (2), which can be very diverse depending on the conditions for fixing the ends of the rod [8][9][10]. Let us consider a particular case of rigid attachment at both ends of the rod. In this case, the boundary conditions will have the form: The solution to the problem (2), (3) is found using the method of variables separation as the product Substitution by the method of variables separation in (2) gives: And substitution in boundary conditions (3) Next, we obtain the spectrum of natural frequencies of vibrations n and own forms n(x) l n a 2 For the input parameters a=1, l=1 the formula (4) gives the data for Table 1 of the first 4 natural frequencies The corresponding eigenmodes were found using the MATLAB programming system. The graphs are shown in Fig. 2. Forced vibrations from kinematic disturbances For kinematic perturbations, the following formula is obtained It gives the amplitude of stresses and its properties , In both cases, from cos k l = 0    (x) A σ , since in this case the frequency of disturbances ω coincides with natural frequencies n ω . This means that resonance leads to infinite values of the voltage amplitude. Here are the graphs of the H (x) amplitudes along the rod. The resonance frequency graphs are not shown, since they strive for . Stresses in the cross-sections of the rod at natural motions (n). In the process of vibrations, not only the deflections of the rod u(x, t), but also normal stresses σ(x, t) and longitudinal force N(x, t) change. 
Normal stresses according to Hooke's law are σ(x, t) = E ∂u/∂x, and the corresponding longitudinal force is N(x, t) = σ(x, t) A. These formulas determine the stresses and the longitudinal force during natural motions. Based on the calculation results, it is possible to build the stress diagrams, which give an opportunity to carry out strength calculations. Table 4. Values of natural motion spectra. Conclusion The spectra of natural frequencies ωn and eigenmodes φn(x), as well as the function of cross-section displacement in the longitudinal direction of the rod u(x, t), have been obtained.
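As a check of the numbers quoted in Table 1, the clamped-clamped spectrum ωn = nπa/l and the mode shapes φn(x) = sin(nπx/l) can be reproduced with a short script. The original computation used MATLAB; the Python transcription below is an illustrative sketch for a = 1, l = 1, where the first four frequencies are multiples of π.

```python
import numpy as np

a, l = 1.0, 1.0                      # wave speed a = sqrt(E/rho) and rod length

def natural_frequency(n):
    """omega_n = n*pi*a/l for a rod fixed at both ends."""
    return n * np.pi * a / l

def mode_shape(n, x):
    """phi_n(x) = sin(n*pi*x/l), vanishing at x = 0 and x = l."""
    return np.sin(n * np.pi * x / l)

x = np.linspace(0.0, l, 201)
for n in range(1, 5):
    omega = natural_frequency(n)
    phi = mode_shape(n, x)
    print(f"n = {n}: omega_{n} = {omega:.4f} rad/s, "
          f"phi_{n}(0) = {phi[0]:.1e}, phi_{n}(l) = {phi[-1]:.1e}")
# omega_1..omega_4 = 3.1416, 6.2832, 9.4248, 12.5664 for a = 1, l = 1
```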
953.4
2021-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Effect of Protoneutron Star Magnetized Envelops in Neutrino Energy Spectra : The neutrino dynamics in hot and dense magnetized matter, which corresponds with protoneutron star envelopes in the core collapse supernova explosions, is considered. The kinetic equation for a neutrino phase space distribution function is obtained, taking into account inelastic scattering by nuclear particles. The transfer component in a momentum space using transport properties is studied. The energy transfer coefficient is shown to change from positive to negative values when the neutrino energy exceeds four times the matter temperature. In the vicinity of a neutrino sphere, such effects are illustrated to lead to the energy strengthening in the neutrino spectra. As this paper demonstrates, such a property is favorable for the possibility of observing supernova neutrino fluxes using Large Volume Neutrino Telescopes. Introduction Type II supernovae (SNe) represent one of the most energetic events, as they can, plausibly, give high energy cosmic rays, sites for synthesis of heavy nuclides (e.g., e-, s-, p-, r-process atomic nuclei), renew other nuclear components, and so on. Moreover, the SN explosion mechanism constitutes an important issue that is still under debate. In particular, the mechanism of energy transfer to all-star matter (initially bound) represents the main SN problem. Neutrino flux and/or magnetic pressures plausibly make a key contribution to explosive shock wave formation, in addition to core bounce and thermal pressure, respectively. Being in the vicinity of neutrino sphere strong convection [1], and/or magnetorotational instability [2], can trigger a turbulent dynamo leading to a magnetic field amplification of up to tens of teratesla (TT). Prompt SN explosions can be associated with magnetic pressure contributing significant energy into the stellar matter, and bringing the predominant mechanism of shock wave formation, respectively. Another possible explosion mechanism is related to neutrino heating and reviving the stopped shock wave using the neutrinos and antineutrinos that are emitted by a cooling protoneutron star [3][4][5]. Neutrino absorption of nucleons gives rise to the increasing temperature and pressure of the matter behind the shock that starts to expand, pushing the shock forward. The efficiency of such a delayed CCSN mechanism depends on neutrino luminosity and the hardness of their spectra [4,5]. Since neutrinos or magnetic pressure are capable of making a significant contribution to the mechanism of supernova explosions, an analysis of neutrino transport in supernova matter, taking into account magnetic effects, represents an important issue. In addition, the possible magnetic influence on neutrino spectra are crucial for the interpretation of the r-and neutrino processes that can be also affected by magnetic fields [6][7][8][9]. As has Particles 2022, 5 129 been recently shown [10,11], the energy exchange in neutrino-nuclear scattering can be noticeably enhanced due to the magnetic field. In the next section we consider neutrino kinetics, paying particular attention to the transfer in momentum space, on the basis of an energy transfer cross section [11]. As is illustrated in Section 3, such an approach gives a clear picture of the influence of inelastic scattering on neutrino spectra. Possibilities to observe the possible effects of this using Large Volume Neutrino Telescopes are analyzed in Section 4. Conclusions are given in Section 5. 
Neutrino Kinetics To describe neutrino kinetics, we use quite a general kinetic equation for the phasespace distribution function f(r, p, l) where a distance passed by neutrino l = c t with speed of light c and time t, ∂r and ∂p representing the partial derivatives with respect to the spatial, r, and momentum coordinates, p = z E/c, with the unit vector z defining the direction of neutrino momentum at energy E. Here, we take into account that since supernova neutrinos possess typical energies in the MeV range, much larger than the experimental rest-mass limit for active flavors <1 eV, they essentially propagate with the speed of light c. The momentum derivative in Equation (1), ∂p/∂l accounts for an energy and momentum exchange during the neutrino collisions with environment particles. On the right hand side of Equation (1), Λ stands for all the rates of neutrino production, absorption, and annihilation, whereas the term St[f], accounts for fluctuations in scattering processes. Neutrino Dynamics in Magnetized Neutrino Sphere In this work, we concentrate on neutrino kinetics in a region of neutrino decoupling from matter for dynamo active SNe. In such neutrino sphere regions, the neutrino dynamics change from diffusive to free streaming. Spectra of neutrinos emerging from a protoneutron star can be parameterized by the following equation: Here, Ω denotes the solid angle of vector z, E av is an average energy, and α is a numerical parameter describing the amount of spectral pinching; the value α = 2 corresponds to a Maxwell-Boltzmann spectrum, and α = 2.3 to a Fermi-Dirac distribution with zero chemical potential. Beyond the protoneutron star surface in a neutrino sphere region, it is impossible to maintain both the chemical equilibrium between neutrinos and stellar matter and diffusion; however, a noticeable energy exchange between neutrinos and strongly magnetized stellar material can affect neutrino spectra. Since heavy-leptonic neutrinos only interact with star matter through a neutral current, they are energetically less coupled to stellar plasma than the electron flavor when neutral and charged currents are involved; therefore, the heavy-leptonic neutrinos break out of thermal equilibrium in the energy sphere, which is significantly deeper inside the nascent protoneutron star than the transport sphere (near to neutrino sphere), where the transition from diffusion to free flow breaks in. Within the scattering atmosphere, respective heavy-leptonic neutrinos still frequently collide with neutrons and protons. As is demonstrated in Sections 3.2 and 3.3, in this case, magnetic effects noticeably enhance the energy exchange in neutrino-nucleon scattering due to the neutral current. Neutrino Sphere Properties Matter in the neutrino sphere region corresponds to a moderate density n~0.1-10 Tg cm −3 (1 Tg = 10 12 g) and temperature T~5-10 MeV. We assume strong fluctuations of temperature T and density n in this region since it meets a strong convection of matter and corresponds with the vicinity of the bifurcation point for stellar material between a collapse of the central compact object and supernova ejecta. Figure 1a shows the Fermi energy of nucleons E N F and electrons E e F versus a beta equilibrium parameter Y e at density n = 1 Tg/cm 3 . 
One sees that at the realistic numbers of the beta equilibrium parameter Ye ~ 0.2-0.3, these values for nucleons (protons: E_F^p ~ 0.6 MeV, neutrons: E_F^n ~ 1.1 MeV) and electrons (E_F^e ~ 35 MeV) are small and large when compared with the temperature, respectively. Therefore, the nucleon components, with E_F^N ≪ T, represent a non-degenerate gas, whereas the electron gas, with E_F^e ≫ T, is strongly degenerate. As a consequence, the neutrino-electron scattering cross section is strongly suppressed because of the Pauli principle. Such a blocking effect also leads to the actual termination of the charged current component in neutrino-nucleon scattering. Magnetization gives rise to an effective increase in the Fermi energy and a further diminution of the respective scattering. Moreover, the corresponding mean free path (mfp) rises up to tens of km at the considered densities; therefore, we hereafter neglect the right hand side of Equation (1). On the contrary, neutrino-nucleon scattering due to the neutral current component can be considered an independent process with corresponding mfp l_f = (N_N σ_GT0)^-1 ~ 100 m. Here, N_i = n_i/m_i represents the number density of the i-th nuclear particle (N denotes nucleon), with mass m_i and contribution n_i to the total mass density n, and σ_GT0 denotes the respective cross section, σ_GT0 ≈ 10^-40 cm^2 (E/37 MeV)^2; see [4,5].
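The order-of-magnitude estimate of the neutral-current mean free path quoted above can be reproduced directly from the numbers given in the text, σ_GT0 ≈ 10^-40 cm² (E/37 MeV)² and N_N = n/m_N; the sketch below is only this back-of-the-envelope arithmetic, with E and n values chosen for illustration.

```python
import numpy as np

M_NUCLEON_G = 1.6726e-24        # nucleon mass in grams

def sigma_gt0_cm2(E_MeV):
    """Neutral-current GT0 cross section, sigma ~ 1e-40 cm^2 (E / 37 MeV)^2."""
    return 1.0e-40 * (E_MeV / 37.0) ** 2

def mean_free_path_m(E_MeV, rho_Tg_cm3):
    """l_f = (N_N * sigma_GT0)^-1 with N_N = n / m_N; result in meters."""
    n_g_cm3 = rho_Tg_cm3 * 1.0e12            # 1 Tg/cm^3 = 1e12 g/cm^3
    N_N = n_g_cm3 / M_NUCLEON_G              # nucleons per cm^3
    l_cm = 1.0 / (N_N * sigma_gt0_cm2(E_MeV))
    return l_cm / 100.0

for E in (10.0, 20.0, 37.0):
    for rho in (0.1, 1.0, 10.0):
        print(f"E = {E:5.1f} MeV, n = {rho:5.1f} Tg/cm^3 -> "
              f"l_f ~ {mean_free_path_m(E, rho):10.1f} m")
# For n ~ 1 Tg/cm^3 and neutrino energies of a few tens of MeV, l_f comes out
# at the 10^2-10^3 m scale, consistent with the ~100 m figure quoted above.
```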
The energy transfer cross section in a magnetized neutrino sphere for ν + N → ν′ + N′ scattering has been considered by Kondratyev et al. [11]. We briefly recall that this value is defined as S1_i = −∫ dε ε (dσ^i_{ν→ν′}/dε), with the energy transfer ε differential cross section (dσ^i_{ν→ν′}/dε). The nucleon energy levels with spin magnetic moments directed along (spin up) and opposite (spin down) the magnetic field are split by the value ∆ = |g_α| µ_N H ≡ |g_α| ω_L because of the interaction with the field H. Here, µ_N denotes the nuclear magneton, ω_L = µ_N H gives the Larmour frequency, and g_α is the nucleon g-factor. At the temperature T, for neutral GT0 neutrino-nucleon scattering the energy transfer cross section reads [11] S1 ≈ σ_GT0 ∆ (2δ_E − (1 + δ_E^2) th(δ_T/2)), for ∆ < E, T, where δ_E = ∆/E, δ_T = ∆/(2T) and th(x) is the hyperbolic tangent. For a magnetized nucleon gas, the average transferred energy in inelastic neutrino scattering is evaluated as <δE> = −<ε> ≈ S1/σ_GT0. This value depends on the temperature T, the splitting ∆ in the nucleon gas, and the energy of the incoming neutrino E. Figure 1b shows the average transferred energy <δE> for a magnetized nucleon gas in units of ∆ as a function of the neutrino energy E at various temperatures T. As is seen in Figure 1b, the average transferred energy varies from a positive value for a hot nucleon gas to a negative value for a cold system. This change corresponds to a transition from exo-energetic to endo-energetic neutrino scattering, which occurs under conditions corresponding to the temperature T ≈ E/4; see [11]. The physical reason for such a transition is the decreasing thermal population of the upper split energy level of the nucleons, which results in a suppression of the contribution of GT0 transitions from this level to the underlying level. The condition of such a transition from one mode to another is well described by the relation E ≈ 4T, which is independent of the splitting value ∆. Energy Transfer in Neutrino Spectra Making use of Equation (4), we determine an energy transfer coefficient, Equation (5), where the sum goes over nuclear particles and the energy transfer length reads l_t = E^2/((∑_i N_i σ^i_GT0) ∆^2) ≈ 100 m (3 MeV/∆)^2 (10 Tg cm^-3/n). Neglecting the right hand side of Equation (1), for the uniform flux z ∂f/∂r = 0 the solution of Equation (1) is given by replacing E with the solution of Equation (5); see Appendix A, i.e., with e_l = exp{l/l_t}. Figure 2 shows the energy transfer effect in neutrino energy spectra during evolution in the vicinity of the neutrinospheric region. The Maxwell-Boltzmann distribution, corresponding to α = 2 and E_av = 10 MeV in Equation (2), is taken as the initial one. One sees that the energy transfer effect leads to an increase in the neutrino energy at the maximum of the distribution. When the neutrino path l approaches the mean energy transfer length l_t, we obtain a spreading of the distribution W(E), with the maximum point increasing in a near-linear manner with growing e_l. Such an acceleration is particularly effective at larger gas temperatures.
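The change of the average energy transfer from positive (neutrino heating) to negative values can be visualised by evaluating the expression for S1/σ_GT0 as it is transcribed above. The sketch below tabulates <δE>/∆ = 2δ_E − (1 + δ_E²) tanh(δ_T/2) with δ_E = ∆/E and δ_T = ∆/(2T); it only illustrates the qualitative sign change between a hot and a cold nucleon gas, and makes no claim about the precise crossover point. The splitting and temperature values are placeholders.

```python
import numpy as np

def mean_energy_transfer_over_delta(E, T, Delta):
    """<deltaE>/Delta ~ 2*d_E - (1 + d_E^2) * tanh(d_T / 2), as transcribed above."""
    d_E = Delta / E
    d_T = Delta / (2.0 * T)
    return 2.0 * d_E - (1.0 + d_E ** 2) * np.tanh(d_T / 2.0)

Delta = 3.0                               # level splitting in MeV (illustrative)
for T in (2.0, 5.0, 10.0):                # matter temperature in MeV
    row = [mean_energy_transfer_over_delta(E, T, Delta) for E in (5.0, 10.0, 20.0, 40.0)]
    print(f"T = {T:4.1f} MeV:", ["%+.3f" % v for v in row])
# Positive entries (the neutrino gains energy on average) appear for a hot gas or soft
# neutrinos; negative entries (the neutrino loses energy) for a cold gas or hard neutrinos.
```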
Fluctuation Effects in Energy Spectra The strong convection in the vicinity of the neutrino sphere and bifurcation point gives rise to large fluctuations in the properties of respective stellar materials. We average the results of the energy spectra modification over the fluctuations. For temperature T, we assume a uniform distribution in a range from 5 to 10 MeV, which is independent from density fluctuations. As is seen in Figure 3a, the maximum of the distribution W(E) is shifted towards larger energies, approaching the region of 10-20 MeV. The properties of such averaged energy distribution resemble the results for 10 MeV, which supports an effective acceleration mechanism at higher temperatures. The case of a single effective collision in Figure 2 clarifies the obtained results. In this case, the relationship between corresponding exo- and endo-energetic regimes is determined by a ratio of the occupation of respective nucleon levels and the neutrino phase space volume in the exit channel (i.e., Exp{δ_T} (1 − δ_E)^2 θ(1 − δ_E)/(1 + δ_E)^2, with step function θ(x)). When this ratio is less than 1, the number of endo-energetic collisions is larger than the number of exo-energetic collisions, and vice versa; therefore, for neutrino dynamics in a magnetized nucleon gas, the change between the acceleration and the stopping regimes corresponds to the condition E ≈ 4T. Such a switch in neutrino dynamics is also displayed in Equations (3) and (4). For large energies E > 4T, such effects proceed faster, compared with small energies E < 4T, because of the strong energy dependence of the energy transfer coefficient. Strengthening of Spectrum Hardness and Large Neutrino Detector Sensitivity The spectrum hardness can be characterized quantitatively using the average energy of particles, Equation (7). In Equation (7), we used the Saddle Point method, justified at e_l ~ 1. As can be seen in Figure 3b, the value <E> increases almost linearly with the increasing path length l. One sees an effective energy absorption by neutrinos from the magnetized stellar material.
Strengthening of Spectrum Hardness and Large Neutrino Detector Sensitivity
The spectrum hardness can be characterized quantitatively using the average energy of particles ⟨E⟩ (Equation (7)). In Equation (7), we used the saddle point method, which is justified at e_l ≈ 1. As can be seen in Figure 3b, the value ⟨E⟩ increases almost linearly with the increasing path length l. One sees an effective energy absorption by neutrinos from the magnetized stellar material. Such a feature can result in double (or multiple) peaked SN light curves. The growing hardness of the neutrino energy spectra makes the considered effect favorable for detector sensitivity.

Strongly variable transient particle fluxes can be detected using large-volume neutrino telescopes: KM3NeT [12], Baikal-GVD [13], and so on. Sensitivity to neutrinos on a scale of 10 MeV can be achieved by observing a collective increase in the counting rate, using multiple detectors to measure the rate of counting coincidences. A sharp increase in the spatially uniform neutrino flux Φ(t) is associated with a CCSN infall phase that occurs within about half a second [14], which defines the observation time. Assuming a spherically uniform neutrino emission, one obtains Φ(t) = L(t)/(4πd²), with the neutrino luminosity L and the distance to the source d. The CCSN neutrino detection rate, r_SN(t), can be evaluated by summing over the target species, where the sum index i ∈ {p, e⁻, ¹⁶O} represents the most important target components producing energetic charged particles (i.e., e⁺/⁻), n_i is the number of targets, σ_i(ε) is the total interaction cross section for the given target i, and W(ε) gives the energy spectrum from Equation (3). The detector efficiency ϕ corresponds to the ratio between the number of detected events and the number of interacting neutrinos per unit of water [15]. At a time t of a time interval δt, the probability that a detector is triggered is p = r(t) δt/Π, where Π is the number of detectors and the total detection rate r = r_SN + r_B includes the background event rate r_B. Multiple coincidences of k detectors occur with the probability given by the Poissonian law p^k/k! e^{−p}. In this case, the signal/background ratio is given by (1 + r_SN/r_B)^k ≈ (1 + k r_SN/r_B). Evidently, the k-fold coincidence enhances the detection sensitivity for a weak SN neutrino signal by a factor of k. When the condition (k r_SN/r_B) ≳ 1 corresponds to a value of k approaching ten, an excess of hundreds or thousands in the total number of detectors Π is required for d ~ 10 kpc, respectively.

Conclusions
We considered energy transfer for neutrino nuclear scattering in strong magnetic fields, which may plausibly arise in supernovae, and its respective effect in neutrino energy spectra. Nuclear magnetization is shown to bring new, neutral-current induced reaction channels, giving additional noticeable mechanisms to the dynamics of neutrinos being weakly coupled with matter. The energy transfer coefficient in kinetic equations is demonstrated to change from positive to negative values with increasing neutrino energy. For a magnetized nondegenerate nucleon gas, such a crossover between the acceleration and stopping regimes occurs when the neutrino energy is about four times the gas temperature, while the nucleon Larmor frequency is sufficiently small.
This switching in dynamical properties originates from the detailed balance principle and from the difference in phase space volume for neutrinos in the final channel when scattering off spin-up and spin-down nucleons, and it is independent of the splitting value Δ. Consequently, such a property is insensitive to the magnetization geometry. The respective acceleration and/or stopping rates are determined by the product of the splitting Δ and the scattering cross section σ_GT0 in the nucleon gas. For realistic properties of stellar material (see Section 3.1), such neutrino–nuclear scattering effects result in an increase in the hardness of the neutrino energy spectra. Since electronic neutrinos decouple from matter at the neutrino-sphere and thereafter experience only a few (a single one on average) effective collisions, the corresponding acceleration effect is relatively small. Beyond the energy sphere, the dynamics of heavy-leptonic neutrinos is mainly governed by collisions with nucleons. Within the scattering atmosphere (up to the neutrino-sphere), these collisions are frequent enough to maintain spatial diffusion for heavy-leptonic neutrinos. The significant path l completed within a magnetized region of a star leads to a considerable acceleration effect in the case of the heavy-leptonic component. The strengthening of hard neutrino energies is favorable for supernova neutrino observations that use Large Volume Neutrino Telescopes. In this case, the CCSN neutrino flux is revealed from an increase in the counting rate of multiple detectors that are coincidentally triggered. As illustrated, the k-fold coincidence enhances the detection sensitivity for a weak SN neutrino signal by a factor of k. Finally, we notice that such strong magnetization also arises in neutron star mergers, magnetar crusts, and heavy-ion collisions.

Conflicts of Interest: The authors declare no conflict of interest.

Appendix A
To find the neutrino energy E_l after passing a distance l in the neutrinospheric region, we rewrite Equation (5) in a form whose solution leads to Equation (6).
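A minimal numerical sketch of the coincidence-counting estimate recalled above: the per-detector trigger probability p = r δt/Π, the Poissonian k-fold probability p^k/k! e^{−p}, and the approximate signal/background enhancement 1 + k r_SN/r_B. The rates, observation window, and detector number used here are illustrative assumptions, not values from the paper.

```python
import math

def kfold_coincidence(r_sn, r_b, n_detectors, dt, k):
    """Poissonian estimate of k-fold coincidences following the text:
    per-detector trigger probability p = r*dt/Pi with r = r_SN + r_B,
    k-fold probability p**k / k! * exp(-p), and the approximate
    signal/background enhancement (1 + k*r_SN/r_B)."""
    r = r_sn + r_b
    p = r * dt / n_detectors            # probability that one detector triggers in dt
    p_k = p**k / math.factorial(k) * math.exp(-p)
    enhancement = 1.0 + k * r_sn / r_b  # approximation quoted in the text
    return p, p_k, enhancement

if __name__ == "__main__":
    # Illustrative numbers only; rates are per second, dt in seconds.
    r_sn, r_b = 20.0, 1000.0            # assumed SN and background rates
    n_detectors, dt, k = 1000, 0.5, 10  # detectors, CCSN infall window, coincidence order
    p, p_k, boost = kfold_coincidence(r_sn, r_b, n_detectors, dt, k)
    print(f"p = {p:.3f}, P(k-fold) = {p_k:.3e}, signal/background boost ~ {boost:.1f}")
```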
5,196.8
2022-04-12T00:00:00.000
[ "Physics" ]
Heme oxygenase-1 modulates ferroptosis by fine-tuning levels of intracellular iron and reactive oxygen species of macrophages in response to Bacillus Calmette-Guerin infection Macrophages are the host cells and the frontline defense against Mycobacterium tuberculosis (Mtb) infection, and the form of death of infected macrophages plays a pivotal role in the outcome of Mtb infections. Ferroptosis, a programmed necrotic cell death induced by overwhelming lipid peroxidation, was confirmed as one of the mechanisms of Mtb spread following infection and the pathogenesis of tuberculosis (TB). However, the mechanism underlying the macrophage ferroptosis induced by Mtb infection has not yet been fully understood. In the present study, transcriptome analysis revealed the upregulation of heme oxygenase-1 (HMOX1) and pro-ferroptosis cytokines, but downregulation of glutathione peroxidase 4 (GPX4) and other key anti-lipid peroxidation factors in the peripheral blood of both patients with extra-pulmonary tuberculosis (EPTB) and pulmonary tuberculosis (PTB). This finding was further corroborated in mice and RAW264.7 murine macrophage-like cells infected with Bacillus Calmette-Guerin (BCG). A mechanistic study further demonstrated that heme oxygenase-1 protein (HO-1) regulated the production of reactive oxygen species (ROS) and iron metabolism, and ferroptosis in BCG-infected murine macrophages. The knockdown of Hmox1 by siRNA resulted in a significant increase of intracellular ROS, Fe2+, and iron autophagy-mediated factor Ncoa4, along with the reduction of antioxidant factors Gpx4 and Fsp1 in macrophages infected with BCG. The siRNA-mediated knockdown of Hmox1 also reduced cell survival rate and increased the release of intracellular bacteria in BCG-infected macrophages. By contrast, scavenging ROS by N-acetyl cysteine led to the reduction of intracellular ROS, Fe2+, and Hmox1 concentrations, and subsequently inhibited ferroptosis and the release of intracellular BCG in RAW264.7 cells infected with BCG. These findings suggest that HO-1 is an essential regulator of Mtb-induced ferroptosis, which regulates ROS production and iron accretion to alter macrophage death against Mtb infections. Introduction Tuberculosis (TB) is a chronic disease caused by the infection of Mycobacterium tuberculosis (Mtb), which remains a major public health burden in many developing countries, with approximately 2 billion latent infections and 9.87 million new cases in 2020 (WHO). A recent study confirmed that a co-infection of COVID with Mtb could worsen the COVID-19 infection (Visca et al., 2021). Although considerable efforts have been made in combating Mtb infections and TB disease, the elusive pathogenesis in the development of TB leads to the difficulty and challenges in the prevention and treatment of this ancient disease (Sheedy and Divangahi, 2021). There is not an effective TB vaccine currently available for adults, and clinical treatments for TB are largely dependent on antibiotics such as rifampin. It is therefore a necessity to better understand the pathogenesis of TB disease. Macrophages are the host cells and the frontline defense of Mtb infections (Korb et al., 2016). The outcome of the hostpathogen interaction between macrophages and Mtb is certainly critical for the development of TB (Xu et al., 2014). 
In this context, intracellular Mtb either can be eradicated through macrophage apoptosis (Lee et al., 2009) and autophagy (Alam et al., 2017), or can persistently survive and grow in macrophages, and induce macrophage necrosis and spread infection to other cells by evolving an immune escape mechanism (Liu et al., 2017;Pajuelo et al., 2018;Zhai et al., 2019). Therefore, understanding the molecular mechanism of Mtb-induced macrophage deaths, particularly macrophage necrosis, may enable us to uncover novel targets for host-directed therapy (HDT) of TB by altering macrophage death in response to Mtb infections. In addition to necroptosis and pyroptosis, the two most studied forms of Mtb-induced macrophage necrosis, ferroptosis is another form of programmed cell death (PCD) that is a type of necrosis dependent on iron (Dixon et al., 2012;Amaral et al., 2019). Interestingly, macrophage necrosis induced by Mtb infection shared the typical characteristics of ferroptosis (Shastri et al., 2018;Amaral et al., 2021;Baatjies et al., 2021). In this regard, an external stress such as Mtb infection could elevate intracellular levels of Fe 2+ and reactive oxygen species (ROS) to trigger the Fenton reaction, and result in the production of large amounts of hydroxyl radicals, sequentially impair glutathione peroxidase 4 (GPX4) activity and anti-lipid peroxidation capacity, and ultimately lead to overwhelming lipid peroxidation of intracellular membrane phospholipids and cell disintegration and death (Dixon et al., 2012;Li et al., 2020). Indeed, the ferroptosis inhibitor could effectively reduce Mtbinduced macrophage necrosis and bacterial load in a mouse infection model (Amaral et al., 2019). These findings suggest that intracellular ROS and lipid peroxidation are essential for ferroptosis in response to Mtb infections, implying that oxidation-related ferrous ion imbalance is involved in cell ferroptosis death; however, the molecular mechanism underpinning the ferrous ion production in Mtb-induced macrophage ferroptosis has not been completely elucidated. Heme oxygenase 1 protein (HO-1) is encoded by HMOX1 gene, which is an important stress-responsive enzyme highly expressed in lungs, which catalyzes to degrade heme to Fe 2+ , carbon monoxide (CO), biliverdin, and bilirubin, and is essential in the balance of intracellular Fe 2+ and ROS (Araujo et al., 2012). Moreover, HO-1 is also considered as one of the biomarkers for the diagnosis of TB in clinical settings (Rockwood et al., 2017;Yong et al., 2019;Uwimaana et al., 2021;Yang et al., 2022); whether it plays a role in the regulation of Mtb-induced ferroptosis in macrophages, however, has yet been investigated. In the present study, the involvement of HMOX1 in macrophage ferroptosis in response to a mycobacterial infection was interrogated by transcriptome analysis of peripheral blood in TB patients, and its mechanism in ferroptosis was further investigated in mice and macrophage-like RAW264.7 cells by infection of Bacillus Calmette-Guerin (BCG). Our results demonstrate that HO-1 is a negative regulator of murine macrophage ferroptosis in response to BCG infections, in part through a mechanism by which HO-1 inhibits the intracellular ROS production and iron accretion, but induces Gpx4 expression in murine macrophages infected by BCG. 2 Materials and methods 2.1 Mice C57BL/6 mice were purchased from Gempharmatech Co., Ltd (Jiang Su, China). 
Experiments using mice were approved by the Laboratory Animal Welfare Ethics Review Committee of Ningxia University (NXU-2018-011). Sixteen mice were randomly divided into two groups, PBS control and BCG infection. Mice were housed in specific pathogen-free conditions in a 12-h light/dark cycle with ad libitum access to food and water. All animal studies were conducted at the Laboratory Animal Center of Ningxia University (Yinchuan, China). Bacteria and infection Mycobacterium tuberculosis attenuated strain BCG was purchased from Chengdu Institute of Biological Products Ltd (Chengdu, China). The lyophilized bacteria preparation was dissolved and washed with PBS, and subsequently gently sonicated to disrupt bacterial clumps for single-cell suspension in DMEM or PBS. The bacterial suspension was aliquoted and stored at −20°C, and was used within 2 weeks after the preparation. For bacterial infection in mice, mouse was infected with BCG in 100 µl at a dose of 5 × 10 6 colonyforming units (CFU)/mouse via tail vein injection. The lung tissues were harvested for analysis at 30 days post-infection. 2.4 Cell culture, in vitro BCG infection, and siRNA transfection RAW264.7 murine monocyte/macrophage-like cells were cultured in DMEM containing 10% calf serum at 37°C in 5% CO 2 atmosphere. For bacterial infection, RAW264.7 cells were seeded in a six-well plate at a density of 2 × 10 5 /well and cultured for 16 h. Cells were then pretreated with NAC for 1 h or transfected with siRNA for 12 h prior to be infected with bacteria at a multiplicity of infection (MOI) of 1, 5, or 10. The cells were harvested for analysis at 24 h post-infection. siRNA was transfected with Lipofectamine RNAiMAX per manufacturer's instructions (ThermoFisher Scientific, Waltham, MA, USA). To test the efficiency of siRNA-mediated knockdown of protein of interest, total proteins of transfected cells were extracted at 24 h post-transfection and were analyzed by Western blotting assay. Western blotting Total proteins of cells were lysed with RAPI buffer containing protease inhibitor (#P0033, Beyotime, Shanghai, China). The protein concentration of soluble fraction of cell lysates was determined using the Pierce ™ BCA agents (#23225, ThermoFisher). The protein samples were separated by SDS-PAGE prior to being transferred to PVDF membranes. The PVDF membrane was then blocked with 5% fat-free milk for 1 h at room temperature (RT), before it was incubated with primary antibodies to protein of interest for overnight at 4°C with shaking (45 rpm). The membrane was then washed 3 × 10 min in TBST solution containing 0.2% Tween-20 prior to being incubated in appropriate horseradish peroxidase (HRP)-conjugated secondary antibody for 1 h at RT. The specific binding of protein was developed in Western Lightning ® Plus-ECL (#NEL105001EA, PerkinElmer) and visualized in Amersham Imager 600 (Cytiva, USA). The abundance of protein was semi-quantified with ImageJ 1.52a as described elsewhere (Pillai-Kastoori et al., 2020). Beta-actin or tubulin served as internal loading control; GraphPad was used to calculate "Mean ± SD" and statistical analysis. The antibodies used in this study are listed in Supplementary Table S1. Measurement of apoptosis and necroptosis Cells were treated with the Annexin V/PI Apoptosis Detection kit (KGA108-1, keygen Biotech, Nan Jing, China). Cells were washed with PBS, followed by being centrifuged at 2,000 rpm for 5 min, and the medium supernatant was discarded. 
The resulting cell pellet was washed twice with PBS by centrifugation. A total of 1-5 × 10 5 cells were suspended in 500 µl of binding buffer before 5 ml of Annexin V-FITC and 5 ml of propidium iodide were added consecutively. The cell suspension was incubated for 5 min at room temperature in the dark. The above samples were detected using a flow cytometer. The processed samples were detected by flow cytometry. Measurement of cell death Cell viability was measured by Trypan Blue staining assay. 0.4% Trypan Blue solution was added to a single-cell suspension at a ratio of 1:9 in volume. Live/dead cells were read using Invitrogen Countess 3 (ThermoFisher) and the percentage of live cells was calculated. 2.8 Bacterial colony-forming unit count RAW264.7 macrophages were preincubated in medium containing 2.0 mM NAC for 1 h, or transfected with siRNA-Hmox1 for 24 h, followed by incubating with BCG at an MOI of 5 for 1 h. The cells were then rinsed with medium to remove unattached bacteria and cultured with fresh medium for an additional 24 h. The culture medium was then collected for accessing the release of intracellular bacteria due to cell necrotic death including ferroptosis by spreading series diluted medium in 7H10 agar plates and incubated at 37°C for 21 days. The number of colonies on the plates was counted as CFU numbers. The final bacterial CFU number was calculated by multiplying the dilution factor and the count on plates. Intracellular iron measurement The content of intracellular ferrous iron (Fe 2+ ) was measured using the Iron Assay kit (#ab83366, Abcam). Only ferrous (Fe 2+ ) can bind to the iron probe to form a stable-colored complex that can be accessed by reading an absorption peak at a wavelength of 539 nm. Solutions of standard curve and reaction were prepared in accordance with the manufacturer's instructions. Intracellular ROS detection Intracellular ROS detection was measured with the CellROX ™ Deep Red kit (#C10491, ThermoFisher). CellROX ™ Red is a fluorescent probe for measuring ROS in living cells. CellROX ™ Red emits a stable bright red fluorescence upon an oxidation by ROS. The maximum absorption/emission wavelength is approximately 485/520 nm. Mitochondrial fluorescent staining Live cell mitochondria were stained with CellLight ™ Mitochondria-GFP BacMam 2.0 (#C10508, ThermoFisher). CellLight ™ Mitochondria-GFP was added to the cell cultures, and cells were continuously incubated for an additional 16 h at 37℃. The mitochondrial fluorescence was then observed in a laser confocal microscope (SP5, Leica, Germany). Lipid peroxidation assay An Image-iT ® Lipid Peroxidation Kit (#C10445, ThermoFisher) was used to assay cellular lipid peroxidation. The BODIPY ® 581/591 C11 probe binds to polyunsaturated fatty acids in the cell membrane. Due to the oxidation of fatty acids, the absorption peak of the probe shifts from 590 nm (red) to 510 nm (green); the change in value of the absorption peaks at different wavelengths was calculated using the arithmetic mean. For immunofluorescence assays, live cells were incubated with the BODIPY ® 581/591 C11 reagent for 30 min. Cell membrane lipid peroxidation was then respectively observed under excitation (581/488 nm)/emission (591/510 nm) conditions. The staining was observed and imaged under a laser scanning confocal microscope (SP5, Leica, Germany). 2.13 Immunocytofluorescence staining RAW264.7 cells were seeded on collagen pre-coated sterile cover slides. 
Cells were fixed in 4% paraformaldehyde for 30 min, washed in PBS for 3× 5 min, and penetrated with 0.1% Triton X-100 (PBST) for 15 min at RT. Cells on slides were then blocked with 5% BSA for 1 h at RT, followed by incubating in primary antibody overnight at 4°C. Slides were then washed 3× 5 min in PBST before they were incubated with appropriate fluorescent secondary antibody with light proof for 1 h at RT. Cell nuclei were counterstained with DAPI. Images were acquired under a laser scanning confocal microscope (SP5, Leica, Germany). Immunohistochemistry staining The lung tissues were fixed in 4% paraformaldehyde and embedded in paraffin. Four-micrometer-thick sections were cut with a Leica RM2135 Microtome (Germany). After xylene deparaffination, rehydration, and antigen retrieval, the sections were blocked in 10% donkey serum blocking buffer. Slides were then incubated in primary antibodies overnight at 4°C, followed by being incubated with HRP-conjugated secondary antibody for 1 h at RT. The antigen-antibody binding was detected with DAB substrate solution. The sections were counterstained with hematoxylin for nuclei. The staining was observed and imaged under the microscope (BA400Digital, Motic, USA). Immunohistochemical (IHC) images were analyzed using ImageJ-IHC profiler for semi-quantification as described elsewhere (Crowe and Yue, 2019). GraphPad 8.0.1 was used for statistical analysis. Transmission electron microscopy test Cell pellets were resuspended in 0.5% glutaraldehyde fixative and incubated for 10 min at 4°C. Cells were re-pelleted and treated with 1% osmium tetroxide. The cell samples were then dehydrated with a low to high concentration of acetone prior to be embedded in epoxy resin. Fifty-nanometer-thick sections were prepared using Ultramicrotome EM UC7 (Leica, Germany). Samples were stained with uranyl acetate and lead citrate. Transmission electron microscopy was used to observe cell morphology. GEO data analysis The GSE83456 dataset is sourced from the Gene Expression Omnibus (GEO) database. The Ethic Committee of Human Research at General Hospital of Ningxia Medical University approved the protocol of human study. The dataset includes peripheral blood transcriptome sequencing information for three groups, namely, 30 healthy control subjects (Control), 30 individuals with extra-pulmonary tuberculosis (EPTB), and 30 patients with pulmonary tuberculosis (PTB). The demographics of TB patients whose blood samples were collected and used in this study are listed in Supplementary Table S2. The dataset was normalized by Log2 transformation in R software 3.4.1. Subsequently, the gene IDs were converted to gene names. The significance of the three groups of datasets was analyzed in GraphPad 8.0.1 using one-way ANOVA followed by Tukey's multiple comparisons test. ELISA testing of clinical samples The Human Heme Oxygenase 1 (HO-1) ELISA Kit (#ab207621, Abcam) was used to detect HO-1 protein levels in the peripheral blood of clinical TB patients (n = 59) and healthy individuals (n = 26). The Hemin Assay Kit (#ab65332, Abcam) was employed for determining the concentration of H in the serum of TB patients (n = 57) and healthy individuals (n = 26). The measurement was performed per the manufacturer's instructions. The collection and processing of human peripheral blood samples passed the approval of the Medical Research Ethics Review Committee at General Hospital of Ningxia Medical University General Hospital (NO. 2020-410). 
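For illustration, the sketch below reproduces the kind of processing described for the GSE83456 data, log2 normalization followed by one-way ANOVA and Tukey's multiple comparisons test, in Python rather than the R 3.4.1/GraphPad workflow used in the study; the data-frame layout, column names, and demo values are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def analyze_gene(expr: pd.DataFrame, gene: str):
    """expr: samples x genes table of raw intensities with a 'group' column
    taking the values 'Control', 'PTB', 'EPTB' (hypothetical layout).
    Log2-normalize, then run one-way ANOVA followed by Tukey's HSD."""
    values = np.log2(expr[gene] + 1.0)                 # log2 normalization
    groups = [values[expr["group"] == g] for g in ("Control", "PTB", "EPTB")]
    f_stat, p_val = f_oneway(*groups)                  # one-way ANOVA
    tukey = pairwise_tukeyhsd(values, expr["group"])   # Tukey's multiple comparisons
    return f_stat, p_val, tukey

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({
        "HMOX1": rng.lognormal(mean=5.0, sigma=0.3, size=90),   # toy intensities
        "group": ["Control"] * 30 + ["PTB"] * 30 + ["EPTB"] * 30,
    })
    f_stat, p_val, tukey = analyze_gene(demo, "HMOX1")
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3g}")
    print(tukey.summary())
```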
Statistics
All graphical data are expressed as mean ± SD. Graphical data and statistical analyses were processed with GraphPad Prism 8.0.1. Comparisons between two groups were carried out using an unpaired two-tailed t-test, and comparisons among three groups were statistically analyzed using one-way ANOVA. Charts and legends provide the number of independent experiments and the results of a representative experiment.

Transcriptome analysis of peripheral blood revealed differential expression of ferroptosis-related genes in patients with tuberculosis
To investigate whether ferroptosis is involved in the pathogenesis of TB development in the clinic, peripheral blood of healthy individuals (Control, N = 30), patients with PTB (N = 30), and patients with EPTB (N = 30) was collected for transcriptome analysis and compared with the publicly available GEO database (GSE83456). Collated datasets were processed using log2 normalization, and the ID_REF was converted to gene symbols. The transcriptome analysis demonstrated that the anti-lipid peroxidation factor GPX4, a major scavenger of phospholipid hydroperoxides and the key regulator of ferroptosis, was significantly downregulated (Figure 1A), but the anti-oxidative stress factor HMOX1 was upregulated (Figure 1B) in both PTB and EPTB patients, compared to healthy control cohorts. Moreover, the increased level of HMOX1 transcript was in line with the more abundant HO-1 protein in peripheral blood from TB patients (6.333 ng/ml, N = 57) compared to healthy individuals (2.58 ng/ml, N = 26), as determined by ELISA (Figure 1C). Heme B is a substrate for the enzymatic reaction of HO-1 and forms a chlorine-containing porphyrin, namely Hemin, in blood. Hemin is a potent inducer of HO-1 and plays anti-infective roles in the body to a certain extent (Kim et al., 2021). Hemin content in peripheral blood was significantly higher in TB patients (0.85 ± 0.44 ng/ml, N = 57) than in healthy individuals (0.29 ± 0.26 ng/ml, N = 26) (Figure 1D). This result was in line with the concentration of HO-1 protein in the peripheral blood of TB patients. In order to reveal a correlation between HMOX1, an oxidative stress signature, and ferroptosis regulators, a CorrPlot map was constructed using Spearman rank correlation (Figure 1D). The CorrPlot map showed that HMOX1 expression was negatively associated with the level of circulating GPX4, suggesting that HO-1 might play a regulatory role in ferroptosis in TB pathogenesis. In addition, transcriptome analysis also unraveled the differential expression of other ferroptosis genes in peripheral blood of TB patients (Supplementary Figure S1). Among them, the transcripts of ACSL4, LPCAT3, ALOX5, COQ10A, VDAC2, NOX4, xCT, and FTH1 genes were increased in peripheral blood of both patients with PTB and EPTB (Supplementary Figures S1A, D). (Figure 1 legend: the color in the CorrPlot map represents the correlation coefficient, with darker colors representing stronger correlations; red represents positive and blue negative correlation; cluster analysis used the ward.D2 method in Hiplot; data were processed using GraphPad Prism 8.0.1 and ImageJ 1.52a; an unpaired t-test was used for two-group comparisons and one-way ANOVA with Tukey's multiple comparisons test for multi-group comparisons; data represent mean ± SD; **p < 0.01; ***p < 0.001; ns, no statistical difference.) These
data clearly evidence that ferroptosis is involved in the TB pathogenesis. BCG induced ferroptosis in macrophages To test whether the infection of Mycobacteria could induce ferroptosis in vivo, 8-week ICR mice were infected with the Mtbattenuated strain BCG at a dose of 5 × 10 6 CFU/mouse in 100 µl volume via tail vein injection, and the ferroptosis was assessed by the expression of Gpx4 and HO-1 proteins in lungs at 30 days post-infection ( Figure 2A). IHC staining assay showed a reduced Gpx4 protein, the signature of cell ferroptosis, and an increased HO-1 protein abundance in lungs of BCG-infected mice compared to the control group ( Figures 2B, C), which was in accordance with the transcriptomic findings in peripheral blood of TB patients (Figure 1 and Supplementary Figure S1). Given the Mtb-infected macrophage death in TB pathogenesis, we next investigated whether the BCG infection could induce ferroptosis in RAW264.7 murine macrophage-like cells. Immunoblotting assay demonstrated a dose-dependent induction of Gpx4 and HO-1 in this type of cells, except for an inhibition of Gpx4 expression in cells infected with a low dose of BCG at an MOI of 5 ( Figures 2C-F). Interestingly, the inhibition of Gpx4 and induction of Hmox1 was in a time-dependent manner, when cells were infected with BCG at an MOI of 5 for the 24-h time period ( Figures 2G, H, I). Annexin V/PI flow cytometry revealed a dosedependent reduction of cell necrosis but an increase of apoptosis in RAW264.7 cells at 24 h post-infection of BCG at an MOI range of 5 to 15 ( Figure 2J and Supplementary Figure S2A). As expected, the infection of BCG significantly reduced cell viability ( Figure 2K) and increased the production of intracellular ROS ( Figure 2L; Supplementary Figure S2B) and the intracellular level of Fe 2+ ( Figure 2M). Importantly, the infection of BCG increased the fraction of cells with lipid peroxidation as determined by BODIPY 581/591 C11 assays ( Figures 2N and 3A; Supplementary Figure S2C). The ratio of lipid peroxidation cell (Green)/normal cell (Red) was 17.22% in the BCG-infected cells, while it was 3.52% in uninfected cells ( Figure 2K). Transmission electron microscope (TEM) observation of BCG-infected RAW264.7 macrophages showed increased mitochondrial membrane ridge breaks compared to uninfected cells ( Figure 3B). Mechanistic study by immunocytochemistry (ICC) ( Figure 3C) and Western blotting ( Figures 3D-F) demonstrated a decrease of anti-ferroptosis regulators Gpx4 and Fsp1, along with the decrease of cell mitochondria staining ( Figure 3C), but an increase of proferroptosis marker HO-1 in BCG-infected cells compared with uninfected cells. These observations were further corroborated by a molecular study using Western blotting assay (Figures 3D-F). In addition, the expression of anti-ferroptosis molecules Alox5 (Supplementary Figure S3B), Vdca2 (Supplementary Figure S3C), and Ncoa4 (Supplementary Figure S3D) was also inhibited, while pro-ferroptosis factor xCT (Supplementary Figure S3E) was increased in cells infected with BCG, relative to the uninfected controls. These results indicated that the infection of BCG could induce ferroptosis of RAW264.7 cells, and the increase of intracellular ROS production and the alteration of HO-1 were part of the underlying mechanism of BCG-induced ferroptosis. 3.3 Intracellular ROS contributes to BCG-induced ferroptosis in RAW264.7 macrophages Next, we sought whether the intracellular ROS contributed to ferroptosis in RAW264.7 cells in response to BCG infections. 
In order to investigate the effect of ROS in ferroptosis, H2O2, a major ROS source, and RSL3, one of the most common inhibitors of GPX4 and inducers of ferroptosis, separately served as positive controls. Optimization experiments showed that the highest inhibition of the ferroptosis regulator Gpx4 was found in RAW264.7 cells exposed to H2O2 and RSL3 at concentrations of 0.2 mM and 2 mM, respectively (Supplementary Figures S4A, B). These optimized concentrations were therefore used in the further experiments of this report. Indeed, similar to the treatments of H2O2 and RSL3, the infection of BCG also inhibited cell viability (Figure 4A) and induced intracellular Fe2+ (Figure 4B) and ROS production (Figure 4C; Supplementary Figure S4D) in RAW264.7 macrophages at an MOI of 5. As a consequence, these treatments induced mitochondrial membrane ridge breaks (arrows) in RAW264.7 macrophages as determined by electron microscope observation (Figure 4F). Molecular analysis further demonstrated that the infection of BCG significantly inhibited the expression of Gpx4 and Fsp1 proteins but increased HO-1 expression (Figure 4G). Of note, the treatments of H2O2 or RSL3 failed to increase HO-1 protein, although these treatments inhibited Gpx4 and Fsp1 in RAW264.7 cells as seen in BCG-infected cells (Figure 4G). The infection of BCG was comparable to 0.2 mM H2O2 in terms of intracellular ROS production, cell lipid peroxidation, and inhibition of Gpx4 and Fsp1 expression, although the effect was of a lesser extent compared to that of 2.0 mM RSL3 in RAW264.7 cells. These results suggest that HO-1-mediated ROS production may play a major role in the regulation of ferroptosis in macrophages against Mtb infections.

ROS scavenger NAC reduces BCG-induced macrophage ferroptosis
NAC is a potent antioxidant and ROS scavenger. It is able to reduce the excessive lipid peroxidation induced by BCG infection in macrophages through the scavenging of ROS. To test the effect of NAC on BCG-induced macrophage ferroptosis, RAW264.7 cells were preincubated in medium containing 2.0 mM of NAC for 1 h prior to being infected with BCG. As expected, the treatment of NAC significantly increased the anti-lipid peroxidation factor Gpx4 in BCG-infected cells in a dose-dependent manner (Supplementary Figure S5A). The NAC treatment alone significantly increased the cell viability (Figure 5A) and reduced the intracellular Fe2+ concentration (Figure 5B) and intracellular ROS production (Figure 5F). These data support the notion that intracellular ROS plays a key role in ferroptosis of macrophages in response to Mtb infection (Kuang et al., 2020), and a modulation of the Hmox1 gene may alter ROS production and ferroptosis. (Figure legend: the infection of BCG at an MOI of 5 for 24 h induced lipid peroxidation in RAW264.7 cells as determined by BODIPY 581/591 C11 assays; upon oxidation, the excitation shifts from red/590 nm to green/510 nm; the Green/Red ratio was 17.22% in BCG-infected cells and 3.52% in uninfected cells; data represent mean ± SD of three independent experiments, analyzed by unpaired t-test for two groups and one-way ANOVA with Tukey's multiple comparisons test for multiple groups; **p < 0.01; ***p < 0.001.)
Knockdown Hmox1 increases macrophage ferroptosis in response to BCG infections HO-1 is a key enzyme against oxidative stress by inhibiting ROS production and reducing ROS cytotoxicity (Seiwert et al., 2020); therefore, an inhibition of Hmox1 may impact the ROS ROS scavenger NAC reduces BCG-induced macrophage ferroptosis. RAW264.7 macrophages were preincubated in medium containing 2.0 mM NAC for 1 h prior to being infected with BCG at an MOI of 5 for 24 h, before they were harvested for analysis. (F) Representative blots and semi-quantitative analysis of Gpx4, Fsp1, and HO-1 proteins of RAW264.7 cells treated with the indicated conditions. The NAC pretreatment increased Gpx4 and Fsp1 expression, but decreased Hmox1 protein in BCG-infected cells. Data obtained from three independent experiments were processed using GraphPad Prism 8.0.1 software and ImageJ 1.52.a. All values are presented as mean ± SD (**p < 0.01, and ***p < 0.001; n = 3). production and ferroptosis in macrophages in response to Mtb infections. To test this hypothesis, the expression of Hmox1 gene in RAW264.7 cells was knockdown by transfection of siRNA to Hmox1 ( Figure 6A). The three siRNA candidates to Hmxo1 exhibited an ability to knock down the gene expression at the protein level. siHmxo1-193 and siHmxo1-172 showed more efficiency in the inhibition of HO-1 expression compared to siHmxo1-27, but the inhibition mediated by siHmxo1-193 and siHmxo1-172 showed no difference ( Figure 6A). The mixture (siHmxo1) of equal molar ratio of siHmxo1-193 and siHmxo1-172 was therefore employed for further experiments in this report. In addition to the reduced expression of HO-1 protein, the siRNA-mediated knockdown of Hmox1 amplified the inhibition of the expression of Gpx4 and Fsp1, and the induction of Ncoa4 expression in cells infected with BCG ( Figures 6B-F). Functionally, the siRNA-mediated reduction of Hmox1 significantly amplified the BCG-inhibited cell viability ( Figure 6G), and BCG-induced intracellular Fe 2+ concentration ( Figure 6H), intracellular ROS production ( Figure Figure S6B) to some extent in cells uninfected with BCG. In addition, morphological observation revealed worsening mitochondrial membrane ridge breaks in siRNA-transfected RAW264.7, compared to untransfected macrophages infected with BCG ( Figure 6L). More importantly, the pretreatment of NAC significantly inhibited the release of BCG into the culture medium from lytic death of cells caused by necroptosis/ferroptosis during the infection ( Figure 7A), while Hmox1 knockdown induced the BCG release ( Figure 7B), compared to BCG-infected untreated cells as determined by the CFU assay ( Figures 7A, B). These results clearly suggest the importance of HO-1-modulated ROS in ferroptosis of macrophages against Mtb infections. Discussion Macrophages are the host cells and the frontline defense against Mtb infection, and the form of death of infected macrophages plays a pivotal role in the outcome of Mtb infections. Therefore, a better understanding of the mechanisms of macrophage death induced by Mtb infection will allow us to identify novel targets for HDT in TB treatments. Ferroptosis is a PCD induced by overwhelming lipid peroxidation reactions, and one of mechanisms of Mtb spread following the infection. In the present study, we found that heme oxygenase-1 (HMOX1) and pro-ferroptosis cytokines were upregulated, but glutathione peroxidase 4 (GPX4) and other key anti-lipid peroxidation factors were downregulated in TB patients and lungs of mice infected with BCG. 
A mechanistic study further demonstrated that HO-1 regulated the lipid peroxidation-mediated production of ROS and ferroptosis in macrophages in response to BCG infection. These findings suggest that HO-1 is an essential regulator of Mtb-induced ferroptosis, which regulates ROS production and alters macrophage death in response to Mtb infections, suggesting that HO-1 is a potential target for developing HDT in TB treatments. Metabolic interactions between host cells (macrophages) and pathogen (Mtb) significantly affect the host immune responses and the proliferation of pathogen, and the outcome of Mtb infection (Cambier et al., 2014;Olive and Sassetti, 2016). Iron is one of the most important substances essential to the growth of both pathogen and host cells, which is required for the maintenance of macrophage functions, and the survival of intracellular Mtb (Baatjies et al., 2021). Therefore, the competition in the use of iron (Fe 2+ ) source between the macrophages and Mtb may affect the intracellular level of Fe 2+ , and may be critical for the determination of the form of host cell death (necroptosis or apoptosis) and the fate of intracellular pathogens (eliminated or escaped) (Jones and Niederweis, 2011;Mitra et al., 2019;Zhang et al., 2020;Rodriguez et al., 2022). Ferroptosis is mainly triggered by extra-mitochondrial lipid peroxidation (Dixon et al., 2012), which is characterized by lipid peroxidation as a consequence of iron-dependent accretion of ROS (Dixon and Stockwell, 2014;Amaral et al., 2019). This new form of cell death was also corroborated in macrophages in response to Mtb infection, and it was the main cause of lung tissue necrosis caused by Mtb (Amaral et al., 2019). This finding has gained a great interest in the field of TB research. In the present study, differential expression of genes related to ferroptosis was identified in peripheral blood of TB patients by transcriptome analysis. In particular, the ferroptosis signature gene GPX4 was downregulated, accompanied by the upregulation of HMOX1 gene, HO-1, and Hemin content in TB. These observations were further confirmed in RAW264.7 cells infected with BCG. The in vitro study demonstrated that HO-1-regulated ROS played a key role in macrophage ferroptosis induced by BCG infection. Mechanistically, scavenging ROS with NAC inhibited BCG-induced macrophage ferroptosis and the release of BCG into the culture medium caused by necroptosis/ferroptosis during the infection, while siRNA-mediated knockdown of Hmox1 gene expression resulted in opposite effects to NAC treatments. Our results suggest that HO-1 plays a key role in Mtb-induced ferroptosis, through a mechanism by which it regulates ROS production and alters macrophage death in response to Mtb infections. Heme is a major source of iron for Mtb growth and survival, implying that Mtb has evolved a complex heme enzymatic mechanism (Jones and Niederweis, 2011;McLean and Munro, 2017;Mitra et al., 2019;Zhang et al., 2020). During the Mtb infection, increased ROS production, oxidative mitochondrial damage, and the inducible isoform of HO-1 were observed in macrophages (Amaral et al., 2016;Amaral et al., 2019). All these lines of evidence suggest that ferroptosis may be a primary form of Mtb-induced macrophage death (Amaral et al., 2016;Amaral et al., 2019). Functionally, HO-1 catalyzes the degradation of oxidant heme into biliverdin, iron, and carbon monoxide (CO) (Szade et al., 2021). 
HO-1 was significantly upregulated in TB patients and in experimental animals infected with Mtb, and it was a potential target of HDT for TB (Chinta et al., 2021; Uwimaana et al., 2021). In the present report, the increase of both HO-1 protein and Hemin content, a main source of iron for Mtb and host cells, was also observed in the peripheral blood of TB patients, at the transcriptional and/or translational levels, and it was strongly correlated with several cytokines. In particular, it was negatively associated with the level of the ferroptosis inhibitors (regulators) GPX4 and FSP1 in the peripheral blood of TB patients and in RAW264.7 cells infected with BCG. Of great interest, the BCG-mediated inhibition of Gpx4 was only observed in RAW264.7 cells infected with BCG at a low dose (MOI = 5). A high dose of BCG infection (MOI of 10 and 15) induced Gpx4 expression in this type of cells (Figures 2D, E). Flow cytometry assay further revealed that the low dose (MOI = 5) of BCG infection induced necrotic cell death, while the high doses (MOI = 10 and 15) of BCG promoted apoptotic cell death in RAW264.7 cells (Figure 2J). These data suggest that a low dose of Mtb infection favors macrophage ferroptosis, but an increased load of bacteria within a certain range may induce cell apoptosis. We currently do not fully understand the mechanism underlying the dynamic changes of Gpx4 in RAW264.7 cells infected with different doses of BCG, i.e., an inhibition of Gpx4 expression at an MOI of 5 but an induction at an MOI of 10 or 15. Biochemistry studies demonstrate that ferroptosis is mainly induced through lipid ROS generated by the iron-catalyzed Fenton reaction. The Fenton reaction requires ROS and ferrous iron for its initiation, as well as the space of an intact cell (Tang et al., 2021; Stockwell, 2022). Indeed, scavenging ROS with NAC significantly increased Gpx4 and reduced ferroptosis and intracellular bacteria release in RAW264.7 cells in response to BCG infection. (FIGURE 7 legend: Effect of ROS and Hmox1 on intracellular bacterial loads in RAW264.7 macrophages. Intracellular ROS was scavenged by NAC treatment, and the expression of Hmox1 was inhibited by the transfection of siRNA to Hmox1. NAC-pretreated or si-Hmox1-transfected RAW264.7 macrophages were incubated with BCG at an MOI of 5 for 1 h, and cells were rinsed to remove unattached bacteria prior to being cultured with fresh medium for an additional 24 h. The bacteria released into the culture medium were counted by CFU assay. (A) The count of colonies in the medium of cells pretreated with NAC. (B) The count of colonies in the medium of cells transfected with si-Hmox1. The NAC pretreatment significantly reduced the bacteria released from cell necrosis/ferroptosis death, while si-Hmox1-mediated knockdown of the Hmox1 gene strikingly increased the CFU count in RAW264.7 cells infected with BCG. Data obtained from three independent experiments were processed using GraphPad Prism 8.0.1 software. Unpaired t-test was used to analyze the differential changes of the two groups. ***p < 0.001; n = 3.) In addition, cell ferroptosis does not produce apoptotic vesicles as cell apoptosis does, and it does not cause severe cell morphological changes as necrosis does either; therefore, it maintains the cell integrity, which favors the Fenton reaction and leads the infected RAW264.7 cells to undergo ferroptosis at an MOI of 5 in this study.
In contrast, the increase of bacteria load with an MOI of 10 or 15 in RAW264.7 cells induced cell apoptosis to remove the number of intracellular bacteria ( Figure 2J), which might in turn inhibit cell ferroptosis by two possible mechanisms. First, the process of formation of apoptotic vesicle has the potential to reduce intracellular ROS and block Fenton reaction. Second, divalent iron is highly oxidizing and easily converted into trivalent iron due to loss of electrons (Sukhbaatar and Weichhart, 2018;Nairz and Weiss, 2020). Trivalent iron is stored in ferritin of cells, while only cells in a viable state have an intact iron metabolism able to degrade ferritin, subsequentially release trivalent iron from ferritin, and ultimately reduce the trivalent iron to divalent iron (Li et al., 2020;Mesquita et al., 2020). However, whether the increase of apoptosis is correlated with a reduced ferroptosis needs further study. ROS generated from lipid peroxidation is crucial in cell ferroptosis (Conrad et al., 2018), and HO-1 is an essential cytoprotective enzyme that inhibits inflammation and oxidative stress (Araujo et al., 2012;Rockwood et al., 2017;Seiwert et al., 2020;Chinta et al., 2021;de Oliveira et al., 2022), and a target for HDT of TB (McLean and Munro, 2017;Chinta et al., 2021). The expression of HO-1 is largely dependent on oxidative stress in Mtb-infected cells, where an elevated expression of HO-1 is a strategy of host cells in response to oxidative stress triggered by intracellular bacteria (Rockwood et al., 2017). HO-1 exhibits an ability to inhibit ROS formation, DNA damage, and cytotoxicity induced by heme iron in human colonocytes (Seiwert et al., 2020). Importantly, the interplay between HO-1 and iron metabolism has been demonstrated to play a critical role in modulating immune responses of macrophages, as iron is a key product of HO-1 activity for cellular biological processes in both eukaryotic cells and bacteria (Li and Stocker, 2009). Therefore, HO-1 regulates intracellular iron levels to modulate the cellular oxidation and immune responses of macrophages (de Oliveira et al., 2022). Despite the fact that HO-1 is known to play a protective role in host cells during an infectious process, the upregulated HO-1 in Mtbinfected cells increases intracellular iron level and ROS production, and subsequentially leads to lipid peroxidation and ferroptosis (Yang et al., 2022). In this report, the siRNAmediated reduction of Hmox1 expression increased ROS production, lipid peroxidation, and intracellular iron, and further induced ferroptosis in RAW264.7 cells infected with BCG at an MOI of 5. In addition, the siRNA silence of Hmox1 gene also increased expression of iron autophagy protein Ncoa4, a critical cytokine in the maintenance of iron homeostasis (Bellelli et al., 2016). Together, the findings of others and this study, and the discrepancy of results from different studies imply the complicated and dynamic biological process of ferroptosis in cells in response to Mtb infections. The process of ferroptosis is tightly regulated, and may depend on the species, dose and virulence of pathogen, and host cell-type context, which requires further investigation. Collectively, in the present report, we revealed the upregulation of HMOX1 but a downregulation of GPX4 in TB patients, and lungs of mice infected with BCG. 
An in vitro mechanistic study using murine macrophage-like RAW264.7 cells further demonstrated that Hmox1 regulated intracellular levels of ROS and Fe 2+ , and subsequentially modulated cell ferroptosis induced by BCG infection. The siRNA-mediated knockdown of Hmox1 gene increased intracellular ROS, Fe 2+ , and Ncoa4, and promoted cell ferroptosis and the release of intracellular BCG, while scavenging ROS demonstrated opposite effects to that of siRNA-mediated knockdown of Hmox1 gene. Our results suggest that HO-1 is an essential regulator of Mtbinduced ferroptosis, which regulates ferroptosis by modulating intracellular ROS production and Fe 2+ to alter macrophage death against Mtb infections. Data availability statement The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ supplementary material. Ethics statement This study was reviewed and approved by The Ethic Committee of Human Research at General Hospital of Ningxia Medical University. The patients/participants provided their written informed consent to participate in this study. The animal study was reviewed and approved by The Laboratory Animal Welfare Ethics Review Committee of Ningxia University.
9,132.2
2022-09-23T00:00:00.000
[ "Biology", "Medicine" ]
Virtual Sensors for Designing Irrigation Controllers in Greenhouses Monitoring the greenhouse transpiration for control purposes is currently a difficult task. The absence of affordable sensors that provide continuous transpiration measurements motivates the use of estimators. In the case of tomato crops, the availability of estimators allows the design of automatic fertirrigation (irrigation + fertilization) schemes in greenhouses, minimizing the dispensed water while fulfilling crop needs. This paper shows how system identification techniques can be applied to obtain nonlinear virtual sensors for estimating transpiration. The greenhouse used for this study is equipped with a microlysimeter, which allows one to continuously sample the transpiration values. While the microlysimeter is an advantageous piece of equipment for research, it is also expensive and requires maintenance. This paper presents the design and development of a virtual sensor to model the crop transpiration, hence avoiding the use of this kind of expensive sensor. The resulting virtual sensor is obtained by dynamical system identification techniques based on regressors taken from variables typically found in a greenhouse, such as global radiation and vapor pressure deficit. The virtual sensor is thus based on empirical data. In this paper, some effort has been made to eliminate some problems associated with grey-box models: advance phenomenon and overestimation. The results are tested with real data and compared with other approaches. Better results are obtained with the use of nonlinear Black-box virtual sensors. This sensor is based on global radiation and vapor pressure deficit (VPD) measurements. Predictive results for the three models are developed for comparative purposes. Introduction Crop growth is primarily determined by climatic variables of the environment and the amount of water and fertilizers applied through irrigation. Therefore, controlling these variables allows for control of the growth. The greenhouse environment is ideal for farming because these variables can be manipulated to achieve optimal growth and plant development. All crops need solar radiation, CO 2 , water, and nutrients to produce biomass (roots, stems, leaves, and fruits) through the process of photosynthesis. During this process, and when the leaves stomata are opened to capture the CO 2 , the plant emits water vapor through the transpiration process. This becomes a cost that the crop must make to produce dry matter. Moreover, water is lost through evaporation from the soil. The sum of these water losses is known as evapotranspiration. The losses must be compensated through irrigation. Besides, it has been demonstrated in padded greenhouses with soil covered with plastic blankets that the amount of evaporation is negligible. This happens when dealing with hydroponic cultivations [1]. According to this, water should be applied in precise amounts to cover only water losses due to crop transpiration. Excess water would mean an excessive washing out of fertilizers. In turn, it could lead to contamination of the subterranean water, or the flooding of the substratum or radicular asphyxiation. Otherwise, a hydric deficit may be provoked if irrigation does not provide enough water. This can lead to a decrease in production and can even be dangerous for the crop growth. Hence, automatic irrigation control systems are fundamental tools to supply water to the culture in the required amount and frequency. 
Moreover, as water is a limited resource in many agricultural areas, optimizing productivity through efficient and adequate irrigation is a basic objective. In order to design a good automatic irrigation system, the following questions must be answered: what should the frequency of the irrigations be, and how much water should be applied in the irrigation? To answer these questions, it is necessary to know how much water should be applied to replenish the losses due to the transpiration during the plant's respiration. In most of these works, the microlysimeter became the basic measurement device to record the water losses in crops, subtracting the water content in an instant (t) by the water content in another instant (t−1). However, on many occasions, the measurements were not continuous due to the irrigation process or during the water drainage. Furthermore, it is seldom used by farmers since this device is expensive to acquire and to maintain. From an operational point of view, it is important to find alternatives to this irrigation system gadget. Thus, virtual sensors based on transpiration become a good option to reduce total system cost, especially in the agriculture sector where profit margins are so narrow. Such virtual sensors must be based on sensors that are typically installed in greenhouses for climate control (temperature, humidity, and solar radiation), thereby reducing the installations costs. Virtual sensors become a very efficient and powerful tool that has been successfully used in other fields [10][11][12]. These sensors utilize models in order to estimate features from low-cost measurements. Ideally, the virtual sensors should be simple and obtainable from the collected data. It should not require extensive training. Virtual sensors are useful in replacing physical sensors, thus reducing hardware redundancy and acquisition cost, or as part of the fault detection methodologies by having their output compared with that of a corresponding actual sensor. Virtual sensors may be developed based on mathematical models obtained directly from the Physics of the system and first principles. In many cases, such mathematical models are unavailable, or their exact parameter values are unknown, or they are too complicated to be used. For this reason, the development of virtual sensors often has to be based on system identification [13]. The purpose of the current study is to develop a virtual sensor to infer transpiration from other easily measured variables. For this purpose, the use of the microlysimeter as the sensor to calculate the transpiration must be substituted, due to its high cost in acquisition and maintenance. In this paper, the development of the virtual sensor for transpiration makes use of different techniques for data preprocessing, including the selection of variables, the construction of appropriate training, the selection of test sets, the final validation and the performance assessment. The resulting virtual sensor has been validated and compared with real data and with other virtual sensors in the literature, providing promising results. The paper is organized as follows: Section 2 shows a background of different sensors used to take transpiration measurements. The different virtual sensors are shown in the Section 3. Section 4 gives an overview of the greenhouse where the experiments were performed, and its main characteristics, as well as the collected experimental data. The main results and discussions are summarized in Section 5. 
Finally, the major conclusions are drawn. Crop Transpiration The irrigation control systems are essential tools to provide water to the crop in the required amount and frequency. Moreover, water is a limiting resource in many agricultural areas. In such places, it should be a basic objective to optimize their management and productivity through adequate and efficient irrigation. The proposed control algorithm design ( Figure 1) is a hierarchical control system, consisting of two levels: • The control level uses an event-based PI controller [14], to control when a certain event occurs, either by time, by variation of a particular climatic variable such as radiation V S R ), or by a particular state crop. A PI controller is used to achieve the setpoint of water supply (X Q W r) as it considers the top layer of the architecture. • The setpoint generation level is based on the greenhouse climate conditions, including: (1) vapor pressure deficit (VPD, V V P D ), which is a function of the temperature and the relative humidity, (2) global radiation, and (3) the state of the crop, measured through the Leaf Area Index (LAI, X LAI ). The setpoint is fixed by the user and is defined as the crop transpiration accumulated (X ET ) until an irrigation event occurs (X Q W ). This event could be a determined transpiration accumulated setpoint or other predetermined conditions. The virtual sensor accumulates the amount of water lost by transpiration from the plant's last irrigation. This measurement is compared with the fixed amount that activates irrigation (setpoint). The irrigation starts when this fixed value is exceeded, and finishes as soon as the amount of water lost has been replenished. A virtual sensor requires an accurate calibration and validation of the instantaneous transpiration measurements at each sampling instant. The measurement of transpiration includes direct (observation, porometer, lysimeter, etc.), and indirect methods (water budget, energy balance, etc.). The indirect methods are more complicated for prediction and verification of the transpiration values because of the meteorological factors involved. The transpiration data can be collected through direct methods such as: • The porometer [15] allows the determination of the leaf conductivity as an index of the stomatal opening and the closing of stomata. It measures the flow of gases or diffusion that takes place through the stomata. The latest porometers allow computerized records. • The bag method [16] collects the water transpired by introducing a branch in a clear plastic bag. The transpired water condenses inside the bag. The total water lost by transpiration corresponds with the weight of the placed water. The time between measurements is undefined, as it depends on the water collected. • In the cobalt chloride method [17], the transpiration is indicated by a color change of a piece of filter paper impregnated with a 3% solution of cobalt chloride. It is applied on a leaf and held in place with a clip. It is blue when dry and pink when wet. The speed at which the paper changes color is an indication of the rate of transpiration. This method can be used to measure the relative rates of transpiration of different species. • The microlysimeter [1,4] is used in plants growing in pots completely closed. The plants are first weighed before measuring, and then they are weighed again at convenient time intervals. Soil evaporation is avoided by covering it with waterproof material. 
This method can be used with small plants and crops in soilless culture. The results are expressed in grams or milliliters of water transpired per leaf area per unit time. The microlysimeter is the basic measurement device used to continuously record the water losses in crops. Contrarily, the rest of the described sensors take measurements at intervals as they modify the working conditions of the plants. The porometer and microlysimeter are expensive to acquire and maintain, and are difficult for the farmers to manage. Virtual Sensors This section is devoted to describing the main features of three different types of virtual sensors for transpiration designed to replace the microlysimeter in the automatic irrigation system. The aim of such virtual sensors is to substitute the expensive microlysimeter in measuring the transpiration and controlling the irrigation. Figure 2 shows the input variables of a virtual sensor for crop transpiration: (1) global (solar) radiation, (2) Vapor Pressure Deficit (VPD), which is a function of the temperature (T) and the relative humidity (RH), and (3) Leaf Area Index (LAI). These variables are measured by two different sensors typically installed in commercial greenhouses: a psychrometer for the temperature and the relative humidity (VPD), and a pyranometer for the solar radiation. The transpiration model needs to calculate the LAI. The LAI measurements are taken in a noncontinuous way. In this paper, a simplified TOMGRO model [18] adapted to the Mediterranean conditions is utilized to estimate the LAI. The simplified TOMGRO model needs the temperature as the input. Based on the architecture of Figure 2, three different virtual sensors are considered in this paper. The first one is based on a pseudo-physical structure. The others have a more empirical structure with linear and nonlinear behaviors, respectively. All three kinds are described in the following. Pseudo-Physical (Grey-Box) Virtual Sensor Most transpiration estimators are based on the Penman-Monteith equation. In 1948, Penman derived an equation that combined the energy balance and the convective transport of vapor. Afterwards, this model was adapted by Monteith to estimate actual evapotranspiration from plants [19]. This equation essentially combines the equation for heat transfer between the crop and the mass of the surrounding air. In this way, various authors have obtained new formulations without satisfactory results for various crops. The tomato evapotranspiration studies of [20] have as a main drawback the estimation of the leaf stomatal and aerodynamical resistances. Jolliet and Bailey [15] concluded that a layer model proposed in [1] predicts accurately the transpiration in tomatoes. These authors observed the tomato crop transpiration (X ET ) increases linearly with the radiation (V S R ), the vapor pressure deficit (V V P D ), and the wind speed inside the greenhouse. They also pointed out that transpiration can be regulated by not only humidity but also radiation and wind speed. These same authors simplified the Penman-Monteith equation [21]. to describe the transpiration (X ET ) as a process based on two main variables: the solar radiation (V S R ) arriving at a particular depth in the canopy plant, and the vapor pressure deficit V V P D Equation (1). The reduced virtual sensor is shown in the following equation: where the coefficients C A and C B are the parameters dependent on the crop. 
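A hedged sketch of this reduced formulation, together with the VPD computation it relies on, is given below in Python. The functional form used (a radiative term attenuated through the canopy plus a term proportional to LAI and VPD) is the one commonly associated with this family of simplified Penman-Monteith models and is assumed here; the paper's exact Equation (1), its unit handling, and its constants are not reproduced, and the illustrative coefficient values are merely of the order reported later in the calibration.

    import math

    def vapor_pressure_deficit(temp_c, rel_humidity_pct):
        """VPD (kPa) from air temperature (degC) and relative humidity (%), Tetens formula."""
        e_sat = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
        return e_sat * (1.0 - rel_humidity_pct / 100.0)

    def transpiration_reduced_pm(v_gr, v_vpd, x_lai, c_a, c_b, c_k=0.64):
        """Assumed reduced Penman-Monteith form: attenuated radiation term + LAI*VPD term.

        Unit handling (e.g., division of the radiative term by the latent heat C_lambda)
        is deliberately omitted, so the output is schematic rather than g m-2 min-1.
        """
        radiative = c_a * (1.0 - math.exp(-c_k * x_lai)) * v_gr
        advective = c_b * x_lai * v_vpd
        return radiative + advective

    # Illustrative mid-day call
    vpd = vapor_pressure_deficit(28.0, 60.0)   # roughly 1.5 kPa
    print(round(transpiration_reduced_pm(450.0, vpd, x_lai=2.0, c_a=0.49, c_b=11.2), 2))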
In [2], it was observed that the coefficient C_B increases with X_LAI and, furthermore, that the coefficient adopts different values during the day because of oscillations in stomatal resistance (Equation (2)). These daily oscillations were corrected by using two different parameters for the diurnal (C_B,D) and nocturnal (C_B,N) periods, where X_ET is the crop evapotranspiration (g m^-2 min^-1), C_λ is the latent heat of vaporization, C_k is the light extinction coefficient of the crop (it is related to the leaf inclination angle and the leaf arrangement with regard to the Leaf Area Index, and indicates how efficiently the plant intercepts solar radiation), X_LAI is the leaf area index (m^2 m^-2), V_VPD is the vapor pressure deficit (kPa), and V_GR is the global radiation reaching the crop (W m^-2). The coefficients C_A (unitless) and C_B (kg m^-2 h^-1 kPa^-1) are constants that depend on the crop. To obtain more reliable results from the virtual sensor, the parameter C_B is obtained separately for the diurnal (C_B,D) and nocturnal (C_B,N) periods through calibration.

Black-Box (Empirical) Virtual Sensors

System identification addresses the problem of constructing mathematical models of dynamic systems from observed data [22]: inputs u(t) and outputs y(t). The goal is to infer the relationship between the sampled outputs and inputs. The identification process was carried out based on prior knowledge of the dynamic behavior of gases [14] and of transpiration [1,2,6]. VPD and global radiation were selected as climatic inputs and the LAI as the crop growth input. This approach is used with the objective of understanding how the dynamics of the system work and of answering some of the questions raised while drawing up this document.

Linear Black-Box Virtual Sensors

Parametric (black-box) virtual sensors are structures capable of representing any system without requiring knowledge of the physical process dynamics. Parametric virtual sensors are not obtained (at least not completely) from the application of physical laws. They are constructed from observations carried out on the system, which are used to select concrete values of the parameters. These values are chosen so that the virtual sensor fits the acquired data; this process is called identification. The adjustment of the parameters is the simplest part of the identification problem [23]. Online identification refers to an algorithm that efficiently uses the measured information as it is obtained from the plant in real time. In this way, it is possible to detect changes in the dynamics of the system and adjust the virtual sensor accordingly. Under some circumstances, these methods can be rather simple (e.g., the recursive least squares method developed in this section). The black-box virtual sensors are developed using the system identification methodology described in [23]. The virtual sensor family contains 32 possible formulations based on Equation (3). To obtain each structure, it is necessary to determine the polynomial orders as well as the coefficients of the numerator and the denominator of each transfer function [23]. The effects of the inputs, the output, and the disturbances are defined in Equation (3), where y(t) is the transpiration, u(t) denotes the input variables, and e(t) is the estimation error. A(z), B(z), C(z), D(z), E(z), and F(z) are the polynomials that define the output (transpiration), the inputs, and the estimation error.
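The 32 formulations mentioned above differ in which of the polynomials A(z)-F(z) are retained. The simplest member of the family, the ARX structure, can be fitted by ordinary least squares once the delayed outputs and inputs are stacked into a regressor matrix. The following Python sketch is a minimal stand-in for Matlab's System Identification Toolbox, not the code used in the paper; the orders and the synthetic data are placeholders.

    import numpy as np

    def fit_arx(y, u1, u2, na, nb, nk=0):
        """Least-squares fit of y(t) from na past outputs and nb past values of two inputs."""
        start = max(na, nb + nk)
        rows, targets = [], []
        for t in range(start, len(y)):
            past_y = [-y[t - i] for i in range(1, na + 1)]
            past_u1 = [u1[t - nk - i] for i in range(0, nb)]
            past_u2 = [u2[t - nk - i] for i in range(0, nb)]
            rows.append(past_y + past_u1 + past_u2)
            targets.append(y[t])
        phi = np.asarray(rows)
        theta, *_ = np.linalg.lstsq(phi, np.asarray(targets), rcond=None)
        return theta   # [a_1..a_na, b1_0..b1_(nb-1), b2_0..b2_(nb-1)]

    # Toy usage with synthetic data standing in for transpiration, radiation and VPD
    rng = np.random.default_rng(0)
    u1, u2 = rng.random(500), rng.random(500)
    y = np.zeros(500)
    for t in range(2, 500):
        y[t] = 0.6 * y[t - 1] + 0.3 * u1[t - 1] + 0.1 * u2[t - 1]
    print(fit_arx(y, u1, u2, na=1, nb=1, nk=1))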
The order of the polynomial equations is defined by regressors, where na outlines the outputs, nb is the order of the input, and nk is the delay of the radiation solar (nk V G R ) and the vapor pressure deficit (nk V V P D ). System Identification Matlab's Toolbox [24] was used for the identification process. The ARX, ARMAX, OUTPUT ERROR, BOX JENKINS, and FIR formulations were tried out. The differences among these virtual sensors are the way in which the inputs, outputs and disturbances are defined with parametric equations. The System Identification Toolbox of Matlab (R) software was used to obtain the virtual sensor. Figure 3 shows the main interface of the Toolbox and the whole process to obtain a virtual sensor. The top of the figure displays the calibration and validation process interface, and the bottom shows the output analysis process. The toolbox allows to process data, estimate the parameters of different types of structures, and validate virtual sensors using different strategies. For the identification process, only the data from the experiments are required. The data can be handled in the time domain or frequency domain, and the experiments can have one or multiple inputs and/or outputs [25]. Nonlinear Black-Box Virtual Sensors A nonlinear component was added to the transpiration virtual sensor to get better fitting. This component is introduced as result of the strong nonlinear behavior in the system inputs. Moreover, these nonlinearities add complexity to the virtual sensor. This increase in complexity is not always translated into higher performance. In System Identification the mathematical relationships between the system's inputs u(t), and outputs y(t) can be computed. Such outputs, inputs, and nonlinearities are introduced in an ad hoc form, relying on a priori knowledge about the system. An important step in system identification is to choose a structure, and generally start testing the simpler structures, and lower order. The first structure tested, and in the end chosen as virtual sensors, was the nonlinear ARX (4). Also the Hammerstein-Wiener virtual sensor was tried out, which are very useful in the case of the nonlinearities affect to sensors, and actuators, such as dead zones or saturation [24]. On the other hand, Nonlinear ARX (NonARX) is more flexible [24]. The general structure for Nonlinear ARX virtual sensor is [26]: where y(t) is the output variable in t time; u and y are the different input and output variables (regressors); and f is the nonlinear function. The current transpiration value is predicted as a weighted sum of past values, and current and past inputs values. With such information the equation becomes: u(t − 1), ..., u(t − nb − 1 are the regressors, the so-called delayed inputs and outputs. Nonlinear ARX regressors can be both delayed input-output variables and more complex nonlinear expressions of delayed variables. The nonlinearity estimator block maps the regressors to the virtual sensor output using a combination of nonlinear and linear functions (Figure 4). [24]. u(t) are the model inputs, and y(t) are the model outputs. Available nonlinearity estimators can be selected from a canopy of different structures such as neural networks, tree-partition networks, wavelet networks and piecewise polynomial approximation. The nonlinearity estimator block can include linear and nonlinear blocks in parallel [24]. 
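As a structural illustration of the nonlinear ARX predictor described above, the sketch below feeds a regressor vector of delayed outputs and inputs through a linear term plus a small set of sigmoid units acting in parallel. It only shows the shape of the computation; the wavelet-network nonlinearity actually selected by the toolbox and its fitted weights are not reproduced, and every number in the example call is invented.

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def narx_predict(past_y, past_u, lin_w, units):
        """One-step NARX prediction: a linear part plus a sum of sigmoid units.

        past_y : delayed outputs, e.g. [y(t-1), ..., y(t-na)]
        past_u : delayed inputs for all input channels
        lin_w  : linear weights, one per regressor
        units  : list of (scale, weights, bias) triples for the nonlinear block
        """
        phi = list(past_y) + list(past_u)                      # regressor vector
        linear_part = sum(w * r for w, r in zip(lin_w, phi))
        nonlinear_part = sum(s * sigmoid(sum(w * r for w, r in zip(ws, phi)) + b)
                             for s, ws, b in units)
        return linear_part + nonlinear_part

    # Toy call: two delayed outputs, two delayed inputs, two sigmoid units
    print(narx_predict([1.2, 1.1], [300.0, 1.4],
                       lin_w=[0.5, 0.2, 0.001, 0.05],
                       units=[(0.3, [0.1, 0.1, 0.0, 0.2], -0.5),
                              (0.1, [0.0, 0.2, 0.001, 0.0], 0.2)]))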
Greenhouse Environment The research data used in this work have been obtained from greenhouses located in the Experimental Station of Cajamar Foundation, in El Ejido, in the province of Almeria, Spain (2 • 43'W, 36 • 48'N, and 151 m elevation). The crops grows in a multispan "Parral-type" greenhouse ( Figure 5); [18]. The greenhouse has a surface of 877 m 2 (37.8 × 23.2 m), polyethylene cover, automated ventilation [27] with lateral windows in the northern and southern walls, flap roof window in each span, mesh-protected anti-trips "bionet" of 20 × 10 thickness, and night heating applied with a 95 kW hot air heater that is programmed to maintain the minimum temperature above 14 • C. The greenhouse orientation is east-west with the crop rows aligned north-south. Cropping conditions and crop management are very similar to those in commercial greenhouses. Climatic parameters are continuously monitored within the greenhouse. Outside the greenhouse, a meteorological station was installed, in which air temperature, relative humidity, solar and photosynthetic active radiation (PAR), rain detector, wind direction, and velocity measurements were taken. The cover temperature sensors were located on the faces oriented to the east (two sensors), and west (two sensors). During the experiments, the inside climate variables were also taken, among which stand out: air temperature, and relative humidity with a ventilated psychrometer (model MTH-A1, ITC, Almeria, Spain), solar radiation with a pyranometer (model MRG-1P, ITC, Almeria, Spain), and Photosynthetic active radiation (PAR) with a silicon sensor (PAR Lite, Kipp-Zonnen, Delft, The Netherlands). Among all the climate sensors installed in the greenhouse, only solar radiation and psychrometer was used for the transpiration virtual sensors. The daylight air temperature and humidity are controlled by the top and side windows through the PI controller [28]. Potentiometers allow for knowing the window's position in each control instant. The night air temperature and humidity is controlled by the windows and the heating system [28]. Setpoints of both systems are established at 24 • C [27], and 14 • C for the ventilation and heating, respectively. All the actuators are driven by relays designed for this task. All climatic data was recorded every minute with a personal computer. The acquisition system is formed by two different National Instrument Compact-Fieldpoints connected through Ethernet protocol. For the growth model, it was necessary to know the evolution of leaf area index. It was determined through the leaf area measurements of each plant removed for biomass task, the pruning, and deleafing were also taken into account. The biomass was made up of a destructive sampling of five randomly selected plants every 21 days, duration accorded in the research protocol. The choice of 21 days is twofold: first, it was the sufficient amount of time to find growth differences; and second, it helped to avoid the elimination of too much vegetal stuff in the greenhouse which could end in a modification in the climate or transpiration measurements. The biomass process was measured against: number of nodes, leaf area, number of fruits per bunch, fresh and dry weight of leaves, stem, and fruits. The plant material and fruits were introduced into a drying oven where they remained for 24-48 h (depending on the phenological state) at a temperature of 65 • C. Based on this, the dry matter of leaves, stems, and fruits was determined by analytical balance. 
The matter of leaves and secondary stems pruning came from the selected plants for biomass while kept in production; once removed from the plant, the fresh and dry weight was taken, such as the biomass. In the case of pruning, stems and leaves are measured separately. Both the bare and the pruning are carried out for the leaf area index measures, executed, as in the biomass, through electronic planimeter (Delta-T Devices Ltd). Microlysimeter was the system chosen to take the transpiration measurement in the present paper ( [18]; Figure 6). The device consists in two electronic weighing scales connected to a personal computer. The first (150 kg ± 1 g, Sartorius) records the weight of a bag with six plants, and a support structure. The second weighing scale (20 kg ± 0.5 g, Sartorius), which follows the first, measures the weight of the drainage from the substrate bag. This system has been developed by the Automation, Electronics, and Robotics Research Group at the University of Almería. The transpiration is calculated as the weight difference between two consecutive time-instants. The six plants scale is required for this calculation. Moreover, The two scales system, microlysimeter, allows for knowing when irrigation begins by changes in weight of the crop unit, as well as knowing when drainage starts (balance of drain) and when both end. As discussed above, an increase in the weight of the scale with the growing unit indicates that irrigation has begun. The process that follows is drainage warned by the heavy increase in the drainage scale, whose end would be indicated through the weight stabilization. From that time, the crop scale would start again to measure the weight loss (transpiration). During the process of irrigation drainage, the value of transpiration is considered as constant, taking the value of transpiration of the moment immediately preceding the irrigation beginning. Transpiration Measurements Validation An important step was to validate the calculation of crop transpiration for each of the cycles, seeing that this data corresponds to the transpiration of the crop at that time. For this issue, five trays with twelve plants were installed and evenly distributed throughout the greenhouse so as to make their mean representative of the entire crop. The trays consisted of two bags of substrate with six plants in each bag, which gives us a total of twelve plants per tray and 60 in the entire test. All trays had drainage connected to a bucket whose sample was collected daily at the same hour. Thus, the average value of these buckets were taken daily to calculate the real drainage. Two differently located droppers were selected to collect daily irrigation amounts in the greenhouse; the final value was estimated from the average of both measures. With data from the drainage and from the droppers, the daily measured consumption was calculated and compared with accumulated daily transpiration from the data every minute, obtaining the graphs (Figures 7 and 8 As shown in the graphs, daily and accumulated transpiration almost exactly match, according to R 2 of the regression. In conclusion, transpiration data calculated from the balance system gives a closer idea of the values of real transpiration. Results of regression show a high R 2 value and a slope with a value close to one, obtaining a graph of estimated and measured values approaching a 1:1 line, which would mean that for an instant "t", a value of transpiration very close to real plant consumption was obtained. 
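In code form, the measurement logic described above (weight differences between consecutive one-minute instants, with the transpiration held constant while irrigation and drainage are under way) could be sketched as follows. The threshold-based event detection, the threshold values, and the function name are illustrative assumptions, not the acquisition software actually used with the microlysimeter.

    def transpiration_series(crop_weight, drain_weight, irrigation_rise=2.0, drain_rise=0.5):
        """Per-step transpiration (grams) from crop-scale and drainage-scale weights.

        A weight increase on the crop scale larger than `irrigation_rise` marks the start
        of irrigation; transpiration is held at its pre-irrigation value until the crop
        weight stops rising and the drainage weight stabilises (changes < `drain_rise`).
        """
        transp, held_value, in_event = [], 0.0, False
        for t in range(1, len(crop_weight)):
            d_crop = crop_weight[t] - crop_weight[t - 1]
            d_drain = drain_weight[t] - drain_weight[t - 1]
            if d_crop > irrigation_rise:                       # irrigation detected
                in_event = True
            if in_event and d_crop <= 0 and abs(d_drain) < drain_rise:
                in_event = False                               # drainage has stabilised
            if in_event:
                transp.append(held_value)                      # hold pre-irrigation value
            else:
                held_value = max(-d_crop, 0.0)                 # weight loss = water transpired
                transp.append(held_value)
        return transp

    # Toy example: steady loss, then an irrigation pulse followed by drainage
    w_crop = [100.0, 99.8, 99.6, 104.6, 104.4, 104.2]
    w_drain = [10.0, 10.0, 10.0, 10.0, 10.8, 10.8]
    print(transpiration_series(w_crop, w_drain))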
Virtual Sensors Calibration Once the transpiration values were obtained, the next step was searching a substitution of this system with transpiration virtual sensors based on Penman-Monteith (P-M). The equation combines the energy balance with the vapor convective transport. In the last years, different physical and pseudo-physical virtual sensors (grey-box) based in the P-M equation have been developed and tested by different authors. For this paper, the pseudo-physical P-M simplification proposed by [2] was chosen because of the good results obtained by [6] for pepper crops in the same conditions. In this paper, these good results were obtained by fitting the parameters differentiating summer and winter seasons and with the different crop development stages. Furthermore, a delay is reported between measured and predicted values for some particular conditions. The causes of the observed delay were explored and a climate variables dependency was found as other authors assert ( [6,29]). On the other hand, the proposed black-box dynamic virtual sensors would be used to design events based on fertirrigation controller. In order to use a modern control algorithm, the use of dynamic virtual sensors joined system identification techniques and are presented as an alternative to physical or pseudo-physical virtual sensors. The proposed virtual sensors incorporate the dynamics of transpiration and will be of varying complexity, beginning with linear black-box virtual sensors fitted to data. Nonlinear virtual sensors based in system identification was tried out obtaining good results. Nonlinearities will be introduced in an ad hoc form, relying on a priori knowledge about the system. Grey-Box Virtual Sensor The calibration of the virtual sensor of transpiration proposed by [2] were performed with two different seasons: one in spring, in 2005 (Table 1), and the other in autumn-winter, in 2006-2007. For the calibration of the spring-summer cycle, all the data gathered during the months of February and July 2005 was used. In contrast, in the case of the winter cycle, the data used was from August 2006 to February 2007. The parameters were determined using an iterative sequential algorithm to minimize the least square error criterion between the real and the estimated transpiration (Montecarlo algorithms). The second phase of the calibration process was based in genetic algorithms to fix the final parameters. The values obtained from the extinction ratio of the radiation (C K ), 0.64 for spring-summer cycles and 0.6 for autumn-winter, matched the results obtained by other authors in the autumn-winter cycle, with equivalent closeness, who obtained an extinction ratio of 0.63. [1] determined a value of 0.64 to cultivate the tomato, [6] obtained values of 0.63 to cultivate the cucumber. For most horticultural crops in greenhouses, the values of (C K ) fluctuate between 0.4 and 0.8 [6]. Values obtained from parameters C A , C B D and C B N are different for both groups of crop cycles to which have been referred, obtaining values in the spring cycle for C A , C B D , and C B N of 0.49, 11.2, and 8.28 respectively, and for the autumn-winter cycle, the same parameters obtained values: C A , C B D , and C B N of 0.3, 18.7, and 8.3, respectively. The next table shows the results of the different parameters calibration: Table 1. Results of the calibration in spring-summer and autumn-winter seasons. 
Season          C_A    C_B,D   C_B,N   C_K
Spring-summer   0.49   11.2    8.28    0.64
Autumn-winter   0.3    18.7    8.3     0.6

In general, the values obtained for these coefficients fell within the interval of values reported by other authors and gathered in [30]. The virtual sensor showed good dynamic behavior, although there is a small overestimation of the estimated values with respect to the measured ones at the start and the end of the crop seasons. Although this overestimation exists, it does not reach a significant level, so a first analysis suggests that a correction would not be strictly necessary, though it is recommended. The presence of a phenomenon known as delay, in which the dynamics of the estimated values are advanced with respect to the real ones, was also observed, as other authors have previously reported ([6,29]). To calibrate the grey-box virtual sensor described in Section 3.1, a growth model is required to provide the LAI estimates. The simplified Tomgro model [18] is an option that removes the complexity of the full model proposed by [31] and makes it usable in online control systems while retaining its physiological characteristics [32]. The parameters that govern the dynamics of X_LAI were calibrated and validated, first by [18] and later by [33], in the same greenhouse for tomato crops.

Linear Black-Box Virtual Sensors

To obtain a virtual sensor, it was necessary to choose two groups of transpiration data for the system identification toolbox. One group from spring 2007, with a total of 53,490 data points, was used for identification. For validation of the identification, 49,990 data points from winter 2004 were used. The remaining data were used to assess the virtual sensor's reliability. The black-box virtual sensors cannot handle records with missing time slots; for this reason, smaller groups of data are used. Table 2 shows the virtual sensors that were obtained. More than 1,500 structures were tested, of which two ARX virtual sensors and one ARMAX virtual sensor were kept for validation. In this case, the LAI was not introduced into the system as an input, because the leaf area index remains constant within a single day and therefore lacks the dynamics of the remaining input variables. X_LAI was instead used to divide the crop cycle into different intervals: from 0 to 0.7, from 0.7 to 1.5, and above 1.5 (m^2 crop m^-2 soil). For this division, the LAI could easily be replaced by, for instance, days after planting (DAP) or other time units.

Table 2. Linear black-box virtual sensors obtained for each LAI interval, where LAI is the Leaf Area Index (dimensionless), na is the order of the polynomial that defines the output, nb is the order of the input, nc is the order of the error polynomial, and nk is the delay of the solar radiation (nk_VGR) and of the vapor pressure deficit (nk_VVPD).

LAI interval    Virtual sensor   na   nb   nc   nk_VGR   nk_VVPD
0.7 or lower    ARX450           4    5    0    0        0
0.7 to 1.5      ARX540           5    4    0    0        0
1.5 or higher   ARMAX55240       5    5    2    4        0

The main problems encountered with the static virtual sensor, namely the overestimation at low leaf area index values and the underestimation at very high values, as well as the delay phenomenon between the estimated and the real dynamics, are intended to be corrected by using dynamic virtual sensors.

Nonlinear Black-Box Virtual Sensors

The first preprocessing step was to remove the means and trends from the inputs and outputs.
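A sketch of that preprocessing step, using a simple least-squares removal of the mean and linear trend, is given below; whether the toolbox's built-in detrending or a custom routine was used is not stated in the text, so this is only illustrative.

    import numpy as np

    def remove_mean_and_trend(x):
        """Subtract the mean and the best-fit linear trend from a 1-D series."""
        x = np.asarray(x, dtype=float)
        t = np.arange(x.size)
        slope, intercept = np.polyfit(t, x, 1)
        return x - (slope * t + intercept)

    # Toy check: a ramp plus noise comes back roughly zero-mean and trend-free
    rng = np.random.default_rng(3)
    series = 0.02 * np.arange(300) + 5.0 + rng.normal(0.0, 0.1, 300)
    detrended = remove_mean_and_trend(series)
    print(round(detrended.mean(), 6), round(np.polyfit(np.arange(300), detrended, 1)[0], 6))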
It is worth noting that X_LAI was not introduced into the system as an input, because the leaf area index remains constant within a single day and therefore lacks dynamics as an input variable. Many tests with the Matlab toolbox were carried out before reaching this conclusion. First, X_LAI was introduced into the virtual sensor as a regressor (X_LAI(t − i)), which produced a poor fit of the output. A regressor based on the equation for the solar radiation reaching a given depth in the canopy ([2]; Equation (6)) was also tried out, with the same result, which was in fact expected. In Equation (6), V_GR is the global radiation reaching the crop (W m^-2), C_k is the extinction coefficient of radiation (unitless), and X_LAI is the leaf area index (m^2 crop m^-2 soil). As noted above, the LAI remains constant during a given day, as does the parameter C_k, which means this regressor is constant during a day and depends exclusively on the radiation. The virtual sensor was also evaluated using the inside temperature and the relative humidity, the variables from which the VPD is estimated, as inputs. The virtual sensors obtained with these two variables showed no improvement compared with those calibrated using only the VPD. In the end, the radiation and the vapor pressure deficit remain as the only inputs.

Table 3. Results of the nonlinear virtual sensor calibration: na is the order of the polynomial that defines the output; nb is the order of the input; and nk is the delay of the solar radiation (nk_VGR) and of the vapor pressure deficit (nk_VVPD).

Virtual sensor   na   nb   nk

In order to obtain the number of terms in the regressors, it is necessary to build models that try out the different combinations. In addition to the number of terms, the nonlinearities were estimated with different structures: wavelet networks, tree partitions, and sigmoid networks. Of all the tested ways to represent the nonlinearities, the wavelet network, combining a nonlinear block and a linear block, gave the best fit. Table 3 shows the parameters of the resulting virtual sensor, which was obtained for prediction purposes.

Virtual Sensors Validation

All the available data (more than one million values for each variable) have been used for the validation. In total, nine different spring-summer and autumn-winter seasons were used. Figures 9, 10 and 11 show examples of the results obtained in the validation of the grey-box, linear, and nonlinear dynamic virtual sensors in the different cycles. The validation of the virtual sensors can be seen in Figure 9, the presence of the delay is demonstrated in Figure 10, and Figure 11 shows a single day in detail with the three virtual sensors. In some instances, the system dynamics are not well captured by the virtual sensors, as happens in Figure 10. This is caused by the difficulty of calculating the transpiration with the microlysimeter. Furthermore, as Figure 9 shows, the transpiration behaves similarly to the sunlight: rising in the morning, decreasing in the afternoon, and remaining almost constant at night. The grey-box virtual sensor shows good dynamics. A small overestimation exists at the start and the end of the crop seasons but does not reach a significant level, so a first analysis suggests that a correction would not be strictly necessary, although it is recommended. Furthermore, Figure 10 shows the delay phenomenon, in which the virtual sensor dynamics are advanced with respect to the measured ones, as other authors have asserted ([6,29]).
One characteristic of the calibration of the linear and nonlinear black-box virtual sensors is that a first validation is already done during the identification, since a trial-and-error process must be carried out to choose the virtual sensor. Figures 9, 10 and 11 show that both black-box sensors follow the dynamics of the tomato crop transpiration, with some fitting problems in the nocturnal periods. The problem of the night-time fit is of limited practical interest, for the reasons given below. The night transpiration estimated by the virtual sensor takes values higher than those of the pseudo-physical sensor. This overestimation is noticeable, but it is not very important for the final result, because the night transpiration is very low in comparison with the daylight values. The transpiration is so low, in fact, that it is difficult to measure with the microlysimeter, given its 1 g precision. Moreover, this dynamic virtual sensor does not show the anticipation phenomenon that is evident in the static virtual sensor: the plant's own resistance makes the transpiration lag the processes that produce it, which appears graphically as a delayed response in the estimated transpiration. For all the seasons included in this work (from autumn 2004 to autumn 2008-2009), the goodness of fit of the virtual sensors was obtained (the calibration seasons are marked with * in the tables). This goodness of fit for a data series is calculated through the minimum mean square error (MMSE). In all cases, the dynamic virtual sensor obtained good results (MMSE < 6%), as can be seen in the three validation errors (see Table 4 and Figure 12). Tables 5, 6 and 7 give a full review of the errors of each model.

Conclusions

The research was conducted over three years, for a total of eight cultivation cycles during which the different climatic and physiological variables of a tomato crop were recorded. All data were taken at one-minute intervals, giving over one million input data points for each variable. The main objective, as noted at the beginning of this paper, is to implement a virtual transpiration sensor for tomato crops for the design of irrigation controllers. For this purpose, a measurement system was available that estimates transpiration from the weight difference registered by a scale between one instant and the next, measuring the water loss with good accuracy. The next task was to find a virtual sensor that could replace this system based on the weight difference caused by the crop's water loss. The virtual sensor relies on sensors that are typically installed in greenhouses: temperature, humidity, and solar radiation. The aim is to reduce total installation costs and to avoid the constant maintenance that the scales require. After a preliminary assessment of the candidate virtual sensors, the Nonlinear ARX showed the best fit and was in the end the best choice for the proposed irrigation virtual sensor. It gave good results in both calibration and validation. An average error of 5% over all the cycles shows that the choice of this virtual sensor was successful. However, it also presents some problems, such as the overestimation that occurs at night. On the other hand, the linear and nonlinear black-box virtual sensors do not show the advance phenomenon, in which the dynamics of the estimated values are shifted ahead of the real ones. System identification techniques were chosen to obtain a dynamic virtual sensor.
The Matlab software package was a good option for working with the identification techniques. A large number of different nonlinear dynamic virtual sensor structures were tested. The proposed dynamic virtual sensors require only two inputs, global radiation and vapor pressure deficit, thus eliminating the need to include X_LAI. These virtual sensors open the possibility of using modern control algorithms that cannot be used with the grey-box sensor. In summary:
• The grey-box virtual sensor has a good fit as its main advantage, but some disadvantages, such as the overestimation at certain moments of the year, a higher final error, and the advance phenomenon.
• The black-box virtual sensors obtain better results and also eliminate the grey-box problems, namely the advance phenomenon and the overestimation; an overestimation only appears during nocturnal periods.
• The nonlinear black-box sensor gave the best results.
This paper has dealt with transpiration from an industrial point of view, as a process with inputs and outputs. The crop itself, and some aspects of the climate inside the greenhouse, are considered disturbances affecting the dynamics. This shifts the focus away from the classical, strictly agronomic approach, which studies the exchanges that occur in the greenhouse with static virtual sensors based on fundamental principles, without including the dynamic effects of the system.
The smallest singular value of certain Toeplitz-related parametric triangular matrices Let L be the infinite lower triangular Toeplitz matrix with first column (µ, a 1, a 2, ..., ap , a 1, ..., ap , ...) T and let D be the infinite diagonal matrix whose entries are 1, 2, 3, . . . Let A := L + D be the sum of these two matrices. Bünger and Rump have shown that if p = 2 and certain linear inequalities between the parameters µ, a 1, a 2, are satisfied, then the singular values of any finite left upper square submatrix of A can be bounded from below by an expression depending only on those parameters, but not on the matrix size. By extending parts of their reasoning, we show that a similar behaviour should be expected for arbitrary p and a much larger range of values for µ, a 1, ..., ap . It depends on the asymptotics in µ of the l 2-norm of certain sequences defined by linear recurrences, in which these parameters enter. We also consider the relevance of the results in a numerical analysis setting and moreover a few selected numerical experiments are presented in order to show that our bounds are accurate in practical computations. Introduction and preliminaries Given p real numbers a , a , ..., ap , µ, denote for an integer k by k = k mod p the residue modulo p of k. De ne the in nite array A = (a ij ) i,j= , ,... by The left upper n × n subarrays of A de ne matrices which we denote by A(µ, a , ..., ap , n). We will suppress some of the parameters if context allows. The lower triangular nature of such matrices follows from the fact that i < j implies a ij = ; that they are mostly Toeplitz follows since if i − j ≠ , the entry a ij depends only on i − j. The exception is the main diagonal which is not constant but forms an arithmetic progression. The columns and rows exhibit an almost periodic behaviour due to the periodicity of the map l → l. Example. When n = and p = , we get the matrix A =A(µ, a , a , a , a , ) In the paper [1] by Bünger and Rump it is shown via elegant reasoning but apparently tailored for the case p = that if µ > , ≤ a ≤ µ + , and ≤ a ≤ a < a + , then for all n the smallest singular value of A(µ, a , a , n) is bounded from below by µ+ +θ(µ,a ,a ) where θ(µ, a , a ) is an expression in whose de niton n does not enter and which, hence, is independent of n. With this they solved a problem posed by Yoshitaka Watanabe from Kyushu University at the Open Problems session of the workshop Numerical Veri cation (NIVEA) 2019 in Hokkaido. The present paper treats in Section 2 the question of a dimension independent lower bound of the singular values for arbitrary p. We transfer the problem more consciously into a question belonging to the asymptotic theory of di erence equations. By using recent results of this theory, we hope to be able to show in the near future that the strong hypotheses of our main theorem can in many cases be provably justi ed; currently we can o er experimental reasons for such a justi cation. The evidences gathered are somehow surprising since in the pure Toeplitz setting the minimal singular value can present a remarkable dependency on the matrix size n (see [2][3][4]). Since the structures studied in this note are encountered in queuing theory, Markov chains, spectral factorizations and the solution of Toeplitz related linear systems, our results can be useful in those areas. 
Recall that the spectral conditioning is crucial for understanding the achievable precision in the computation of the solution of related linear systems and hence out results are relevant in a numerical analysis context. Therefore in Section 3 numerical experiments are conducted and critically discussed, while Section 4 contains Mathematica © code that allows to experiment conveniently with the sequences de ned in the main result (Theorem 1), and Section 5 ends with conclusions and open problems. The Main Result The section is devoted to the main result regarding lower bounds for the minimal singular value of matrices A(µ, a , ..., ap , n). Concerning notations, σ (X) ≥ σ (X) ≥ · · · ≥ σn(X) ≥ denote the singular values of a square matrix X of size n, · F , · , · , and · ∞ denote the Frobenius norm, the spectral norm, the l induced matrix norm, and the l ∞ induced matrix norm, respectively, where X = σ (X) is the spectral (or l induced) norm, for X being a square matrix of size n. When it is clear from the context, for a given matrix X and for a proper index j, instead of σ j (X) we will use σ j . Our main result concerns the use of the Frobenius norm (see Lemma 1 and Theorem 1), but the other two norms, very popular in a Numerical Analysis setting, are also of interest for the problem under consideration. Before stating and proving the main result, we need a preparatory lemma. Lemma 1. Given an invertible matrix X of size n we have σn(X) ≥ X − − F . Proof. It is known that the squared Frobenius norm of a matrix is the sum of the squares of its singular values, see page 421, about 3 centimeters from rst text row in [5] ( [5, p421c3]). Thus, using the notations above, we have , from where the claim follows. Proof. We give a dimension-independent upper bound for the Frobenius Norm of A − and this will imply via the previous lemma a lower bound for the smallest singular value. De ne the n × n matrix R = (r ij ) by the formula See below for an example of R. We haveã ij = n ν= r iν a νj . Since for i ≤ p + , r iν = δ iν we see for these i, thatã ij = a ij . So the rst p + rows ofà coincide with the rst p + rows of A. Now assume i ≥ p + . Then, since r i = , a ij = n ν= r iν a νj = n ν= r iν a νj = n ν= (δ iν − δ i−p,ν )a νj = a ij − a i−p,j for j = , , ..., n. Thus with obvious notation, we get for Consider this case and think of running with j from to n. We have i − p ≥ . Thus for j = , .. Let e = [ , , , ..., ] T . Note Re = e so that also R − e = e . Now consider solving the systemÃc = RAc = e . Then c = A − R − e = A − e . We see that c is the rst column of A − . At the same time the i-th of the equations codi ed byÃc = e is the dot product row i (Ã) · c = δ i . Hence we nd from the respective rows the following equations. where j mean j 0s stacked one over another and the c(i + µ) :n−i+ should be read as columns. Now the hypothesis on c = c( + µ) is that c( + µ) ≤ θ(µ) ( +µ) for any µ. Therefore, Now for the last sum we get an estimate by telescoping: Hence A − F ≤ θ(µ) µ , and by Lemma 1 therefore σn(A) ≥ µ θ(µ) . Notes: o. An example of a × matrix R associated to p = is the following. b. Bünger and Rump give in their paper [1, eq. 24] an inequality A − F ≤ +θ(µ) +µ . Due to an oversight there is a small mistake in that paper. The phrase on pdf-page 5 'Then has the same pattern as A withμ = µ + j instead of µ ' should end with '. . . instead of µ + .' Correction of the ensuing reasoning leads to the inequality A − F ≤ +θ(µ) µ instead of the one given. 
Also we may observe that the diagonal of A^{-1} will be ((µ + 1)^{-1}, (µ + 2)^{-1}, ..., (µ + n)^{-1}). We could divide the computation of the l2-norm of A^{-1} into the sum of the l2-norm of the diagonal of A^{-1} and the l2-norm of the remaining entries of A^{-1}. This is in principle what leads to [1, eq. 22]. Roughly then, the 1 + θ(µ) in [1] corresponds to our θ(µ). c. It is not senseless to speak about the inverse of the infinite array A we have introduced at the beginning of the paper. If one formally applies the inversion algorithm for a triangular matrix to such an infinite array, one gets an infinite array that can sensibly be multiplied with A; the sums to be computed in the multiplication would all be finite and give an infinite identity matrix. The upper bound for ||A^{-1}||_F we computed is actually an upper bound for the Frobenius norm of such an inverse array. It is of course reasonable to ask to what extent one should believe in the validity of the strong hypotheses of the theorem. The few experiments we did indicate that the hypotheses hold under a wide variety of selections of {a_1, ..., a_p} ⊆ R_{≥0}, much larger than what was established for certain in [1]. Below is a list of results. The third line, for example, should be read as follows: the sequence µ → (1 + µ)·||c(µ, ·, ·, ·)_{:N}|| has the values 1.40858 and 1.00256 for the cases µ = 0 and µ = 100, respectively, and is decreasing as µ runs from 0 to 100. Here, for N reasonably large, ||c(µ, a_1, ..., a_p)_{:N}|| is considered as an approximation for ||c(µ, a_1, ..., a_p)||. It is thus natural to conjecture that indeed ||c(µ, a_1, ..., a_p)|| = θ′(µ)/(1 + µ) for some strictly decreasing function θ′(µ). This list can be enlarged in short order by applying the Mathematica program in Section 4. For integers without decimal points, or for rationals a_i given as fractions, the program works in exact arithmetic. The output is in floating point because a command is used at the end to translate it into this form.
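The experiments summarised above can also be reproduced without the Mathematica program. Using only the description of A(µ, a_1, ..., a_p, n) given in the abstract (a lower triangular Toeplitz part with first column (µ, a_1, ..., a_p, a_1, ..., a_p, ...) plus the diagonal matrix diag(1, 2, ..., n)), the Python sketch below builds the matrix, checks the bound σ_n(A) ≥ 1/||A^{-1}||_F from Lemma 1, and tabulates the quantity (1 + µ)·||c||_2 for the first column c of A^{-1}. The parameter choices and the truncation size N are illustrative only.

    import numpy as np

    def build_A(mu, a, n):
        """A(mu, a_1..a_p, n): lower triangular Toeplitz part plus diag(1, ..., n)."""
        p = len(a)
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = mu + (i + 1)            # Toeplitz diagonal mu plus the diagonal D
            for j in range(i):
                A[i, j] = a[(i - j - 1) % p]  # periodic subdiagonals a_1, ..., a_p
        return A

    mu, a = 2.0, [2.0, 1.0, 3.0]

    # 1) The smallest singular value stays above 1 / ||A^{-1}||_F for every tested size n
    for n in (20, 80, 320):
        A = build_A(mu, a, n)
        sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
        frob_bound = 1.0 / np.linalg.norm(np.linalg.inv(A), "fro")
        print(n, round(sigma_min, 4), round(frob_bound, 4))

    # 2) (1 + mu) * ||first column of A^{-1}||_2, approximated with a large truncation N
    N = 400
    for mu_test in (0.0, 1.0, 5.0, 20.0, 100.0):
        A = build_A(mu_test, a, N)
        c = np.linalg.solve(A, np.eye(N)[:, 0])   # first column of the inverse
        print(mu_test, round((1.0 + mu_test) * np.linalg.norm(c), 5))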
It is finally worth observing that the numerical lower bound for σ_n related to the l_1 (or l_∞) norm is tighter than that related to the Frobenius norm, at least for moderate sizes (see again Figures 1-8). A theoretical study of this matter will be the subject of future investigations.

Figure 6: parameter values a_1, ..., a_6. Figure 7: parameter values a_1, ..., a_7. Figure 8: parameter values a_1, ..., a_8.

Conclusions

We computed theoretical lower bounds for the smallest singular value of certain Toeplitz-related parametric triangular matrices with linearly increasing diagonal entries associated with a nonnegative parameter µ. More specifically, the smallest singular value of these matrices is bounded from below by a constant which depends on special entries and on the parameter µ of our matrices and is independent of the dimension n. The proven result is somewhat surprising since, in the pure Toeplitz setting, the minimal singular value can show a remarkable dependency on the matrix size n. The use of different matrix norms has been considered, and some open problems remain concerning the most useful norm in the context of the considered problem. A few selected numerical experiments have been presented and critically discussed in order to give evidence that our bounds are accurate in practical computations, even if the numerics clearly indicate that the bounds are not sharp and hence there is still room for theoretical improvements.
The equilibrium response to idealized thermal forcings in a comprehensive GCM: implications for recent tropical expansion

Several recent studies have shown the width of the tropical belt has increased over the last several decades. The mechanisms driving tropical expansion are not well known and the recent expansion is underpredicted by state-of-the-art GCMs. We use the CAM3 GCM to investigate how tropical width responds to idealized atmospheric heat sources, focusing on zonal displacement of the tropospheric jets. The heat sources include global and zonally restricted lower-tropospheric warmings and stratospheric coolings, which coarsely represent possible impacts of ozone or aerosol changes. Similar to prior studies with simplified GCMs, we find that stratospheric cooling – particularly at high latitudes – shifts jets poleward and excites Northern and Southern Annular Mode (NAM/SAM)-type responses. We also find, however, that modest heating of the midlatitude boundary layer drives a similar response; heating at high latitudes provokes a weaker, equatorward shift and tropical heating produces no shift. Over 70 % of the variance in annual mean jet displacements across 27 experiments is accounted for by a newly proposed "Expansion Index", which compares mid-latitude tropospheric warming to that at other latitudes. We find that previously proposed factors, including tropopause height and tropospheric stability, do not fully explain the results. Results suggest recently observed tropical expansion could have been driven not only by stratospheric cooling, but also by mid-latitude heating sources due, for example, to ozone or aerosol changes.

Introduction

Recent observational analyses show the tropics have widened over the last several decades. Estimates range from 2-5° latitude since 1979 (Seidel et al., 2008) and are based on several metrics, including a poleward shift of the Hadley cell (Hu and Fu, 2007), increased frequency of high tropopause days in the subtropics (Seidel and Randel, 2007) and increased width of the region with tropical column ozone levels (Hudson et al., 2006). Studies have also inferred a poleward shift in the tropospheric jets, based on enhanced warming in the mid-latitude troposphere (Fu et al., 2006) and cooling in the mid-latitude stratosphere (Fu and Lin, 2011). Zhou et al. (2011) showed a poleward shift of cloud boundaries associated with the Hadley cell, as well as a poleward shift of the subtropical dry zones. Clearly, tropical expansion has important implications for both global and regional climate. Climate models also show current, and future, global warming is associated with tropical expansion. Using the Intergovernmental Panel on Climate Change (IPCC) Coupled Model Intercomparison Project, Phase 3 (CMIP3) simulations, Yin (2005) found a poleward shift in the mid-latitude storm tracks, which was accompanied by poleward shifts in surface wind stress and precipitation. Similarly, Lorenz and DeWeaver (2007) found a poleward shift (and strengthening) of the tropospheric jets in response to global warming, which was accompanied by poleward and upward shifts in transient kinetic energy and momentum flux. Lu et al. (2007) showed CMIP3 models yield poleward displacement (and weakening) of the Hadley cell and subtropical dry zones, which is associated with an increase in extratropical tropopause height and subtropical static stability.
Models also show that tropical expansion projects onto the leading pattern of variability (Kushner et al., 2001), with about half of CMIP3 model-simulated Hadley cell and subtropical dry zone expansion during the next century explained by positive trends in the Northern and Southern Annular Mode (NAM/SAM) (Previdi and Liepert, 2007). Although both GCMs and observations show tropical widening over the last 2-3 decades, models underestimate the magnitude of observed trends. For example, Johanson and Fu (2009) show the largest CMIP3 tropical widening trends are ∼1/5 of the observed widening. This significant underestimation exists across five scenarios, as well as three separate definitions of Hadley cell width, including dynamical and hydrological definitions. Lu et al. (2009) used the GFDL atmospheric model AM2.1 to show observed changes in sea surface temperatures (SSTs) and sea-ice cannot explain increased tropical width, as defined by the tropopause probability density function. A similar simulation, however, that also included the direct radiative effects of anthropogenic and natural sources better reproduced the observed widening. Polvani et al. (2011) showed that broadening of the Hadley cell and poleward expansion of the subtropical dry zone over the latter half of the 20th century in the SH−particularly during December-January-February -have been primarily caused by polar stratospheric ozone depletion. Idealized climate models (e.g., no moist processes, no topography) have been used to better understand the mechanisms involved with tropical expansion. Polvani and Kushner (2002) and Kushner and Polvani (2004) found that cooling of the polar winter stratosphere, which is associated with reduced stratospheric wave drag, results in a poleward tropospheric jet shift and strengthening of surface wind. Haigh et al. (2005) showed that uniform heating of the stratosphere (e.g., via increased solar or volcanic activity), or heating restricted to high-latitudes, forces the jets equatorward; heating in low latitudes forces them poleward. Frierson et al. (2007) used both simple and comprehensive GCMs to show tropical expansion occurs with increased global mean temperature, and secondly, with an increased pole-to-equator temperature gradient. They argued that the response was due to increased static stability, which reduces baroclinic growth rates and pushes the latitude of baroclinic instability onset poleward, in agreement with the Hadley cell width scaling of Held (2000). This was further supported by Lu et al. (2008), who showed that poleward expansion of the Hadley cell and shift of the eddy-driven jet in CMIP3 global warming experiments are related to a reduction in baroclinicity, which is primarily caused by an increase in subtropical static stability. This relationship was most significant during austral summer (December-January-February), particularly in the Southern Hemisphere (SH). Lorenz and DeWeaver (2007) showed that increasing the tropopause height (as expected in a warmer troposphere) in a simple dry GCM resulted in poleward jet displacement. This response was largest when the tropopause on the poleward flank of the jet was raised; however, the opposite response occurred if the tropopause was raised on the equatorward side of the jet. This tropopause-jet relationship is consistent with Williams (2006). Recently, Butler et al. (2010) used a simplified GCM to try to attribute storm track shifts to temperature changes in particular regions. 
They found that warming in the tropical troposphere, or cooling in the high-latitude stratosphere, each shifted the storm tracks poleward, whereas polar surface warming shifted them equatorward. Such results are qualitatively consistent with earlier studies (Chen and Held, 2007; Chen et al., 2008), arguing that the observed poleward shift in the surface westerlies has been due to increased Rossby wave phase speeds, which results in poleward displacement of the region of wave breaking in the subtropics. Kidston et al. (2011) argued that an increase in eddy length scale, a robust response to global warming, causes the poleward shift of the mid-latitude eddy-driven jet streams. Both Brayshaw et al. (2008) and Chen et al. (2010) used aquaplanet GCM simulations to show that high-latitude SST warming poleward of the climatological jet results in an equatorward jet displacement. For low-latitude warming that extends poleward of the climatological jet latitude, a poleward jet displacement occurred. Expanding upon these studies, we use a comprehensive GCM to gain a better understanding of how tropical width - particularly tropospheric (850-300 hPa) jet displacement - responds to different types of simple heating at realistic amplitudes. The thermal forcings examined include zonally uniform heat sources in the troposphere or heat sinks in the stratosphere. Our study differs from past studies in specifying heat sources that are representative of possible non-CO2 climatic forcings, rather than imposing characteristic temperature perturbations. Investigation of the effects of such heat sources on tropical width is of interest due to the significant 20th century increases in anthropogenic aerosols, including absorbing aerosols like black carbon (Bond et al., 2007) and reflecting aerosols like sulfate (Smith et al., 2011), as well as tropospheric ozone (Shindell et al., 2006) and ozone precursors (van Aardenne et al., 2001). Our objective is to clarify the sensitivity of tropical width to different types of heating, with the ultimate goals of gaining insight into the observed widening and better understanding of the responses seen in past GCM studies. Our results show the importance of perturbed tropospheric temperature gradients and a wave-modulated stratospheric pathway in driving zonal jet displacements. We also show that previously proposed tropical expansion mechanisms are unable to fully explain our results. We build upon these results in a subsequent paper, which will examine the responses to more realistic representations of non-CO2 forcings. This paper is organized as follows: in Sect. 2 we discuss the CAM GCM and our experimental design. In Sect. 3 we present the response to idealized stratospheric cooling and tropospheric heating, and compare these responses to a doubling of CO2. Section 4 discusses expansion scenarios, including the tropospheric and stratospheric pathways. Conclusions are presented in Sect. 5.

CAM description

The Community Atmosphere Model (CAM) version 3 (Collins et al., 2004) is the fifth generation of the National Center for Atmospheric Research (NCAR) atmospheric General Circulation Model (GCM) and is the atmospheric component of the Community Climate System Model (CCSM). CAM uses a Eulerian spectral transform dynamical core, where variables are represented in terms of coefficients of a truncated series of spherical harmonic functions.
The model time step is 20-min, and time integration is performed with a semi-implicit leapfrog scheme. The vertical coordinate is a hybrid coordinate, with 26 vertical levels. The model has a relatively poorly-resolved stratosphere, with ∼9 levels above 100 hPa and a top level at 2.9 hPa. The total parameterization package consists of four basic components: moist precipitation processes/convection, clouds and radiation, surface processes, and turbulent mixing. The land surface model is the Community Land Model (CLM) version 3 (Oleson et al., 2004), which combines realistic radiative, ecological and hydrologic processes. Experimental design CAM is run at T42 resolution (∼2.8 × 2.8 • ) with a slab ocean-thermodynamic sea ice model. All experiments are run for at least 70 yr, the last 30 of which are used in this analysis, during which the model has reached equilibrium (i.e., no significant trend in TOA net energy flux). Stratospheric cooling experiments (10PLO3; see Table 1) were performed by reducing the stratospheric ozone by 10 % globally, as well as individually for the tropics (±30 • ), mid-latitudes (30-60 • N/S) and high-latitudes (60-90 • N/S). The stratosphere is defined as the model levels above the tropopause, which is estimated by a thermal definition using the method of Reichler et al. (2003). We use CAM's default ozone boundary data set, which contains zonal monthly ozone volume mixing ratios, and reduce the ozone by 10 % at the appropriate latitudes and stratospheric pressures on a monthly basis. A 10 % ozone reduction is in rough agreement with the change in stratospheric ozone from 1979-2000 (Newchurch et al., 2003). The ozone perturbation is seasonally invariant, as are all perturbations in this study, and is not meant to represent the real seasonal cycle of ozone change. Our standard set of tropospheric heating experiments (LTHT) adds a 0.1 K day −1 (∼3.5 W m −2 ) heating source to the lower troposphere (surface to ∼700 hPa). Such a heating rate is comparable to recent satellite-based estimates of present-day anthropogenic aerosol solar absorption (Chung et al., 2005). We conduct a globally uniform heating experiment, as well as latitudinally restricted heating of the tropics, mid-and high-latitudes. Although heating is only applied to the lower troposphere, the globally uniform temperature response resembles that based on a doubling of CO 2 . This is due to destabilization of the lower atmosphere and increased convection, which vertically redistributes the heat throughout the depth of the troposphere. Similar experiments with midtropospheric and upper-tropospheric heating do not destabilize the lower atmosphere, and result in maximum tropospheric warming near the altitude of heat input. Table 2 lists the suite of tropospheric heating experiments. In all cases, the response is estimated as the difference between the experiment and a corresponding control, which lacks the added heat source. A standard global warming experiment is also performed, where the CO 2 volume mixing ratio is doubled from 3.55 × 10 −4 to 7.10 × 10 −4 . We also conduct an extreme global warming experiment, where the CO 2 volume mixing ratio is increased by a factor of eight. The resulting climate signals are named 2×CO2 and 8×CO2, respectively. We compare our CAM integrations to 12 2×CO2 CMIP3 equilibrium (slab ocean) experiments, as well as 10 1 % to 4×CO2 transient CMIP3 experiments. Table 3 lists the CMIP3 models used in this study. 
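To make the experimental design concrete, the two types of perturbation described above amount to simple array operations on the model's boundary data and heating tendency: ozone is scaled by 0.9 above the tropopause within a chosen latitude band, and a constant 0.1 K day^-1 tendency is added between the surface and about 700 hPa. The Python sketch below illustrates this on zonal-mean arrays; the level indexing, the tropopause input, and the band edges are schematic rather than CAM's actual data structures.

    import numpy as np

    def reduce_stratospheric_ozone(o3, pressure, lats, tropopause_p,
                                   lat_band=(-90.0, 90.0), fraction=0.10):
        """Scale zonal-mean ozone (lev x lat) by (1 - fraction) above the tropopause."""
        o3 = o3.copy()
        in_band = (lats >= lat_band[0]) & (lats <= lat_band[1])
        for j in np.where(in_band)[0]:
            strat = pressure < tropopause_p[j]        # levels above the tropopause
            o3[strat, j] *= (1.0 - fraction)
        return o3

    def lower_tropospheric_heating(pressure, lats, lat_band=(30.0, 60.0),
                                   rate_k_per_day=0.1, p_bottom=1000.0, p_top=700.0):
        """Constant heating tendency (K day-1) between the surface and ~700 hPa in a band."""
        heat = np.zeros((pressure.size, lats.size))
        in_layer = (pressure <= p_bottom) & (pressure >= p_top)
        in_band = (np.abs(lats) >= lat_band[0]) & (np.abs(lats) <= lat_band[1])
        heat[np.ix_(in_layer, in_band)] = rate_k_per_day
        return heat

    # Example grids (hPa levels, degrees latitude); flat 150 hPa tropopause is illustrative
    p = np.array([10., 50., 100., 300., 500., 700., 850., 1000.])
    lat = np.linspace(-87.5, 87.5, 64)
    trop = np.full(lat.size, 150.0)
    o3 = np.full((p.size, lat.size), 5e-6)
    print(reduce_stratospheric_ozone(o3, p, lat, trop)[0, 0],
          lower_tropospheric_heating(p, lat).max())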
For the 1 % to 4×CO2 experiments, we compare the 25 yr prior to CO 2 quadrupling (years 115-139) to the corresponding control. Finally, we evaluate the robustness of some of our CAM results -specifically the response to lower tropospheric heating -using an alternate GCM, the GFDL AM2.1 (Anderson et al., 2004). Because the GFDL model does not include a slab ocean model, these experiments are run with climatological SSTs. GFDL experiments are integrated for 40 yr, the last 30 of which are used to estimate the climate response. Tropospheric jet Several jet-based measures of tropical width were explored. This includes the latitude of the main jet, which we locate by finding the maximum of the zonally and monthly averaged zonal wind (U ) in either hemisphere (NH or SH). The poleward jet displacements are then estimated by taking the difference of the mean jet location (experiment minus control) in either hemisphere. We computed this measure on each pressure level and averaged the 850-300 hPa displacements to obtain a tropospheric jet displacement. Because our jet definition is based on the entire troposphere, it primarily represents the subtropical jet, and secondarily the mid-latitude eddy-driven jet. Displacements of the tropospheric and eddy-driven jet, however, are closely related; Signal Description 10PLO3 Global 10 % reduction in stratospheric ozone 10PLO3 TR As 10PLO3, but ozone reduced over tropics (±30 • ) only 10PLO3 ML As 10PLO3, but ozone reduced over mid-latitudes (30-60 • N/S) 10PLO3 HL As 10PLO3, but ozone reduced over high-latitudes (60-90 • N/S) Signal Description LTHT Global lower-tropospheric (surface to ∼700 hPa) heating of 0.1 K day −1 LTHT TR As LTHT, but heating of tropics (±30 • ) only LTHT ML As LTHT, but heating of mid-latitudes (30-60 • N/S) LTHT HL As LTHT, but heating of high-latitudes (60-90 • N/S) LTHT TRML As LTHT, but heating of tropics and mid-latitudes (±60 • ) LTHT MLHL As LTHT, but heating of mid-and high-latitudes (±30-90 • ) LTHT2x As LTHT, but double the heating rate (0.2 K day −1 ) LTHT4x As LTHT, but quadruple the heating rate (0.4 K day −1 ) MTHT Heating the mid-troposphere (∼700-400 hPa) UTHT ML Mid-latitude heating of the upper troposphere (4 levels below tropopause) LTHT 10PLO3 Global lower-tropospheric heating of 0.1 K day −1 and 10 % reduction in stratospheric ozone Table 3. Definition of the CMIP3 2×CO2 equilibrium (slab ocean) and the 1 % to 4×CO2 transient experiments used in this study. A "Y" ("N") indicates this model was (was not) used for the given experiment. the correlation between 2×CO2 CMIP3 jet displacements using the annual mean 850-300 hPa U maximum and the near-surface (10-m) U maximum -which others have used as a measure of the eddy-driven jet (e.g., Kidston and Gerber, 2010) -is 0.83 in the NH and 0.90 in the SH. The correlation is weakest during JJA in the SH (r = 0.57), which is consistent with a winter-time decoupling of the subtropical and eddy jets, resulting in a double jet structure (e.g., Gallego et al., 2005). We also investigated an additional method for locating the jet, where we located the "sides" of the jet and then found their midpoint; the sides were based on a specified percentile value of zonal wind. Although both methods yielded similar displacements, testing indicated that the percentile method yielded somewhat more stable results; thus only the results from the percentile method, using the 75th percentile (p75), are shown. 
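A minimal Python/NumPy sketch of the two jet-location methods described above (the zonal-wind maximum and the percentile midpoint) is given below. The exact handling of hemispheres, interpolation, and ties in the paper may differ, so this is an illustration rather than the authors' diagnostic code.

```python
import numpy as np

def jet_lat_max(u, lat):
    """Latitude of the maximum zonal-mean zonal wind (one hemisphere at a time)."""
    return lat[np.argmax(u)]

def jet_lat_percentile(u, lat, pct=75.0):
    """Midpoint of the jet 'sides', taken here as the outermost latitudes where U
    reaches the given percentile of U within the hemisphere (one reading of the
    p75 method described above; the paper's exact definition may differ)."""
    thresh = np.percentile(u, pct)
    above = np.where(u >= thresh)[0]
    return 0.5 * (lat[above[0]] + lat[above[-1]])

def jet_displacement_850_300(u_exp, u_ctl, lat, p_hpa, method=jet_lat_percentile):
    """Mean 850-300 hPa jet displacement (experiment minus control), in degrees.

    u_exp, u_ctl: zonal-mean zonal wind, shape (npressure, nlat), one hemisphere.
    Positive values are poleward in the NH; flip the sign for the SH.
    """
    levels = np.where((p_hpa <= 850.0) & (p_hpa >= 300.0))[0]
    shifts = [method(u_exp[k], lat) - method(u_ctl[k], lat) for k in levels]
    return float(np.mean(shifts))
```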
We do note, however, that the percentile method yields smaller displacements than the maximum method, and as the percentile is decreased (e.g., from 95 to 70), consistently smaller jet displacements are obtained. This is illustrated in Fig. 1, which compares tropospheric jet displacements in 12 CMIP3 2×CO2 equilibrium experiments using the maximum method, and the percentile method (with p75). A correlation of 0.95 shows both methods yields similar displacements; however, displacements tend to be larger with the jet maximum approach. The ensemble annual mean jet displacement using the maximum method is 0.62 • in the NH and 0.96 • in the SH; corresponding values using p75 are 0.46 • and 0.73 • . This result shows the jet displacement is non-uniform. Figure 1 also shows the CMIP3 4×CO2 ensemble annual zonal mean tropospheric jet response, and the corresponding control. The response is not a uniform jet shift; there is some distortion of its shape, resulting in a poleward skew, which is larger for the faster winds. This is particularly evident in the SH, and helps to explain the larger poleward displacements with the jet maximum method. Similar, but weaker results exist for the 2×CO2 equilibrium experiments (not shown). Other measures Additional measures of tropical displacement (Johanson and Fu, 2009) include (1) the latitude of the subtropical Mean Meridional Circulation (MMC) minima, defined as the latitudes where the MMC at 500 hPa becomes zero poleward of the subtropical maxima; and (2) the latitudes where precipitation minus evaporation (P − E) becomes zero on the poleward side of the subtropical minima (a measure of subtropical dry zone expansion). All displacements are estimated by first smoothing the zonal monthly mean of the appropriate model field(s) and interpolating to 0.5 • resolution using cubic splines. Smoothing was performed by taking a running mean over ∼10 degrees of latitude. Nearly identical results are obtained without interpolating. In addition to zonal displacements, we also quantify the changes in the strength and altitude of the jet. The altitude of the jet was quantified by interpolating the zonal wind to 10 hPa vertical resolution, and locating the pressure of maximum monthly zonal wind. This procedure is only done poleward of ∼20 • , since the jet is not well defined in the tropics. The strength of the jet was quantified by locating the maximum zonal wind in each hemisphere for each pressure level and month. A similar procedure was used to quantify the strength of the Hadley circulation, using the maximum magnitude (i.e., absolute value) of the tropical MMC at 500 hPa. The change in strength or jet altitude is then estimated as the difference between experiment and control. Throughout this manuscript, statistical significance is estimated with a standard t-test, using the pooled variance. The influence of serial correlation is accounted for by using the effective sample size, n(1 − ρ 1 )(1 + ρ 1 ) −1 , where n is the number of years and ρ 1 is the lag-1 autocorrelation coefficient (Wilks, 1995). Figure 2 shows the annual and zonal mean temperature and wind response for the stratospheric cooling (10PLO3) experiments. Also included is the meridional temperature gradient (T y ) response, with Southern Hemisphere (SH) T y multiplied by −1 (and in all subsequent figures) so that negative T y always represents colder air poleward. As expected, temperatures are generally colder, by ∼1 K, because of reduced solar absorption where the ozone reduction was imposed. 
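The significance testing described above (a pooled-variance t-test with sample sizes adjusted for serial correlation) can be sketched as follows; SciPy is assumed to be available, and the function names and the handling of the pooled degrees of freedom are illustrative choices.

```python
import numpy as np
from scipy import stats

def effective_sample_size(x):
    """n (1 - rho1) / (1 + rho1), with rho1 the lag-1 autocorrelation (Wilks, 1995)."""
    x = np.asarray(x, float)
    xa = x - x.mean()
    rho1 = np.sum(xa[:-1] * xa[1:]) / np.sum(xa * xa)
    n = len(x)
    return n * (1.0 - rho1) / (1.0 + rho1)

def ttest_serial(x_exp, x_ctl):
    """Two-sample t-test with pooled variance, using effective sample sizes to
    account for serial correlation. Returns (t statistic, two-sided p-value)."""
    n1, n2 = effective_sample_size(x_exp), effective_sample_size(x_ctl)
    v1, v2 = np.var(x_exp, ddof=1), np.var(x_ctl, ddof=1)
    dof = n1 + n2 - 2.0
    sp2 = ((n1 - 1.0) * v1 + (n2 - 1.0) * v2) / dof
    t = (np.mean(x_exp) - np.mean(x_ctl)) / np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    return t, 2.0 * stats.t.sf(abs(t), dof)
```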
Several non-local responses also occur, including tropospheric warming for the all-, high-, and mid-latitude experiments. These three experiments also yield an increase in zonal wind (U ) near 60 • , whose magnitude decays downward through the troposphere. This U increase occurs near the poleward flank of the tropospheric jet, while an opposite signed anomaly appears near the equatorward flank, indicating a poleward jet displacement. Note that reducing stratospheric ozone in the tropics (10PLO3 TR ) yields the opposite response; however, the magnitude of the tropospheric wind anomaly is weak and not significant. Table 4 quantifies the annual mean poleward displacement of the tropospheric (850-300 hPa) jets. As suggested by Table 5 shows the additional metrics of tropical displacement are generally consistent with the jet response. For 10PLO3, 10PLO3 HL and 10PLO3 ML , both P − E and MMC yield annual mean poleward displacement, although smaller than that based on the tropospheric jet. Figure 3 shows the T , U and T y response for the lower tropospheric heating experiments (LTHT). Similar to CO 2 forcing, globally uniform near-surface heating causes a local warming maximum in the tropical upper troposphere due to moist convection, and high-latitude near-surface warming amplification due to snow and ice albedo feedbacks, as well as the higher static stability in polar regions. The zonal wind response to LTHT implies a poleward displacement of the NH tropospheric jet, but not the SH one. These shifts, however, are not statistically significant (see Table 4) in the annual mean or in any season. We note that in coupled ocean-atmosphere models (and observations), the SH warming will be much less than that here due to uptake of heat by the Southern Ocean, which will affect how much the jet shifts. Sensitivity to the latitudinal distribution of near-surface heating Heating the individual latitude bands separately yields maximum warming at the heated latitudes, though with some spillover to most of the troposphere in the cases of LTHT HL and LTHT ML . Generally, however, the latitudes that are heated experience the largest temperature response, which is consistent with a down-gradient eddy heat flux response (i.e., oriented away from the latitude of maximum heating; not shown). There are also some dynamically induced remote cooling responses, including significant stratospheric cooling for LTHT and weaker tropospheric high-latitude cooling for LTHT TR . A much stronger impact on tropical width occurs with heating restricted to midlatitudes than for the globally uniform case. LTHT ML shows both reduced U on the equatorward flank of the jet and increased U on the poleward jet flank, yielding significant poleward jet displacement of 0.66 • in the NH and 1.02 • in the SH. Significant displacements also occurred in experiments where either low or high latitudes were heated at the same time as mid-latitudes (LTHT TRML and LTHT MLHL , respectively), supporting the robustness of this result. For example, Table 5 shows simultaneous heating of the low-and mid-latitudes yields a poleward jet displacement of 0.41 • in the NH and 0.20 • in the SH. LTHT TRML jet displacements become significant, and Heating at high-latitudes (LTHT HL ) produced an opposite result, reducing U on the poleward jet flank to produce an equatorward jet displacement of −0.42 • over the two hemispheres, about a quarter of the poleward shift with midlatitude heating. 
Tropical heating (LTHT TR) increased the peak U throughout the atmosphere, but without significantly shifting the jet position except upward. While the above conclusions are based on jet shifts, similar responses are found among other tropical displacement measures (Table 5), especially for the mid-latitude heating which produced the strongest response. Figure 4 shows the poleward displacement of the maximum meridional tropospheric temperature gradient, and the jet, for LTHT experiments as a function of pressure. LTHT HL yields equatorward displacement of the maximum T y whereas LTHT ML features poleward displacement. This is consistent with the corresponding LTHT HL and LTHT ML tropospheric jet displacements: both quantities move poleward or equatorward together, in general agreement with thermal wind balance. For LTHT ML, heating of the mid-latitudes weakens the temperature gradient on the equatorward flank of the maximum T y, but increases it on the poleward flank, as shown in Fig. 3. The tropospheric jet then responds by shifting poleward. The opposite occurs for LTHT HL. Small displacements of the maximum T y generally occur for LTHT TR, in agreement with the small jet displacement. Over all experiments included in Fig. 4, the correlation between displacements of the maximum T y and tropospheric jet is 0.81 in the NH and 0.92 in the SH. Although not shown, displacements of the maximum T y are also similar to those of the tropospheric jet for the stratospheric cooling experiments. Butler et al. (2010, 2011) also examined the impact of tropical heating, and found a shift similar to that obtained here with heating from 0-60° N/S (LTHT TRML), in contrast to our null result with tropical-only heating. While this result seems contradictory, the tropical heating employed by Butler et al. (2010, 2011) differed from ours by projecting significantly onto the mid-latitude isentropes, weakening baroclinicity in the subtropics while strengthening it in the mid-latitudes. Poleward jet displacement is also absent in LTHT4x TR (Table 4), which features a heating rate more comparable to Butler et al. (2010, 2011). These results taken together are consistent with a particular sensitivity of the jet to heating in highly baroclinic, mid-latitude regions, with relatively little sensitivity in the tropics. Figure 5 further shows that geostrophic adjustment to the altered meridional temperature gradient explains most of the annual mean tropospheric wind response. Zonal wind shear for each pressure level is estimated from the corresponding meridional temperature gradient, according to thermal wind balance. To estimate the zonal wind, we use the 900 hPa zonal wind as a boundary condition. Taking the difference between the experiment and control yields the corresponding response, as shown in the center panel of Fig. 5. The actual zonal wind response closely corresponds to that estimated from thermal wind balance. The difference between the two (estimate minus actual) shows no significant differences at most latitudes, except near the equator where meridional temperature gradients are small and geostrophy becomes a poor approximation. Thus, most of the tropospheric jet shift in our LTHT experiments is consistent with a geostrophic adjustment to the altered meridional temperature gradient.
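A rough sketch of the thermal-wind diagnostic used for Fig. 5, integrating du/dp = (R_d / (f p)) dT/dy upward from a 900 hPa boundary condition, is shown below. The discretization and constants are assumptions, and, as noted in the text, the estimate is unreliable near the equator where f vanishes.

```python
import numpy as np

R_D = 287.05       # dry air gas constant [J kg-1 K-1]
OMEGA = 7.292e-5   # Earth's rotation rate [s-1]
A_EARTH = 6.371e6  # Earth radius [m]

def thermal_wind_u(T, u900, lat, p_hpa):
    """Estimate zonal-mean zonal wind from T(lat, p) via thermal wind balance.

    T has shape (nlat, npressure) with p_hpa descending from 900 hPa; u900 is the
    boundary-condition wind at 900 hPa. A sketch of the diagnostic described in
    the text, not the authors' code.
    """
    phi = np.deg2rad(lat)
    f = 2.0 * OMEGA * np.sin(phi)                 # Coriolis parameter [s-1]
    p = p_hpa * 100.0                             # pressure [Pa]
    dTdy = np.gradient(T, phi * A_EARTH, axis=0)  # meridional T gradient [K m-1]
    u = np.empty_like(T)
    u[:, 0] = u900
    for k in range(1, len(p)):
        dudp = R_D * dTdy[:, k - 1] / (f * p[k - 1])   # thermal wind shear [m s-1 Pa-1]
        u[:, k] = u[:, k - 1] + dudp * (p[k] - p[k - 1])
    return u                                      # meaningless near the equator (f -> 0)
```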
Although the eddy-driven jet is not the focus of this study, in our experiments displacements of the surface wind (a measure of the eddy-driven jet) are similar to those of the tropospheric jet. For the suite of lower tropospheric heating and stratospheric cooling experiments, and 2×CO2, the corresponding correlation is 0.92 in the NH and 0.91 in the SH. This is despite the fact that transient eddies should play a more important role in displacements of this jet. We also note that displacements of the surface wind correspond to those of the maximum Eady growth rate (Lindzen and Farrell, 1980). For example, the correlation between displacements of the maximum surface wind and the maximum 850 hPa Eady growth rate is 0.80 for the NH and 0.84 for the SH. Since the Eady growth rate is proportional to T y, this is consistent with our baroclinicity argument for tropospheric jet displacements and with the notion that storms tend to form in regions of high baroclinicity. We note that the changes during El Niño events are consistent with our results. El Niño is associated with tropical tropospheric warming by warmer Pacific SSTs, mid-latitude tropospheric cooling due to eddy-driven upward motion, and high-latitude tropospheric warming. The tropospheric jet, in turn, intensifies near the equatorward jet flank and weakens near the poleward flank, resulting in a strengthening and equatorward shift of the jet. The stronger jet is consistent with tropical warming and our LTHT TR experiment. The equatorward shift is consistent with cooling in the mid-latitudes and warming in the high-latitudes, as illustrated by our LTHT ML and LTHT HL experiments. GFDL tropospheric heating experiments To evaluate the robustness of the CAM responses to lower-tropospheric heating, we conducted analogous experiments with the GFDL AM2.1 (Anderson et al., 2004) using climatological SSTs. Figure 6 shows the corresponding annual mean temperature and zonal wind responses for LTHT2x, LTHT2x TR, LTHT2x ML, LTHT2x HL. Results are similar to those based on CAM (Fig. 3). Heating of the tropics results in negligible jet displacement of −0.09° in the NH and −0.04° in the SH. However, high-latitude heating results in equatorward jet displacement (−0.26° in the NH and −0.10° in the SH) and mid-latitude heating results in poleward jet displacement of 0.75° in the NH (95 % significant) and 0.24° in the SH. The weaker GFDL response, particularly the SH response to mid-latitude heating, is likely due to the use of climatological SSTs, which mutes the tropospheric response. Repeating the GFDL mid-latitude heating experiment with double the heating rate (0.4 K day−1) results in significant poleward jet displacement in both the NH and SH at 1.98° and 0.45°, respectively (not shown). We note that the main discrepancy between CAM and GFDL occurs for uniform heating of all latitudes. GFDL LTHT2x yields poleward jet displacement of 0.13° in the NH and 0.39° in the SH, the latter of which is significant at the 90 % confidence level. Although CAM LTHT2x yields similar, but weak, poleward jet displacement in the NH (0.12°), significant equatorward jet displacement in the SH occurs (−0.73°; Table 4). Table 4 shows that the LTHT responses are similar, but generally larger, when the heating rate is doubled (LTHT2x). This includes tropical expansion for mid-latitude heating, tropical contraction for high-latitude heating, and negligible displacement for tropical heating.
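For reference, the maximum Eady growth rate used earlier in this section is commonly written as sigma = 0.31 (f / N) |dU/dz| (Lindzen and Farrell, 1980). A small sketch of this diagnostic, with illustrative function names and input conventions, is given below.

```python
import numpy as np

OMEGA = 7.292e-5   # Earth's rotation rate [s-1]

def eady_growth_rate(dudz, N, lat):
    """Maximum Eady growth rate, sigma = 0.31 (f/N) |dU/dz|, in s-1.

    dudz: vertical shear of the zonal wind [s-1]; N: Brunt-Vaisala frequency [s-1].
    """
    f = 2.0 * OMEGA * np.abs(np.sin(np.deg2rad(lat)))
    return 0.31 * f * np.abs(dudz) / N

def max_eady_lat(dudz, N, lat, hemisphere="NH"):
    """Latitude of the maximum (e.g. 850 hPa) Eady growth rate in one hemisphere."""
    sigma = eady_growth_rate(dudz, N, lat)
    sel = np.where(lat > 0 if hemisphere == "NH" else lat < 0)[0]
    return lat[sel[np.argmax(sigma[sel])]]
```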
Unlike LTHT however, LTHT2x yields an "overall" equatorward jet displacement (NH + SH) of 0.61 • , which is dominated by the SH jet which moves equatoward by 0.73 • . Similarly, LTHT4x shows significant equatorward jet displacement of 1.46 • , which again is dominated by the SH jet. Evidence of nonlinear responses The bottom panel of Fig. 4 shows that the relationship between displacements of the maximum meridional tropospheric temperature gradient and the tropospheric jet also applies for the LTHT2x experiments. Note that as the heating rate is increased and the tendency for jet displacement is equatorward, the maximum T y also shows a similar tendency of equatorward displacement. LTHT4x, for example, shows significant equatorward displacement of the maximum T y in both the SH and NH, in agreement with equatorward jet displacement (Table 4), particularly in the SH. One aspect of the uniform heating experiments that can be deduced from the above figures (e.g., Fig. 3) is that the responses are often nonlinear. For example, the sum of the poleward SH jet displacements in the LTHT TR , LTHT ML and LTHT HL experiments is 0.77 • , while the shift with uniform heating is smaller and in the opposite direction (−0.13 • ). This behavior recurs in the LTHT2x experiments, with values of 0.40 • and −0.73 • respectively. This nonlinear response is similar to the idealized experiments of Butler et al. (2010). This nonlinearity could be caused by the effects of localized heating on the vertical propagation of wave energy, and linear interference effects between the wave response and the background stationary wave (Smith et al., 2010;Fletcher and Kushner, 2011). We note that the amplitude of jet displacement, however, appears to be more linear based on CMIP3 CO 2 experiments. Using the 10 4×CO2 CMIP3 models, we calculate the jet displacement using the 25 yr prior to doubling and the 25 yr prior to quadrupling (each compared to the corresponding control). For 4×CO2, the ensemble annual mean NH jet displacement is 0.98 • , compared to 0.35 • for 2×CO2; in the SH, the corresponding jet displacements are 1.74 • and 0.81 • , respectively. Thus, doubling the CO 2 forcing tends to yield double the jet displacement in CMIP3 experiments. This is similar to Wang et al. (2012), who found a linear relationship between the amplitude of the temperature response in the tropics and the tropospheric jet shift using a dry, idealized model (however, the eddy-driven jet exhibited an abrupt shift when tropical warming exceeded a critical amplitude). Nearly all of the CAM heating experiments, as well as 2×CO2, weaken the mean meridional circulation and strength of the tropospheric jet (not shown), in agreement with behavior of other GCMs and explainable by thermodynamic arguments (Held and Soden, 2006). LTHT TR , how-ever, strengthens the tropical circulation, with a 2 % increase in jet strength and a 4 % increase in mean meridional circulation strength. This strengthening increases to 5 % and 7 %, respectively, for LTHT2X TR , so this particular result is relatively linear. These latitudinally restricted heating responses are consistent with Brayshaw et al. (2008). Figure 7 shows the annual mean T and U response for LTHT and 2×CO2. Both feature similar patterns of warming, with maximum warming in the tropical upper troposphere and at high-latitudes. 
Both also feature an increase in the height of the tropopause, as well as an upward displacement of the tropospheric jets (∼10 hPa for 2×CO2 and ∼5 hPa for LTHT), which generally occurs with tropospheric heating (e.g., Lorenz and DeWeaver, 2007). The zonal jet displacement is also similar for the two experiments, with small SH displacements and larger NH poleward displacements, the latter of which is reminiscent of the positive NAM pattern. Note that the LTHT signal is much weaker than 2×CO2. However, the global annual mean surface temperature response for LTHT is also much weaker: 0.91 K versus 2.52 K for 2×CO2. Our experiments from Sect. 3.1 show that stratospheric cooling causes poleward jet displacement. Therefore, one reason why the LTHT experiments may yield less poleward jet displacement than those of 2×CO2, is because 2×CO2 is associated with significant stratospheric cooling due to increased longwave emission to space; LTHT, however, has no directly imposed stratospheric cooling (although there is an indirect stratospheric cooling response). This was evaluated by rerunning the LTHT, LTHT2x and LTHT4x experiments, but with a 10 % stratospheric ozone reduction (LTHT 10PLO3 , LTHT2x 10PLO3 , LTHT4x 10PLO3 ). For both LTHT2x and LTHT4x, adding stratospheric cooling yields less equatorward jet displacement, particularly the SH jet for LTHT2x, but the differences are generally not large (Table 5)). This suggests that the tropospheric warming is more important than the stratospheric cooling. Discussion of expansion scenarios Prior studies have attributed tropical expansion in a warmer climate to increases in the tropopause height (e.g., Lorenz and DeWeaver, 2007;Williams, 2006) and/or extratropical dry static stability (e.g., Frierson et al., 2007;Lu et al., 2007). Figure 8 shows the changes in these two quantities, in addition to the 500 hPa T y change and its climatology for four of the tropospheric heating experiments. These four experiments were chosen because they allow the evaluation of these previously proposed mechanisms. The top two panels compare the response based on heating of the mid-latitudes in either the lower (LTHT ML ) or upper (UTHT ML ) troposphere. Of the two, the latter results in a larger increase in gross dry static stability of mid-latitudes, as expected. Although both experiments yield poleward jet displacement, tropical expansion is generally larger with LTHT ML . Table 4 shows this is particularly true in the SH, where the annual mean jet displacement is 1.02 • for LTHT ML versus 0.53 • for UTHT ML . Similar conclusions exist based on the other metrics of tropical expansion (Table 5), particularly P − E. The larger LTHT ML poleward jet displacement is inconsistent with a smaller increase in stability; however, it is consistent with thermal wind balance and a larger poleward displacement of the 500 hPa SH T y . The bottom two panels show that heating of the highlatitudes and tropics results in an increase in tropopause height (decrease in pressure), yet neither experiment is associated with tropical expansion. LTHT2x HL actually yields significant equatorward jet displacement of −0.74 • (−0.32 • and −0.42 • for NH and SH, respectively) and LTHT2x TR yields negligible jet displacement of −0.23 • (Table 4). These responses are again consistent with the change in T y at 500 hPa, with a weakening of T y at high latitudes for LTHT2x HL , and a reinforcement of the climatological T y for LTHT2x TR . 
These results suggest the importance of other mechanisms in driving jet displacements, at least using CAM under our experimental design. Stratospheric pathway Section 3.1 showed cooling of the high-latitude stratosphere resulted in poleward jet displacement. Table 4 shows the largest NH jet displacements in response to stratospheric cooling occur during March-April-May (MAM), and the equivalent season in the SH (SON) similarly shows the largest response in its jet. The maximum spring response is likely due to a combination of two factors: the presence of solar radiation, so that the imposed ozone loss results in stratospheric cooling; and westerly stratospheric flow, which is conducive to strong planetary wave-mean flow interaction. The cooling of the high-latitude stratosphere increases the local meridional temperature gradient, and the stratospheric vortices in both hemispheres intensify in accord with thermal wind balance. The downward propagation of the stratospheric wind anomaly may be related to enhanced equatarward refraction of Rossby waves (Shindell et al., 2001;Rind et al., 2005). As a diagnostic tool to estimate the importance of this "stratospheric pathway", we estimate the wave refraction (λ) as the ratio of meridional to vertical Eliassen Palm (EP) flux: where u * v * is the meridional eddy momentum flux, v * T * is the meridional eddy heat flux, N is the Brunt-Vaisala frequency, R d is the dry air gas constant, H is the scale height, f is the Coriolis parameter and primes denote a zonal deviation. Because both eddy fluxes are estimated from monthly data, λ represents the refraction of the quasi-stationary, as opposed to the transient, waves. Table 6 shows that all stratospheric cooling experiments feature an increase in MAM NH wave refraction by 15-35 % -the season of maximum poleward jet displacement in the NH. Figure 9 shows the MAM responses of T , U and T y for one stratospheric cooling experiment, 10PLO3 HL . Also included is the leading pattern of zonal wind anomalies, and the mean meridional circulation, associated with the NAM/SAM pattern. This pattern is based on a principal component analysis of geopotential heights for the domains extending from 20-90 • N/S and from 1000 to 10 hPa. Data are weighted by the square root of the cosine of the latitude, as well as by the square root of the pressure interval represented by that level (Thompson and Wallace, 2000). The wind fields are then regressed upon the resulting standardized leading principal component (PC) time series. The changes in zonal wind and mean meridional circulation closely resembles the NAM (and to a lesser extent, the SAM) pattern, which suggests the response may involve wave mean-flow interaction and downward control theory (Haynes et al., 1991;Baldwin and Dunkerton, 1999). The MAM 10PLO3 HL response also features an anomalous tropospheric meridional circulation, with rising motion poleward of 60 • , sinking motion between 30-60 • , and equatorward flow in the upper troposphere, somewhat like an intensified Ferrel Cell, but stretched poleward. This thermally indirect circulation coincides with warming near its sinking branch (near 45 • ), and cooling in the rising branch (near 70 • ). Imposition of these temperature anomalies on the background state produces a poleward displacement of the maximum tropospheric T y , consistent with the tropospheric zonal wind anomaly near 60 • . 
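The wave-refraction diagnostic defined above can be sketched as follows. Using the standard quasi-geostrophic EP-flux components, Fy proportional to −[u*v*] and Fz proportional to (f R_d / (N² H)) [v*T*], the ratio becomes λ = −[u*v*] N² H / (f R_d [v*T*]). The paper's exact sign and scaling conventions are not reproduced here, so the formula and the code below should be read as an assumption-laden illustration of the diagnostic, not its definitive form.

```python
import numpy as np

R_D = 287.05     # dry air gas constant [J kg-1 K-1]
H = 7000.0       # scale height [m] (illustrative value)
OMEGA = 7.292e-5

def wave_refraction_index(uv_eddy, vT_eddy, N, lat):
    """Ratio of meridional to vertical quasi-geostrophic EP flux, lambda = Fy/Fz.

    uv_eddy: monthly-mean meridional eddy momentum flux [m2 s-2];
    vT_eddy: monthly-mean meridional eddy heat flux [K m s-1];
    N: Brunt-Vaisala frequency [s-1]. Intended for mid-latitude use (f not small).
    """
    f = 2.0 * OMEGA * np.sin(np.deg2rad(lat))
    Fy = -uv_eddy
    Fz = f * R_D * vT_eddy / (N ** 2 * H)
    return Fy / Fz

def refraction_change(lam_exp, lam_ctl, lat, band=(30.0, 60.0)):
    """Percent change of the area-weighted mid-latitude mean refraction index."""
    w = np.cos(np.deg2rad(lat))
    sel = (np.abs(lat) >= band[0]) & (np.abs(lat) <= band[1])
    mean = lambda x: np.sum(x[sel] * w[sel]) / np.sum(w[sel])
    return 100.0 * (mean(lam_exp) - mean(lam_ctl)) / abs(mean(lam_ctl))
```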
Figure 9 suggests that this anomalous residual circulation -particularly in the NHis balanced by a poleward shift of eddy westerly momentum flux convergence near 60 • , which sustains the westerly wind anomaly. The response is also associated with an increase in downward, equatorward wave energy and EP-flux divergence in the mid-latitude stratosphere and troposphere. We find that our analysis of the NH spring-time response also approximately holds for the SH (not shown). For example, each stratospheric cooling experiment features an increase in SH SON wave refraction: 23 %, 21 %, 32 % and 2 % for 10PLO3, 10PLO3 HL , 10PLO3 ML and 10PLO3 TR , respectively. For 2×CO2 and LTHT, Table 4 shows NH tropical expansion primarily occurs during two seasons: MAM and DJF. Similar to the stratospheric cooling experiment, both LTHT and 2×CO2 feature a NAM-like U response and meridional circulation response pattern, which is associated with a wavemodulated stratospheric pathway (not shown). Both signals feature an increase in wave refraction (Table 6), which is associated with an increase in downward, equatorward wave energy and EP-flux divergence in the mid-latitude stratosphere and troposphere. An anomalous meridional circulation in the troposphere and a poleward shift of eddy westerly momentum flux convergence near 60 • N, also occurs. Similarly, CMIP3 2×CO2 also features an increase in NH MAM and DJF wave refraction of 15 % and 13 %, respectively (Table 6). Moreover, a significant, but weak, relationship exists between CMIP3 2×CO2 NH wave refraction and jet displacement for both DJF and MAM, with correlations of 0.44 and 0.43, respectively. This suggests that a wave-modulated stratospheric pathway may play an important role in warming induced tropical expansion, particularly in the NH during MAM and DJF, regardless of the cause of the warming. We also note that similar behavior occurs for the mid-latitude lower-tropospheric heating experiments. Both LTHT ML and LTHT2x ML feature an increase in MAM downward, equatorward wave energy (increase in wave refraction, Table 6) and EP-flux divergence, as well as cooling of the polar stratosphere, a decrease in polar stratospheric geopotential heights and a decrease in high-latitude surface pressure (not shown). This is analogous (but opposite) to the negative NAM response to anomalous Eurasian snow cover (e.g., Cohen et al., 2007;Fletcher et al., 2009;Allen and Zender, 2010). Figure 10 shows a scatterplot of the tropospheric (850-300 hPa) jet displacement versus the difference in midand high-latitude warming amplification for the 5 global warming experiments (LTHT, LTHT2x, LTHT4x, 2×CO2 and 8×CO2) and 6 latitude-restricted heating experiments (LTHT TR , LTHT ML , LTHT HL , LTHT2x TR , LTHT2x ML , LTHT2x HL ). Warming amplification of mid-latitudes (AMP ML ) is defined as the log-pressure area weighted temperature (i.e., thickness) response between 30-60 • minus that between 0-30 • . For high-latitudes (AMP HL ), the log-pressure area weighted temperature response between 60-90 • is differenced with that between 30-60 • . Table 7 lists the amplification factors. This choice of this metric was inspired by the responses found in the latitude-restricted tropospheric heating experiments. When high-latitudes warm relative to mid-latitudes, AMP HL is positive, and we expect equatorward jet displacement. When the mid-latitudes warm relative to low-latitudes, AMP ML is positive, and we expect poleward jet displacement. 
Taking the difference, AMP ML − AMP HL , results in a quantity that accounts for these two competing effects. As the difference becomes more positive/less negative, then mid-latitude warming amplification dominates, and we expect more tropical expansion/less contraction; vice versa as AMP ML − AMP HL becomes less positive/more negative. We call this quantity the "Expansion Index" (EI). Based on The global warming experiments are generally consistent with this notion. Over all five experiments and seasons, the relationship is significant at the 99 % confidence level for NH, SH and both hemispheres, accounting for 42 %, 72 % and 55 % of the jet displacement, respectively. For the annual mean only, the expansion index accounts for 76 % of the NH and SH jet displacement. The dominant response in these experiments -equatorward SH jet displacement -is consistent with the large SH high-latitude warming, and large AMP HL . The diagnostic also explains the increased equatorward displacement when the heating rate is increased in the LTHT experiments. Increasing the heating rate generally results in amplified high-latitude warming, which is associated with equatoward jet displacement. Table 7 shows that the annual mean SH AMP HL increases from 0.22 to 1.02 K for LTHT to LTHT4x; and from 0.02 to 0.64 K in the NH. At the same time, however, AMP ML generally decreases, particularly in the NH. Furthermore, AMP HL is generally largest in the SH, relative to the NH, consistent with equatorward SH jet displacement in nearly all cases. The relationship is weakest in the NH for DJF and MAM, which may be related to the wave-modulated stratospheric pathway during these seasons. Without DJF and MAM, the expansion index accounts for 81 % of the variation in NH jet displacement. Similar conclusions exist when the three mid-tropospheric global warming experiments (MTHT, MTHT2x and MTHT4x) are included in the analysis. Figure 10 further supports the idea that part of the jet shift can be thought of as a geostrophic adjustment to an altered temperature profile -not only when certain latitude bands are heated, but also for global warming experiments like LTHT and 2×CO2. Our experiments suggest that poleward jet displacement is partially driven by mid-latitude heating, while equatorward jet displacement is partially driven by highlatitude heating. However, a wave-modulated stratospheric pathway during the NH active seasons is also important, resulting in poleward NH jet displacement during MAM and DJF which projects onto the positive phase of the NAM. For LTHT this mechanism eventually weakens with increased heating (e.g., LTHT4x), where high-latitude amplification dominates and the maximum T y is displaced equatorward, resulting in equatorward displacement of the tropospheric jets. Figure 10 also shows a similar relationship between the expansion index and jet displacement exist based on CMIP3 2×CO2 equilibrium experiments. Even though this metric does not directly account for the effects of CO 2 induced stratospheric cooling, it accounts for 45 %, 67 % and 56 % of the of the variation in jet displacement in the NH, SH and both hemispheres, respectively. Based on the annual mean only, EI accounts for 76 % of the NH and SH jet displacement. Similar results are obtained if jet displacements are based on others percentiles. 
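A compact sketch of the Expansion Index defined above follows. The tropospheric pressure range used for the log-pressure weighting and the treatment of the band boundaries are assumptions, since the text does not spell them out.

```python
import numpy as np

def band_mean_dT(dT, lat, p_hpa, lat_band, hem=1, p_range=(1000.0, 300.0)):
    """Area- and log-pressure-weighted mean temperature response in a latitude band.

    dT: zonal-mean temperature response, shape (nlat, npressure); hem = +1 for the
    NH, -1 for the SH. The 1000-300 hPa weighting range is an assumption.
    """
    in_band = (hem * lat >= lat_band[0]) & (hem * lat <= lat_band[1])
    wlat = np.cos(np.deg2rad(lat)) * in_band
    wp = np.abs(np.gradient(np.log(p_hpa))) * ((p_hpa <= p_range[0]) & (p_hpa >= p_range[1]))
    w = np.outer(wlat, wp)
    return np.sum(dT * w) / np.sum(w)

def expansion_index(dT, lat, p_hpa, hem=1):
    """EI = AMP_ML - AMP_HL, following the definition in the text."""
    amp_ml = band_mean_dT(dT, lat, p_hpa, (30.0, 60.0), hem) - band_mean_dT(dT, lat, p_hpa, (0.0, 30.0), hem)
    amp_hl = band_mean_dT(dT, lat, p_hpa, (60.0, 90.0), hem) - band_mean_dT(dT, lat, p_hpa, (30.0, 60.0), hem)
    return amp_ml - amp_hl
```

A positive EI (mid-latitude warming amplification dominating) would then be read as favouring tropical expansion, and a negative EI as favouring contraction, as argued above.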
Using the 70th-95th percentile in 5 percentile increments, EI accounts for 61 % to 77 % of the annual mean jet displacement; using the alternate, maximum U method, 66 % of the annual mean jet displacement is accounted for. This relationship is somewhat better than the relationship Lu et al. (2007) found between tropical expansion and tropopause height (stability); there, increases in extratropical (35-55°) tropopause height accounted for 66 % of the variation in annual mean MMC expansion using CMIP3 A2 experiments. More recently, Lu et al. (2008) found a significant relationship between poleward MMC displacement and a decrease in Phillips criticality, the latter of which occurred primarily due to an increase in extratropical static stability. In the SH during DJF, Phillips criticality accounted for 67 % of the variation in MMC expansion. Similarly, the expansion index accounts for 92 % of the jet displacement in the SH during DJF. We also find that it accounts for most of the DJF SH variation in other metrics of tropical expansion, including P − E (81 %) and MMC (81 %). Similar, but weaker, results exist for SH ANN, where the expansion index accounts for 45 % and 46 % of the variation in P − E and MMC, respectively. Thus, the expansion index helps to explain not only dynamical measures of tropical expansion, but hydrological measures too, particularly in the SH. Tropospheric pathway We also estimated the relationship between the expansion index and tropospheric jet displacement using the 10 1 % to 4×CO2 CMIP3 experiments, and with five reanalyses, including NCEP/NCAR (Kalnay et al., 1996), NCEP-DOE (Kanamitsu et al., 2002), MERRA (Rienecker et al., 2011), ERA40 (Uppala et al., 2005) and ERA-Interim (Dee et al., 2011). The first three reanalyses are analyzed from 1979, ERA40 from 1979, and ERA-Interim from 1989. Based on the annual mean, the expansion index accounts for 70 % of the variance in NH and SH jet displacements in 4×CO2 CMIP3 experiments; and 55 % of the corresponding jet displacements in reanalyses. We conclude by comparing the CAM global warming experiments with the CMIP3 2×CO2 experiments. Similar to the CAM experiments, the EI-jet displacement relationship is weakest in the NH during DJF, where it accounts for only 46 % of the variation in jet displacement. We also note that the NH MAM CAM 2×CO2 jet displacement is much larger than the CMIP3 ensemble (1.70° versus −0.01°), which is consistent with more MAM wave refraction in CAM 2×CO2 (56 % versus the CMIP3 ensemble mean of 15 %). CAM 2×CO2 also features less SH jet displacement than CMIP3 (0.09° versus 0.73° for the annual mean), despite a similar expansion index (−0.14 versus −0.18 for CMIP3). Although the reasons are not clear, the other metrics of CAM 2×CO2 tropical width both show greater SH displacement (0.60° for P − E and 0.42° for MMC). In addition to these equilibrium experiments, we have also conducted transient CO2 CAM experiments over the latter half of the 20th century using several ensembles (not shown) and obtain similar minimal SH jet displacement. This suggests CAM3 is less sensitive to CO2-induced SH poleward jet displacement, relative to other CMIP3 models. Conclusions The CAM3 general circulation model is used to investigate how tropical width responds to idealized thermal perturbations, focusing on zonal displacement of the tropospheric jets.
The heat sources include global and zonally restricted lower-tropospheric warmings and stratospheric coolings, which coarsely represent possible impacts of ozone or aerosol changes. Our results show that global stratospheric cooling, as well as stratospheric cooling of the high-and midlatitudes, yields poleward jet displacement. This response is related to wave-mean flow interaction and involves an increase in wave refraction, and downward propagation of the stratospheric wind anomaly. This response is in general agreement with similar studies using idealized models (e.g., Haigh et al., 2005) and supports the recent findings of Polvani et al. (2011), who showed stratospheric ozone loss is the main driver of twentieth century atmospheric circulation changes in the Southern Hemisphere (SH). CAM3 tropospheric heating experiments show that highlatitude heating results in equatorward jet displacement; midlatitude heating results in poleward jet displacement; and low-latitude heating yields negligible jet displacement (but a significant increase in the strength of the tropical circulation). Similar results were obtained with the GFDL AM2.1. Although our high-latitude response is consistent with a recent study using a simplified GCM (Butler et al., 2010), our tropical heating results differ - Butler et al. (2010) found tropical heating forces a significant poleward shift of the extratropical storm tracks (and tropospheric jets). We note that the Butler et al. (2010) results appear to contradict the El Ninõ response, which is associated with tropical tropospheric warming and equatorward displacement of the jets . Reasons for this discrepancy are unclear, but may be related to the meridional extent of the forcing and subsequent temperature response. The tropical heating in Butler et al. (2010) extends all the way to 45 • N/S, whereas our LTHT TR heating extends to 30 • N/S. As noted in Butler et al. (2010), the projection of this heating into the mid-latitudes may play a significant role in the jet shift. Perhaps a more appropriate comparison to their tropical heating experiment is our LTHT TRML experiment, where heat is added from 0-60 • N/S. Similar to the Butler et al. (2010) result, LTHT TRML yields mid-latitude warming and poleward jet displacement. Additional possibilities for the discrepancy are the existence of topography in our model, which is important for the wave-modulated stratospheric pathway; and moisture/convective processes which could change the way heat is distributed. Globally uniform lower tropospheric heating (LTHT) and 2×CO2 yield similar tropical width responses. Both yield negligible jet displacement in the SH and poleward jet displacement in the NH, particularly during DJF and MAM. Similar to the stratospheric cooling experiments, the boreal winter/spring expansion is related to a wave-modulated stratospheric pathway and a positive NAM-like response. This result is consistent with Previdi and Liepert (2007), who showed 50 % of the subtropical dry zone expansion can be explained by positive trends in the annular modes. Other metrics of tropical displacement, including P − E and MMC, generally yield a similar response. However, there are some differences that warrant further study. Jet shifts associated with the tropospheric heating experiments are related to zonal displacements of the maximum meridional tropospheric temperature gradient. 
Heating the mid-latitudes results in maximum mid-latitude warming, consistent with a down gradient eddy heat flux response (i.e., oriented away from the latitude of maximum heating). This weakens the tropospheric meridional temperature gradient (T y ) on the equatorward flank of the T y maximum and strengthens T y on the poleward flank of the maximum. The jet responds by moving poleward, consistent with a geostrophic adjustment to the altered meridional temperature gradient, in accord with thermal wind balance. The opposite occurs when heat is added to the high-latitudes. This relationship also exists for global warming experiments, including LTHT and 2×CO2. Some of our experiments are inconsistent with previously proposed mechanisms of tropical expansion (e.g., Lorenz and DeWeaver, 2007;Frierson et al., 2007;Lu et al., 2007). For example, heating the tropical troposphere results in a global increase in tropopause height, yet negligible poleward tropical displacement. Our experiments highlight the importance of altered tropospheric temperature gradients and a wavemodulated stratospheric pathway. For the global warming experiments, the "Expansion Index", which quantifies the difference between mid-latitude and high-latitude warming amplification, accounts for over half of the tropospheric jet displacements; this increases to over 70 % for annual mean jet displacements. A similar relationship also exists for 2×CO2 CMIP3 equilibrium experiments and 1 % to 4×CO2 CMIP3 transient experiments. Five reanalyses also show the relationship exists for recent climate trends. This study has important implications for heterogeneous warming agents, such as tropospheric ozone and absorbing aerosols, as briefly discussed by Allen and Sherwood (2011). Such non-CO 2 forcings -particularly those that warm the mid-latitudes and are underestimated by most models (e.g., Ramanathan and Carmichael, 2008;Koch et al., 2009)may help explain the discrepancy between modeled and observed estimates of recent tropical expansion. Moreover, a recent study by Scaife et al. (2012) found increased CO 2 in GCMs with a well-resolved stratosphere yielded an equatorward storm track shift, particularly over the Atlantic during winter. This implies the observed poleward shift may be due more to heterogeneous warming agents, as opposed to greenhouse gases. We are currently investigating the importance of non-CO 2 forcings in recent tropical expansion.
Survey on Packet Marking Algorithms for IP Traceback Distributed Denial of Service (DDoS) attack is an unavoidable attack. Among various attacks on the network, DDoS attacks are difficult to detect because of IP spoofing. The IP traceback is the only technique to identify DDoS attacks. The path affected by DDoS attack is identified by IP traceback approaches like Probabilistic Packet marking algorithm (PPM) and Deterministic Packet Marking algorithm (DPM). The PPM approach finds the complete attack path from victim to the source where as DPM finds only the source of the attacker. Using DPM algorithm finding the source of the attacker is difficult, if the router get compromised. Using PPM algorithm we construct the complete attack path, so the compromised router can be identified. In this paper, we review PPM and DPM techniques and compare the strengths and weaknesses of each proposal. INTRODUCTION Distributed Denial of service (DDoS) attacks are becoming a major problem now a days.This type of attacks not only allows the authorized users from accessing the specific network services or resources but also propel a large amount of traffic on the network.There is a huge growth of internet users day to day.As the number of users are growing, the crime is also growing.Many techniques like input debugging, controlled flooding and ICMP messaging have been developed to identify attackers 1,4 but none of these techniques have been succeeded.To find the DDoS attackers the only method is IP traceback because the source address can be spoofed.IP traceback is the process of finding the source router of the attacker who created a heavy traffic by sending spoofed packets.The IP traceback can be done in two ways using Probabilistic Packet Marking algorithm (PPM) and Deterministic Packet Marking algorithm (DPM).In both techniques the routers on the path to the victim stores the traceback data in the identification field of IPv4 and may also use fields like Type of Service and Reserve flag fields shown in fig. 1.The victim after receiving the marked packets using the traceback data finds the source router of the attacker.In this paper we will review the PPM and DPM techniques. Probabilistic Packet Marking(PPM) Probabilistic Packet Marking algorithm helps in reconstructing the attack path from victim to the source.In this technique each router in the attack path as shown in fig. 2 marks the packet with the partial IP address information called the marking information.This marking information is placed into the IP packet with a fixed probability 5,12 .After receiving the partial path information from the marked packets the victim reconstructs the attack path.Some of the Probabilistic Packet Marking techniques are discussed hereafter. Practical network support for IP Traceback schemes by Savage, Wetherall, Karlin, Anderson Savage et.al 4 .in their method proposed two components, marking procedure and path reconstruction procedure.In marking procedure each router in the attack path generates a random number X.If the random number X is less than the marking probability P m then the router marks the packet with the part (fragment) of the marking information, if not the upstream routers' marking information is exclusive 'OR'ed with its corresponding part of the marking information.The marking information consists of IP address (32 bits) and a random hash value (32 bits) which is Bit interleaved (72 bits).The receiver after receiving this marking information constructs the attack path. 
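To illustrate the general idea of probabilistic marking and path reconstruction, the following Python sketch uses a simplified node-sampling variant with a distance field; it deliberately omits the fragmentation and XOR of edge identifiers used by Savage et al., so it should be read as a toy model of PPM rather than their exact scheme.

```python
import random
from collections import Counter

def ppm_mark(path, p_mark=0.04, n_packets=20000, seed=0):
    """Simulate simplified probabilistic packet marking along an attack path.

    `path` lists routers from the attacker side towards the victim. Each router
    overwrites the mark with probability p_mark and resets the distance to 0;
    otherwise, if a mark is present, it increments the distance.
    """
    rng = random.Random(seed)
    marks = []
    for _ in range(n_packets):
        mark, dist = None, 0
        for router in path:
            if rng.random() < p_mark:
                mark, dist = router, 0
            elif mark is not None:
                dist += 1
        if mark is not None:
            marks.append((mark, dist))
    return marks

def reconstruct_path(marks):
    """Victim side: order routers by the most common distance seen for each mark.
    Distance 0 corresponds to the router closest to the victim in this convention."""
    by_router = {}
    for router, dist in marks:
        by_router.setdefault(router, Counter())[dist] += 1
    return [r for r, c in sorted(by_router.items(), key=lambda kv: kv[1].most_common(1)[0][0])]

attack_path = ["R7", "R5", "R3", "R2", "R1"]   # attacker side ... victim-adjacent router
print(reconstruct_path(ppm_mark(attack_path)))  # victim-to-attacker order of routers
```

Marks from routers far from the victim survive with probability roughly p(1 − p)^d, which is why the victim needs many packets before distant routers appear in the reconstruction.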
The expected number of packets needed to reconstruct the attack path with probability q is where d is the distance Advantages ISP support not required. Advanced and Authenticated marking schemes for IP Traceback by Song, Perrig Song and Perrig 5 in their Advanced scheme-I marks the packet with the hash value of the IP address instead of the IP address itself.A 11 bit hash value is calculated to each IP address in the attack path.In this technique two independent hash functions are used to distinguish the order of two routers in the XOR result.The advanced marking scheme-II technique uses many number of hash functions.This approach uses flag field to indicate which hash function is used for the marking.If the FID is known then the R i is simply calculated using h(<FID, R i >).Thus different FIDs indicated different independent hash functions.In authenticated marking scheme, Song and Perrig proposed a technique to authenticate the packet marking so that the victim can detect the compromised routers. Advantages Low network and router overhead • Lower computation overhead • Authenticated marking scheme provides • efficient authentication of routers' markings. Disadvantages In this technique the 11 bit hash value is not • sufficient to avoid collision (i.e., the different router address may encode the same hash value). Though efficient and accurate than savage et • al technique, still gives many false positives in DDoS attacks.Network map is needed to reconstruct the • attack path. Hash-Based IP Traceback by Snoeren, Partridge, Sanchez, Jones, Tchakountio, Kent Snoeren et al 6 .proposed a Source Path Isolation Engine (SPIE) to trace the source of a particular IP packet.Packet's destination and time of receipt is provided to the routers to trace the path. Advantages Traceback is performed by using just a single packet. Disadvantages Requires large amount of storage space • and hardware changes for packet logging at router. A precise termination condition of the probabilistic packet marking algorithm by Wong Tsz-Yeung, Wong Man-Hon, Lui Chi-Shing This algorithm 7 uses the savage et.al. marking procedure but uses a precise termination condition while constructing the attack graph.It takes less number of packets and guarantees that the constructed graph is correct. Advantages Does not require any prior knowledge about • the network topology.Upon ter mination of the algorithm the • constructed graph is the attack graph. Disadvantages Because it is using the PPM algorithm, all the • disadvantages of PPM algorithm are brought into this method also. IP Traceback based on Chinese Remainder Theorem by Lih-Chyau, Liu Tzong-Jye, Yang Jyun-Yan In Lih-Chyau Wuu et.al 8 .technique the characteristic of the IP address is passed with the IP address inorder to reduce the false combination.The IP address characteristic is calculated using the Chinese Remainder Theorem.The marking information is divided into five fragments.The victim after receiving the IP address parts combines them and finds the characteristic of the combined IP address.If the calculated IP address characteristic is equal to the received IP address characteristic then that IP address is considered as valid. Advantages This technique has reduced the number of • combinations and hence the number of false positives. It takes less number of packets to reconstruct • the attack path. 
Disadvantages: It cannot be applied directly to IPv6. IP Traceback through Modified Probabilistic Packet Marking algorithm using Chinese Remainder Theorem by Bhavani, Janaki, Sridevi In this technique 9 a unique X value, calculated using the Chinese Remainder Theorem, is used as the marking information. Advantages: It can be applied to IPv6. Disadvantages: A network map is needed to reconstruct the attack path. Deterministic Packet Marking (DPM) Deterministic Packet Marking helps in finding the source router of an attacker's packet but it will not find the attack path from victim to attacker as done in PPM. In this technique only the ingress router, as shown in fig. 3, marks the packet with its IP address 13,16. IP Traceback with Deterministic Packet Marking Andrey Belenky and Nirwan Ansari 13 proposed a technique where the ingress router marks the packet with parts of its IP address. The IP address is divided into two parts. When the first part is sent the reserved flag is set to "0" and to "1" if the second part is sent. At the victim the two parts are combined to find the attacker. Advantages: It is easy to implement. Disadvantages: Requires knowledge about ingress routers. If the ingress router is compromised then the attacker is not found. Improved Deterministic Packet Marking Algorithm The IDPM technique 14 is effective in finding spoofed packets. In this technique the ingress router deterministically marks the packets with the IP address and the hash value of the IP address. The intermediate routers calculate the hash value of the IP address in the Identification field. If the calculated hash value is not equal to the hash value in the Identification field then the packet is assumed to be spoofed and it is dropped. Advantages: It is simple and scalable. It is suitable for finding other types of attacks than DDoS attacks. Disadvantages: Requires knowledge about ingress routers. False positives may be more. In the MOD-based technique, the MOD server identifies the unique mark and stores the mark, source address and time stamp into its database. With a sudden increase in the amount of attack flows, another router may discover the attack and notify the MOD server. The MOD server will store this information in its database. When the victim performs the traceback process it requests the MOD server for the IP addresses related to these unique marks. In this way the victim is able to find the source attacker. Advantages: It is simple and scalable. The number of packets needed to reconstruct the attack path is very small. Disadvantages: The MOD server is a bottleneck. All packets will be enlarged, which will increase the network overhead.
Table 1: Comparative study of PPM and DPM.
PPM: Less overhead, because all the routers participate in marking with some probability. DPM: As attackers send an enormous number of packets, marking all the packets is time consuming and an overhead at the ingress router.
PPM: Network overhead is less than that of DPM, because only some packets are marked at each router. DPM: All packets will be enlarged, which will increase the network overhead.
PPM: If a router gets compromised then it can be identified while constructing the path back. DPM: If the ingress router gets compromised then it is impossible to find the attacker.
PPM: The number of packets needed to reconstruct the attack path is very large. DPM: The number of packets needed to find the ingress router (source router) is very small.
PPM: Finds the complete attack path. DPM: Finds only the source router.
Flexible Deterministic Packet Marking: An IP Traceback system to find the real source of attacks The FDPM technique 15 is effective in finding the real sources of the attackers. In this technique the marking of packets depends on the load of the router. If the load of the router exceeds some threshold value then that router differentiates between the normal packets and the attack packets. Only the attack packets are marked. Advantages: Requires a small number of packets to complete the traceback process. Traces a large number of sources in one traceback process. Low false positive rate. Disadvantages: All packets will be enlarged, which will increase the network overhead. If the ingress router is compromised then the attacker is not found. CONCLUSIONS Many packet marking techniques have been studied. These mechanisms differ in their working principle but are all used to detect the source of the attacker. In this paper, the advantages and disadvantages of PPM and DPM techniques have been discussed. The comparative study of these techniques is shown in Table 1. The scope of future work is to reduce the number of packets needed to reconstruct the attack path using PPM. Fig. 3: Deterministic Packet Marking process. Packets are marked deterministically by only the ingress routers with their IP address information as they pass through them.
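As a companion to Fig. 3, a minimal sketch of the two-fragment DPM marking of Belenky and Ansari (splitting the 32-bit ingress address across the 16-bit Identification field, with the reserved flag selecting the half) might look as follows. The flag convention and the single-attacker assumption are simplifications; with many ingress routers, the naive recombination below would produce false combinations, which is exactly what the later DPM variants try to avoid.

```python
import random

def dpm_mark(ingress_ip, rng=random):
    """One DPM mark from an ingress router: a randomly chosen half of its address.

    Returns (id_field, reserved_flag), where id_field is a 16-bit fragment and the
    flag is 0 for the high half, 1 for the low half (the paper's convention for
    which half is 'first' may differ; only consistency matters here).
    """
    ip_int = sum(int(octet) << (8 * (3 - i)) for i, octet in enumerate(ingress_ip.split(".")))
    if rng.random() < 0.5:
        return (ip_int >> 16) & 0xFFFF, 0
    return ip_int & 0xFFFF, 1

def dpm_recover(marks):
    """Victim side: combine one fragment of each kind into the ingress address."""
    high = next(frag for frag, flag in marks if flag == 0)
    low = next(frag for frag, flag in marks if flag == 1)
    ip_int = (high << 16) | low
    return ".".join(str((ip_int >> (8 * (3 - i))) & 0xFF) for i in range(4))

marks = []
while len({flag for _, flag in marks}) < 2:    # keep collecting until both halves arrive
    marks.append(dpm_mark("192.0.2.17"))
print(dpm_recover(marks))                      # -> 192.0.2.17
```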
Resolving the mesospheric nighttime 4 . 3 μ m emission puzzle : comparison of the CO 2 ( ν 3 ) and OH ( ν ) emission models In the 1970s, the mechanism of vibrational energy transfer from chemically produced OH(ν) in the nighttime mesosphere to the CO2(ν3) vibration, OH(ν)⇒ N2(ν)⇒ CO2(ν3), was proposed. In later studies it was shown that this “direct” mechanism for simulated nighttime 4.3 μm emissions of the mesosphere is not sufficient to explain space observations. In order to better simulate these observations, an additional enhancement is needed that would be equivalent to the production of 2.8–3 N2(1) molecules instead of one N2(1) molecule in each quenching reaction of OH(ν)+N2(0). Recently a new “indirect” channel of the OH(ν) energy transfer to N2(ν) vibrations, OH(ν)⇒ O(1D)⇒ N2(ν), was suggested and then confirmed in a laboratory experiment, where its rate for OH(ν = 9)+O(3P) was measured. We studied in detail the impact of the “direct” and “indirect” mechanisms on CO2(ν3) and OH(ν) vibrational level populations and emissions. We also compared our calculations with (a) the SABER/TIMED nighttime 4.3 μm CO2 and OH 1.6 and 2.0 μm limb radiances of the mesosphere–lower thermosphere (MLT) and (b) with groundand space-based observations of OH(ν) densities in the nighttime mesosphere. We found that the new “indirect” channel provides a strong enhancement of the 4.3 μm CO2 emission, which is comparable to that obtained with the “direct” mechanism alone but assuming an efficiency that is 3 times higher. The model based on the “indirect” channel also produces OH(ν) density distributions which are in good agreement with both SABER limb OH emission observations and ground and space measurements. This is, however, not true for the model which relies on the “direct” mechanism alone. This discrepancy is caused by the lack of an efficient redistribution of the OH(ν) energy from higher vibrational levels emitting at 2.0 μm to lower levels emitting at 1.6 μm. In contrast, the new “indirect” mechanism efficiently removes at least five quanta in each OH(ν ≥ 5)+O(3P) collision and provides the OH(ν) distributions which agree with both SABER limb OH emission observations and groundand space-based OH(ν) density measurements. This analysis suggests that the important mechanism of the OH(ν) vibrational energy relaxation in the nighttime MLT, which was missing in the emission models of this atmospheric layer, has been finally identified. Published by Copernicus Publications on behalf of the European Geosciences Union. 9752 P. A. Panka et al.: The 4.3 μm nighttime emission puzzle Introduction A detailed study of nighttime 4.3 µm emissions was conducted by López-Puertas et al. (2004) aimed at determining the dominant mechanisms of exciting CO 2 (ν 3 ), where ν 3 is the asymmetric stretch mode that emits 4.3 µm radiation.The nighttime measurements of SABER channels 7 (4.3 µm), 8 (2.0 µm), and 9 (1.6 µm) for geomagnetically quiet conditions were analyzed, where channels 8 and 9 are sensitive to the OH (ν ≤ 9) overtone radiation from levels ν = 8-9 and ν = 3-5, respectively.López-Puertas et al. (2004) showed a positive correlation between 4.3 µm and both OH channel radiances at a tangent height of 85 km.This correlation was associated with the transfer (Kumer et al., 1978) of energy of the vibrationally excited OH(ν) produced in the following chemical reaction (hereafter "direct" mechanism): and then further to CO 2 (ν 3 ) vibrations N 2 (1) + CO 2 (0) ↔ N 2 (0) + CO 2 (ν 3 ).(R3) However, López-Puertas et al. 
(2004) showed that calculations based on the Kumer et al. (1978) model do not reproduce the 4.3 µm radiances observed by SABER. Although accounting for energy transfer from OH(ν) did provide a substantial enhancement to the 4.3 µm emission, a 40 % difference between simulated and observed radiance remained (for SABER scan 22, orbit 01264, 77° N, 3 March 2002, which was studied in detail) for altitudes above 70 km. In order to reproduce the measurements, these authors found that, on average, 2.8-3 N 2 (1) molecules (instead of Kumer's suggested value of 1) need to be produced after each quenching of an OH(ν) molecule in Reaction (R2). Alternative excitation mechanisms that were theorized to enhance the 4.3 µm radiance (i.e., via O 2 and direct energy transfer from OH to CO 2 ) were tested but found to be insignificant. Recently, Sharma et al. (2015) suggested a new "indirect" mechanism of the OH vibrational energy transfer to N 2 , i.e., OH(ν) ⇒ O( 1 D) ⇒ N 2 (ν). Accounting for this mechanism, but only considering OH(ν = 9), these authors performed simple model calculations to validate its potential for enhancing the mesospheric nighttime 4.3 µm emission from CO 2 . They reported a simulated radiance enhancement between 18 and 55 % throughout the MLT. In a more recent study, Kalogerakis et al. (2016) provided a definitive laboratory confirmation for the validity of this new mechanism and measured its rates for OH(ν = 9) + O. We studied in detail the impact of the "direct" and "indirect" mechanisms on the CO 2 (ν 3 ) and OH(ν) vibrational level populations and emissions and compared our calculations with (a) the SABER/TIMED nighttime 4.3 µm CO 2 and OH 1.6 and 2.0 µm limb radiances of the MLT and (b) with the ground- and space-based observations of the OH(ν) densities in the nighttime mesosphere. The study was performed for quiet (non-auroral) nighttime conditions to avoid accounting for interactions between charged particles and molecules, whose mechanisms still remain poorly understood. Non-LTE model A non-LTE (non-local thermodynamic equilibrium) analysis was applied to CO 2 and OH using the non-LTE ALI-ARMS (Accelerated Lambda Iterations for Atmospheric Radiation and Molecular Spectra) code package (Kutepov et al., 1998; Gusev and Kutepov, 2003; Feofilov and Kutepov, 2012), which is based on the accelerated lambda iteration approach (Rybicki and Hummer, 1991). Our CO 2 non-LTE model is described in detail by Feofilov and Kutepov (2012). We modified its nighttime version to account for the "direct" mechanism, Reactions (R1)-(R3), in a way consistent with that of López-Puertas et al. (2004) and added the "indirect" mechanism of Sharma et al. (2015) and Kalogerakis et al. (2016) as described in detail below. Our OH non-LTE model resembles that of Xu et al. (2012). Sharma et al. (2015) suggested an additional mechanism that contributes to the CO 2 (ν 3 ) excitation at nighttime and discussed in detail its available experimental and theoretical evidence. According to this mechanism, highly vibrationally excited OH(ν), produced by Reaction (R1), rapidly loses several vibrational quanta in collisions with O( 3 P) through a fast, spin-allowed, vibration-to-electronic energy transfer process that produces O( 1 D), Reaction (R4). Recently, Kalogerakis et al. (2016) presented the first laboratory demonstration of this new OH(ν) + O( 3 P) relaxation pathway and measured its rate coefficient for ν = 9.
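The reaction equations labeled (R1), (R2), (R4), and (R5) in the text did not survive extraction. For reference, a reconstruction of the scheme implied by the surrounding discussion is given below; the exact product channels and quantum-number bookkeeping are our assumptions, based on the standard nighttime OH Meinel chemistry, and not a verbatim reproduction of the original equations.

\begin{align}
  \mathrm{H} + \mathrm{O_3} &\rightarrow \mathrm{OH}(\nu \le 9) + \mathrm{O_2}, & \text{(R1)}\\
  \mathrm{OH}(\nu) + \mathrm{N_2}(0) &\rightarrow \mathrm{OH}(\nu - 1) + \mathrm{N_2}(1), & \text{(R2)}\\
  \mathrm{N_2}(1) + \mathrm{CO_2}(0) &\leftrightarrow \mathrm{N_2}(0) + \mathrm{CO_2}(\nu_3), & \text{(R3)}\\
  \mathrm{OH}(\nu \ge 5) + \mathrm{O}(^3P) &\rightarrow \mathrm{OH}(\nu' \le \nu - 5) + \mathrm{O}(^1D), & \text{(R4)}\\
  \mathrm{O}(^1D) + \mathrm{N_2}(0) &\rightarrow \mathrm{O}(^3P) + \mathrm{N_2}(\nu). & \text{(R5)}
\end{align}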
The production at nighttime of electronically excited O( 1 D) atoms in Reaction (R4) triggers well-known pumping mechanism of the 4.3 µm emission, which was studied in detail for daytime (Nebel et al., 1994;Edwards et al., 1996). Here O( 1 D) atoms are first quenched by collisions with N 2 in a fast spin-forbidden energy transfer process: then N 2 (ν) transfers its energy to ground state N 2 via a very fast single-quantum VV (vibrational-vibrational) process: leaving N 2 molecules with an average of 2.2 vibrational quanta, which is then followed by Reaction (R3). Collisional rate coefficients We use, in our CO 2 non-LTE model, the same VT (vibrational-translational) and VV collisional rate coefficients for the CO 2 lower vibrational levels as those of López-Puertas et al. (2004).However, a different scaling of these basic rates is applied for higher vibrational levels using the first-order perturbation theory as suggested by Shved et al. (1998). The reaction rate coefficients applied in this study for modeling OH(ν) relaxation transfer of OH(ν) vibrational energy to the CO 2 (ν 3 ) mode are displayed in Table 1.The total chemical production rate of OH(ν) in Reaction (R1) was taken from Sander et al. (2011) and the associated branching ratios for ν were taken from Adler-Golden (1997).We treat Reaction (R2) both as a single (1Q, ν = 1) and multiquantum (MQ, ν = 2 or 3) quenching process.We use the rate coefficient of this reaction (with associated branching ratios) taken from Table 1 of Adler-Golden (1997) and multiplied it by a low temperature factor of 1.4 (Lacoursière et al., 2003) for MLT regions.The rate coefficient for Reaction (R3) was taken from Shved et al. (1998). Model inputs and calculation scenarios The nighttime atmospheric pressure, temperature, and densities of trace gases and main atmospheric constituents for calculations presented below were taken from the WACCM (Whole Atmosphere Community Climate Model) model (Marsh et al., 2013). The following sets of processes and rate coefficients were used in our calculations: This is our basic model version with both "direct", Reaction (R2), and "indirect", Reaction (R4) + (R5), mechanisms working together when Reaction (R2) is treated as the single-quantum process (ν = 1) as was suggested by Kumer et al. (1978), though Reaction (R6) is treated as the multi-quantum process (any ν ≤ ν −1). A new mechanism, Reactions (R4) and (R5), replaces here Reaction (R7), which is used in other models described above. Vibrational temperatures of the CO 2 (ν 3 ) levels The vibrational temperature T ν is defined from the Boltzmann formula which describes the excitation degree of level ν against the ground level 0.Here g ν and E ν are the statistical weight and the energy of level ν, respectively.If T ν = T kin then level ν is in LTE. Figure 1 shows the vibrational temperatures of the CO 2 levels of four isotopes, giving origin to 4.3 µm bands, which dominate the SABER nighttime signal (López-Puertas et al., 2004).These results were obtained for SABER scan 22, orbit 01264, 77 • N, 3 March 2002.The same scan was used for the detailed analysis presented in the work by López-Puertas et al. (2004).The kinetic temperature retrieved for this scan from the SABER 15 µm radiances (SABER data version 2.0) and vibrational temperature of N 2 (1) are also shown.Solid lines in Fig. 
1 represent simulations with our basic model [(OH-N2 1Q) & (OH-O2 MQ) & Reactions (R4), (R5)], when both the "direct" process, Reaction (R2) (in its single-quantum version, as was suggested by Kumer et al., 1978), and the new "indirect" process, Reactions (R4) + (R5), are included. For comparison we also show vibrational temperatures (dashed lines) for the model [(OH-N2 3Q) & (OH-O2 1Q) & Reaction (R7)], where the "indirect" mechanism is off and the "direct" process is treated as a three-quantum one, which is equivalent to the 3 times higher efficiency suggested by López-Puertas et al. (2004). In both simulations, CO 2 (00011) of the main isotope 626 and N 2 (1) have almost identical vibrational temperatures up to ∼ 87 km, which is caused by an efficient VV exchange (Reaction R3). Figure 2 displays our simulations of SABER channel 7 (4.3 µm) radiances for inputs which correspond to the measurement conditions of the SABER scan described in Sect. 3.1. The calculations also account for the minor contribution to channel 7 radiation emitted by the OH(ν ≤ 9) vibrational levels. Our simulation for this scan with the [(OH-N2 1Q) & (OH-O2 1Q) & Reaction (R7)] set of rate coefficients is shown by the violet curve. The turquoise curve displays our results for the rate coefficient set [(OH-N2 3Q) & (OH-O2 1Q) & Reaction (R7)], which simulates the model suggested by López-Puertas et al. (2004) with the factor of 3 increased efficiency of Reaction (R2). One may see that treating Reaction (R2) as a three-quantum VV process strongly enhances the pumping of the CO 2 (ν 3 ) vibrations and that the 4.3 µm radiance is in agreement with the López-Puertas et al. (2004) results. The blue curve in Fig. 2 displays our run with the model [(OH-N2 3Q) & (OH-O2 MQ) & Reaction (R7)]. In this model Reaction (R6) is treated, following Adler-Golden (1997), as a multi-quantum VV process. Compared to the previous model this run shows a significantly lower channel 7 signal. This is obviously caused by a much more efficient removal of the OH(ν) vibrational energy in the multi-quantum quenching by collisions with O 2 . As a result, a significantly smaller part of this energy is collected by N 2 (1) and delivered to the CO 2 (ν 3 ) vibrations with the "direct" mechanism, Reaction (R2). To compensate for this OH(ν) decay and keep the transfer of energy to CO 2 (ν 3 ) unchanged, López-Puertas et al. (2004) adjusted new, presumably higher, OH(ν) populations to the SABER 1.6 and 2.0 µm radiances. In our study the higher channel 7 emission is, however, restored when we include Reactions (R4) and (R5) (the "indirect" mechanism of energy transfer from OH(ν) to CO 2 (ν 3 )) in the model, but return Reaction (R2) to its single-quantum mode. In Fig. 2, we show (black curve with diamonds) the channel 7 radiance profile for the SABER scan specified in Sect. 3.1. Comparing the turquoise and red curves with this measurement, one may see that both the "direct" mechanism alone in its three-quantum version and the combination of "indirect" and single-quantum "direct" mechanisms are close to the SABER radiance for this scan. However, to provide this pumping level, the multi-quantum "direct" mechanism needs to be supported by the inefficient single-quantum OH(ν) quenching in Reaction (R6) by collisions with O 2 , which helps maintain a higher population of OH(ν). We also note here that both our violet [(OH-N2 1Q) & (OH-O2 1Q) & Reaction (R7)] and turquoise [(OH-N2 3Q) & (OH-O2 1Q) & Reaction (R7)] curves reproduce the corresponding results in Fig. 10 of López-Puertas et al. (2004), short-dash and solid lines respectively, very well. We also show in Fig. 2 our study of how both the "direct" and "indirect" mechanisms work together when the "direct" process, Reaction (R2), is treated as multi-quantum. The magenta curve in this plot is the result obtained with the model [(OH-N2 3Q) & (OH-O2 MQ) & Reactions (R4), (R5)] when Reaction (R2) is treated as a three-quantum process. This combination of both mechanisms provides high CO 2 (ν 3 ) pumping and subsequently strong channel 7 emission. The latter exceeds the turquoise and red curves by 20-45 % in the altitude range considered and strongly deviates from the measured radiance profile. Two other results of this study are shown only in the right panel of this plot for the signal differences. The light blue curve corresponds to simulations with the [(OH-N2 2Q) & (OH-O2 MQ) & Reactions (R4), (R5)] model when the quantum transfer in Reaction (R2) is reduced from 3 to 2.
The dark green curve is obtained for the case when Reaction (R2) is treated as two-quantum process for highly resonant transitions OH(9) + N 2 (0) → OH(7) + N 2 (2) and OH(2) + N 2 (0) → OH(0) + N 2 (2), and as single-quantum for all other vibrational levels.It is seen that both of these input versions bring the calculated radiance closer to our result for a singlequantum "direct" process Reaction (R2) (red curve One may see that in Fig. 3 the "direct" mechanism alone with three-quantum efficiency for Reaction (R2), as well as both the "direct" (as single-quantum) and "indirect" mechanisms together provide similar results for all four atmospheric models, within a 10 to 30 % difference range.By comparing these models to measured radiances, both calculations are close to the observed signal down to 68 km for MLW and down to 75 km for SAW.For MLS and TROP, the two-mechanism calculations are somewhat closer to measurements than those for "direct" mechanism alone in altitude interval 75-90 km. Comparison of OH vibrational populations with ground-and space-based observations In Fig. 4 we present relative OH(ν) populations calculated using three different sets of rate coefficients discussed in the previous section, which provided comparably high enhancement of the CO 2 (ν 3 ) emission.These calculations are compared with the vibrational populations derived from ground (panel a) and space-based (panel b) observations of OH emissions. Measured populations (black) displayed in panel (a) were recorded by Cosby and Slanger (2007) To achieve a high signal-to-noise ratio, 300 radiance spectra (OH ν = 1 and 2) were collected and averaged.We show in panel (b) of Fig. 4 the OH(ν) population distribution normalized to OH(ν = 9) derived in this study as well as corresponding uncertainties.The three simulated distributions (red, turquoise, and magenta) in this panel are modeled using WACCM inputs taken on 22 November 2000 at latitude 45 • N at local midnight. To simulate the ground-based observations of Cosby and Slanger (2007), panel (a), the calculated relative populations were integrated over the entire altitude range of our model (30-135 km).For panel (b), we have integrated calculated OH(ν) densities of 87 to 105 km as observed by VIRTIS from Migliorini et al. (2015) to simulate mean population distribution obtained in this study. The turquoise profiles in both panels of Fig. 4 represent results obtained with our set of rate coefficients [(OH-N2 3Q) & (OH-O2 1Q) & Reaction (R7)] similar to the one used in López-Puertas et al. (2004), where the authors treated the OH-N 2 reaction with an efficiency increased by a factor of 3, the OH-O 2 reaction as single-quantum, and the OH-O reaction as a "sudden death" quenching or chemical OH removal process (Reaction R7), with ν independent rate coefficient of 2.0 × 10 −10 cm 3 s −1 .In panel (a), the turquoise profile shows higher relative populations compared to measurements for upper vibrational levels ν > 4, whereas in panel (b) this model shows populations within the uncertainty range of measurements for ν > 4. 
For lower vibrational levels ν ≤ 4, the populations calculated with this model are, however, significantly lower than measured ones: by 30 % for ν = 3 in the panel (a) and by up to 85 % for ν = 1 in panel (b).A significantly slower increase in populations calculated with the [(OH-N2 3Q) & (OH-O2 1Q) & Reaction (R7)] model compared to measurements can be explained by the lack of efficient mechanisms redistributing the OH(ν) energy from higher vibrations levels to lower ones.The single-quantum OH-O 2 reaction also allows for more excited OH molecules in the upper vibrational levels relative to a multi-quantum process.Additionally, a slower increase in calculated populations with the ν decreasing compared to measured ones which is seen in both panels, is the effect of the high quenching rate coefficient of the Reaction (R7) for lower vibrational levels for which this reaction dominates over the singlequantum O 2 quenching. The situation is different when our basic model [(OH-N 2 1Q) & (OH-O 2 MQ) & Reactions (R4), (R5)] is applied (red curves).As discussed above in the previous section, this model provides the same level of the CO 2 (ν 3 ) emission pumping as the extreme model of López-Puertas et al. (2004); compare red and turquoise curves in Fig. 2.However, they demonstrate significantly different population distributions.Relative OH population distribution in panel (a) shows our standard model in very good agreement with the results from Cosby and Slanger (2007), falling completely within the variation range of these measurements.Panel (b) also shows excellent agreement between calculations and measurements, where the former lie nearly completely within the measurement error bars for the majority of vibrational levels.In both panels our results reproduce well the steady upward trend in populations from upper to lower vibrational levels.Significantly higher populations of lower OH levels in this model are the result of redistribution of highervibrational-level energy to lower levels due to two dominant multi-quantum quenching mechanisms, namely the new Reaction (R4) and the multi-quantum version of Reaction (R6).We also note that Reaction (R4) uses a lower rate coefficient than Reaction (R7) for quenching the lower vibrational levels ν < 5, which results in maintaining their higher populations. Measured OH(ν = 3) (panel b) was the only population which showed disagreement with our model.Various reasons of increased measured population at ν = 3 are discussed by Migliorini et al. (2015); however, no definitive conclusions were given. Above 90 km atomic oxygen density increases rapidly with the altitude.As a result the role of Reaction (R4) in quenching higher OH vibrational and pumping lower levels increases.This effect is easily seen in panel (b) of Fig. 4, where mean OH(ν) densities for higher altitude region 87-105 km are compared.The turquoise curve (no Reaction R4) in this panel shows lower populations compared to those calculated with Reaction (R4) included. 
The magenta profiles in both panels represent our calculations with the model [(OH-N 2 3Q) & (OH-O 2 MQ) & Reactions (R4), (R5)], which is identical to our standard model except for the Reaction (R2) treated as the three-quantum one.The multi-quantum OH-N 2 VV transfer provides faster quenching of excited OH, hence, a lower overall population of the magenta profiles compared to red profiles.Despite showing reasonable agreement with measurements in both panels, this model caused, however, an excessive increase for the 4.3 µm emissions, as seen in Fig. 2. 3.4 OH 1.6 and 2.0 µm emissions SABER channels 8 (2.0 µm), and 9 (1.6 µm) are dominated by the OH(ν) emission from levels ν = 8-9 and ν = 3-5, respectively.We simulated channel 8 and 9 radiances for four atmospheric models from Table 2. Results are shown in Fig. 3, bottom row, as ratios of volume emission rates for R2), in turquoise.Black curves in this plot display SABER-measured VER ratios, for which VERs were obtained with the Abel inversion procedure, similar to that described by López-Puertas et al. (2004), from the SABER channel 8 and 9 limb radiances for scans listed in Table 2. Comparing red and turquoise profiles in Fig. 3 (bottom row), one may see that our standard model (red) shows significantly lower VER ratios for altitudes 85-100 km than the model of López-Puertas et al. (2004), turquoise.These differences between ratios are a result of very different OH(ν) population distributions (Fig. 4) for each model, which were discussed in the previous section.The channel 8/channel 9 VER ratios reflect these distributions very well since channel 8 is sensitive to the OH(ν) emissions from higher levels 8 and 9, whereas channel 9 records emissions of lower levels 3-5.A significantly higher population of lower vibrational levels in our model (red curves in Fig. 4) explain low VER ratios.In contrast, the model [(OH-N2 3Q) & (OH-O2 1Q) & Reaction (R7)], which underpredicts lower level populations, provides VER ratios which significantly exceed both our model results and measurements for altitudes above 90 km, where [O] density rapidly increases with altitude.This comparison demonstrates the strong impact of Reaction (R2), which provides efficient quenching of higher OH vibrational levels in collisions with O( 3 P) atoms in this altitude region. 4 Conclusions Kumer et al. (1978) first proposed the transfer of vibrational energy from chemically produced OH(ν) in the nighttime mesosphere to the CO 2 (ν 3 ) vibration, OH(ν) ⇒ N 2 (ν) ⇒ CO 2 (ν 3 ).The effect of this "direct" mechanism on the SABER nighttime 4.3 µm emission was studied in detail by López-Puertas et al. (2004), who showed that in order to match observations, an additional enhancement is needed that would be equivalent to the production of 2.8-3 N 2 (1) molecules instead of one molecule for each quenching reaction OH(ν) + N 2 (0).López-Puertas et al. (2004) concluded that the required 30 % efficiency in the OH(ν) + N 2 (0) energy transfer is, in principle, possible, although the mecha-nism(s) whereby the energy is transferred is (are) not currently known. Recently, Sharma et al. (2015) suggested a new efficient "indirect" channel of the OH(ν) energy transfer to the N 2 (ν) vibrations, OH(ν) ⇒ O( 1 D) ⇒ N 2 (ν) and showed that it may provide an additional enhancement of the MLT nighttime 4.3 µm emission.Kalogerakis et al. (2016) provided a definitive laboratory confirmation of new OH(ν) + O vibrational relaxation pathway and measured its rate for OH(ν = 9) + O. 
We included the new "indirect" energy transfer channel in our non-LTE model of the nighttime MLT emissions of CO 2 and OH molecules and studied in detail the impact of "direct" and "indirect" mechanisms on simulated vibrational level populations and radiances.The calculations were compared with (a) the SABER/TIMED nighttime 4.3 µm CO 2 and OH 1.6 and 2.0 µm limb radiances of MLT and (b) with the ground and space observations of the OH(ν) densities in the nighttime mesosphere.We found that new "indirect" channel provides significant enhancement of the 4.3 µm CO 2 emission.This model also produces OH(ν) density distributions which are in good agreement with both SABER limb OH emission measurements and the ground and space observations in the mesosphere.Similarly strong enhancement of 4.3 µm emission can also be achieved with the "direct" mechanism alone assuming a factor of 3 increase in efficiency, as was suggested by López-Puertas et al. (2004).This model does not, however, reproduce either the SABER-measured VER ratios of the OH 1.6 and 2.0 µm channels or the ground and space measurements of the OH(v) densities.This discrepancy is caused by the lack of efficient redistribution of the OH(ν) energy from the higher vibrational levels emitting at 2.0 µm to lower levels emitting at 1.6 µm in the models based on the "direct" mechanism alone.In contrast, this new "indirect" mechanism (Reactions R4 and R5 of Table 1), efficiently removes at least five quanta in each OH(ν) + O( 3 P) collision from high OH vibrational levels.Supported also by the multi-quantum OH(ν) + O 2 quenching (Reaction R6 of Table 1), the new mechanism provides OH(ν) distributions which are in agreement with both measured VER ratios and observed OH(ν) populations. The results of our study suggest that the missing nighttime mechanism of CO 2 (ν 3 ) pumping has finally been identified.This confidence is based on the fact that the new mechanism accounts for most of the discrepancies between measured and calculated 4.3 µm emission for various atmospheric situations, leaving relatively little room for other processes, among them the multi-quantum "direct" mechanism.The accounting for the multi-quantum transfer in reaction OH(v) + N 2 together with the "indirect" mechanism has little influence on the OH(ν) population distributions; however, it can enhance the 4.3 µm emission.Therefore, further laboratory and/or theoretical investigation of this reaction is needed to define its role.Further improvements for the new "indirect" mechanism will require optimizing the set of rate coef-ficients used for OH(ν) relaxation by O( 3 P) and O 2 at mesospheric temperatures and, in particular, understanding the dependence of the "indirect" mechanism on the OH vibrational level.Relevant laboratory measurements and theoretical calculations are sorely needed to understand these relaxation rates and the quantitative details of the applicable mechanistic pathways.Nevertheless, the results presented here clearly demonstrate significant progress in understanding the mechanisms of the nighttime OH and CO 2 emission generation in MLT. 
Figure 2. Simulated SABER channel 7 (4.3 µm) radiances for the measurement conditions of the SABER scan described in Sect. 3.1, compared with the observed radiance profile; the individual model curves are discussed in the text above. The measured populations shown in panel (a) of Fig. 4 were recorded on 3 March 2000 using the echelle spectrograph and imager (ESI) on the Keck II telescope at Mauna Kea (19.8206° N, 155.4681° W). The authors measured emission intensities of the 16 OH Meinel bands, which were converted into OH(ν) column densities and normalized to the column density of OH(ν = 9). Several observations of OH emissions were recorded throughout the night. We display the average column densities as well as their variation ranges for each vibrational level. The three simulated distributions (red, turquoise, and magenta) in this panel are modeled using WACCM inputs taken on 3 March 2000 at latitude 20° N at local midnight. The measured densities displayed in panel (b) of Fig. 4 were taken from Migliorini et al. (2015), who analyzed VIRTIS (Visible and Infrared Thermal Imaging Spectrometer) measurements on board the Rosetta mission. VIRTIS performed two limb scans of the OH Meinel bands from 87 to 105 km, covering the latitude range from 38 to 47° N between 01:30 and 02:00 solar local time in November 2009. Table 1. Significant collisional processes used in the model.
6,659.8
2017-08-18T00:00:00.000
[ "Physics", "Environmental Science" ]
A Method of Multi-license Plate Location in Road Bayonet Image —To solve the problem of locating multiple license plates in road bayonet images, a novel approach is presented that utilizes the plate's color features, geometry characteristics, and gray feature. Firstly, the RGB color image is converted to the HSV color model and the distance to the plate's reference colors is calculated in that color space. Secondly, license plate candidate regions are segmented by binarization and morphological processing. Finally, based on the plate's geometry characteristics and gray feature, the license plate regions are segmented and validated. Within limits, the method does not restrict the plate's type, size, or number, the location of the car, or the background of the picture. It was tested on road bayonet images. Keywords—multi-license plate location; color features; geometry characteristics; gray feature INTRODUCTION At present, intelligent transportation systems commonly use HD intelligent traffic cameras, which cover a wide monitoring range. The system can capture two or three vehicle lanes using HD traffic cameras, which significantly improves efficiency while saving equipment and maintenance costs. A license plate recognition system is mainly composed of license plate location, character segmentation, and character recognition. Among them, license plate location is the premise and foundation of license plate character segmentation and character recognition. There are many methods for license plate location, but most of them aim at a single license plate or at plates in semi-structured environments, such as charging stations or small entrances and exits. This constrains their application, for example by requiring that the plate size and position in the image vary only within a certain range. Jie Guo et al. [1] convert the image from the RGB color space to the HSV color space and then segment the regions which satisfy the color feature of the plate by calculating distance and similarity in color space. For the segmented image, texture and structural features are analyzed to locate the license plate correctly. De-hua Ren [2] proposed a color classification method based on the distance between different colors according to the Chinese car license plate color features, and then segmented the license plate's background-color regions by scanning lines of the picture and analyzing the line segments. Finally, these regions were translated into a binary image in which the license plate's background color was dark and its foreground color was white, and validated by the license plate's gray features. The two methods adapt to the plate's type, size, and number and are not limited by the location of the car or the background in the image. But in the application of multi-license plate location in road bayonet images, as a result of multiple lanes, disturbance from trees and billboards, and dirty or heavily worn license plates, the above methods cannot locate all license plate regions in the image.
This paper addresses the problem of multi-license plate location in road bayonet images. It first calculates the distance according to the plate's color features in the HSV color space and then segments the regions of interest by binarization and morphological processing. Finally, based on the plate's geometry characteristics and gray feature, the license plate regions are segmented and validated. The method combines the license plate's color features, geometry characteristics, and gray feature, which allows it to locate all license plate regions in a road bayonet image. A. Color models Colors are represented in different ways according to the application. The RGB color model is used for displays, TV, and scanner devices; the three basic colors red, green, and blue combine to produce most of the colors the human eye can see. The HSV color model is widely used in video and television broadcasting. H, S, and V respectively represent Hue, Saturation, and Value, which correspond to the color features the human eye can perceive. This color model is represented by the Munsell three-dimensional space coordinate system; because of the psychological independence between the coordinates, changes in the color components can be perceived independently, and because the color is linearly scalable, it is suitable for judgment with the naked eye. At the same time, the HSV model corresponds to a painter's color model, which reflects human perception and discrimination of color and is suitable for similarity comparison of color images [1], so this paper uses the HSV color space to segment the color image. Because images generally use the RGB model, the first step is conversion. The relationship of each component between the HSV color model and the RGB color model is given in [3] (Eq. (3)). Hue is measured in degrees and ranges from 0° to 360°, Saturation values range from 0 to 1, and Value ranges from 0 to 1. B. The distance in color space The license plate background and character colors in China have fixed pairings, mainly a blue background with white characters, a yellow background with black characters, a white background with black or red characters, and a black background with white characters. The three components R, G, and B with values equal to 0 or 255 form eight basic colors [3]; this paper selects the four base colors associated with plate backgrounds: blue, yellow, white, and black. The RGB values of the four base colors and the corresponding HSV values are shown in Table 1. In the HSV color space, the distance between two colors is computed from their (H, S, V) components (Eq. (4)). Calculating the distance from each pixel of the HSV image to each of the four base colors yields a feature image based on color features. The smaller the value of a pixel in the feature image, the closer the color of that pixel in RGB space is to the base color.
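Since the conversion and distance equations (Eqs. (3) and (4)) did not survive extraction, the sketch below illustrates one common way to build such a feature image in Python with OpenCV. The cone-mapped Euclidean distance is an assumption chosen for illustration and is not necessarily the exact metric of Eq. (4); the function and variable names are ours.

import cv2
import numpy as np

def hsv_feature_image(bgr_img, base_bgr):
    # Convert the input image and the reference colour to HSV.
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV).astype(np.float32)
    base = cv2.cvtColor(np.uint8([[base_bgr]]), cv2.COLOR_BGR2HSV)[0, 0].astype(np.float32)

    def to_cone(h, s, v):
        # Map (H, S, V) onto a cone so that hue wraps around correctly.
        ang = h * np.pi / 90.0            # OpenCV hue runs 0..179 (degrees / 2)
        sv = (s / 255.0) * (v / 255.0)
        return sv * np.cos(ang), sv * np.sin(ang), v / 255.0

    x, y, z = to_cone(hsv[..., 0], hsv[..., 1], hsv[..., 2])
    bx, by, bz = to_cone(*base)
    # Smaller values mean the pixel colour is closer to the base colour.
    return np.sqrt((x - bx) ** 2 + (y - by) ** 2 + (z - bz) ** 2)

# Example: distance image for the blue plate background (OpenCV uses BGR order).
# feature_blue = hsv_feature_image(img, (255, 0, 0))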
C. Binarization of the feature image In order to further separate the license plate from the complex background, the feature image must be converted into a binary image. Before binarization, an appropriate threshold T must be found. Usually Otsu's method is the ideal way to obtain the threshold, but in this paper, because the license plate regions occupy a small proportion of the image and large blue or yellow interference regions may be present, the binary image obtained from the Otsu threshold usually contains a lot of unrelated information. It was found in the course of the experiments that the total area of the license plate regions in the image lies within a certain range and that the value of the license plate area in the feature image is small. When the threshold obtained from Otsu's method is decreased, the number of white pixels in the binary image decreases according to certain rules. The following are the specific steps to obtain an appropriate threshold T: 1) Calculate the number of white pixels in the binary image obtained with the Otsu threshold. 2) Decrease the threshold obtained from Otsu's method, then calculate the reduction in the number of white pixels and the total number of white pixels in the binary image produced by the reduced threshold. 3) If the reduction or the total number satisfies the rules obtained by experimental tuning, T is set to the reduced threshold; otherwise return to step 2). The binary image converted from the feature image with the appropriate threshold T is shown in Fig. 1(b). D. Morphology operation Although the binarization has filtered out most background information of the image, there is still some noise and vehicle information. Therefore, mathematical morphology close and open operations are used to merge the possible license plate regions into rectangular connected regions. The selection of the structure element is associated with the size of the license plate in the road bayonet image, so the close-operation structure element is chosen according to whether the image covers three or four lanes. In order to remove isolated points and smooth edges, a small open-operation structure element is used. The morphological image is shown in Fig. 2. E. Connected component labeling After the morphology operations, independent connected domains are obtained by 8-connected component labeling; the results and their number N are saved. Each connected domain is treated as a candidate license plate region. A. Geometry characteristics of the license plate A license plate has obvious geometry characteristics: the width and height of the plate are fixed and the ratio of width to height lies in a certain range. In China, the width-to-height ratio of a small car license plate and that of a large car plate each take fixed values. Considering that the road bayonet image is obtained from a fixed traffic camera, the license plate size in the image lies in a certain range related to the size of the image. Therefore three characteristic values of each candidate license plate region are calculated: size, ratio, and filling. B. Location of candidate license plates Each connected domain is treated as a candidate license plate region, so the size of the region is the area of the connected domain and the ratio is that of its minimum enclosing rectangle. The filling degree P measures how completely the connected domain fills its minimum enclosing rectangle. The following are the specific steps to locate license plates (a compact code sketch of these location steps is given after the figure and table captions below): 1) Set a size range obtained by experiment; connected domains which satisfy the range pass to the next step. 2) Considering shooting angle and tilt, set the ratio range from 2 to 6.
3) The closer P is to 1, the closer the connected domain is to a license plate. Considering that part of the plate information may be deleted by the binarization and morphology operations, set the P range from 0.6 to 1. 4) Segment the regions satisfying all of the conditions mentioned above from the original image, and remove the satisfied regions and small regions from the morphological image. C. Secondary location of candidate license plates The regions remaining in the morphological image after the first location include large-area regions (containing a license plate or not) and regions disturbed by solitary lines or vehicle information. A secondary location of candidate regions is therefore used to locate the remaining license plates in the image. The specific steps are: 1) Obtain the coordinates of the current connected domain and segment the corresponding region in the feature image. 2) Decrease the threshold T and obtain the candidate region through binarization and morphology operations. 3) Segment the regions which satisfy the conditions mentioned in the location of candidate license plates from the original image. The result of the location of candidate license plates is shown in Fig. 3 and the license plates segmented from the original image are shown in Fig. 4. There are two false results. D. Validation of license plates The license plates segmented from the original image may include some false plates, so the results are validated based on the gray feature of the license plate. The specific steps are: 1) Obtain the binary image of the color license plate segmented from the original image by Otsu's method. 2) Take the middle 80% of the binary image to remove the interference of the border in the vertical projection. 3) Count the number of transitions between characters and background, and remove results which do not satisfy the gray feature. The process of validation of the license plate is shown in Fig. 5. IV. EXPERIMENT RESULTS Considering that the size of a road bayonet image is 4912 × 3264, the image is first compressed to 1228 × 816. V. CONCLUSIONS This paper addressed the problem of multi-license plate location in road bayonet images and proposed a method that combines the color features, geometry characteristics, and gray feature of the license plate. Firstly, based on the color features of the license plate, most background information is filtered out by calculating the distance in the HSV color space and binarizing. The candidate license plate regions are obtained through morphology operations, and then, based on the geometry characteristics of size, ratio, and filling, all license plates in the image are located with the help of the secondary location. Finally, the segmentation results are validated based on the gray feature of the license plate and false results are removed. This method can locate license plates of different positions, sizes, numbers, and directions in road bayonet images, and therefore shows good adaptability. Fig. 1. The original image and the binary image. Fig. 3. The results of location in the morphological image (red lines marking the license plate regions). Fig. 4. The results of segmentation in the original image. Fig. 5. The process of validation of the license plate: (a)(1) is the correct color license plate image, (a)(2) is its binary image, (a)(3) is its vertical projection; (b)(1) is the false plate image, (b)(2) is its binary image, (b)(3) is its vertical projection. TABLE I. RGB AND HSV VALUES OF FOUR BASE COLORS
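The location steps referenced above (Sects. II-C through III-B) can be summarized in the following sketch. The threshold-reduction rule, kernel sizes, and area range are placeholder assumptions, since the paper's exact values were not preserved in the text; the function and variable names are ours.

import cv2
import numpy as np

def locate_candidates(feature_img, area_range=(300, 6000),
                      ratio_range=(2.0, 6.0), fill_range=(0.6, 1.0)):
    # Sect. II-C: start from Otsu's threshold on the normalized feature image
    # and lower it until the white-pixel count satisfies an empirical rule.
    norm = cv2.normalize(feature_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    t, _ = cv2.threshold(norm, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = np.where(norm < t, 255, 0).astype(np.uint8)
    while t > 5 and binary.sum() // 255 > 0.02 * binary.size:   # placeholder rule
        t -= 5
        binary = np.where(norm < t, 255, 0).astype(np.uint8)
    # Sect. II-D: close then open to merge plate characters into rectangular blobs.
    close_k = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 5))  # assumed sizes
    open_k = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    blobs = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, close_k)
    blobs = cv2.morphologyEx(blobs, cv2.MORPH_OPEN, open_k)
    # Sects. II-E and III-B: 8-connected labeling, then size / ratio / filling tests.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(blobs, connectivity=8)
    candidates = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        ratio = w / float(h)
        filling = area / float(w * h)            # "P" in the paper
        if (area_range[0] <= area <= area_range[1]
                and ratio_range[0] <= ratio <= ratio_range[1]
                and fill_range[0] <= filling <= fill_range[1]):
            candidates.append((x, y, w, h))
    return candidates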
2,913.4
2015-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
The JCMT Transient Survey: Six Year Summary of 450/850 μm Protostellar Variability and Calibration Pipeline Version 2.0 The James Clerk Maxwell Telescope (JCMT) Transient Survey has been monitoring eight Gould Belt low-mass star-forming regions since 2015 December and six somewhat more distant intermediate-mass star-forming regions since 2020 February with the Submillimeter Common User Bolometer Array 2 on board JCMT at 450 and 850 μm and with an approximately monthly cadence. We introduce our pipeline v2 relative calibration procedures for image alignment and flux calibration across epochs, improving on our previous pipeline v1 by decreasing measurement uncertainties and providing additional robustness. These new techniques work at both 850 and 450 μm, where version 1 only allowed investigation of the 850 μm data. Pipeline v2 achieves better than 0.″5 relative image alignment, less than a tenth of the submillimeter beam widths. The version 2 relative flux calibration is found to be 1% at 850 μm and <5% at 450 μm. The improvement in the calibration is demonstrated by comparing the two pipelines over the first 4 yr of the survey and recovering additional robust variables with version 2. Using the full 6 yr of the Gould Belt survey, the number of robust variables increases by 50%, and at 450 μm we identify four robust variables, all of which are also robust at 850 μm. The multiwavelength light curves for these sources are investigated and found to be consistent with the variability being due to dust heating within the envelope in response to accretion luminosity changes from the central source. Introduction The advent of sensitive large field-of-view detectors has launched an era of time-domain astronomy with (sub)millimeter single-dish telescopes.These data sets have been used to search for and characterize transient events, such as flares from stars (Mairs et al. 2019;Guns et al. 2021;Naess et al. 2021;Johnstone et al. 2022) and relativistic jets (Fuhrmann et al. 2014;Subroweit et al. 2017;Tetarenko et al. 2017), as well as to monitor variability within nearby Galactic star-forming regions associated with the stellar mass assembly process (Johnstone et al. 2018;Park et al. 2019;Lee et al. 2021).Over the next decade, (sub) millimeter time-domain astronomy is anticipated to play an ever more important role as an essential science mode for FYST (CCAT-Prime Collaboration et al. 2023), AtLAST (Ramasawmy et al. 2022), CMB-S4 (Abazajian et al. 2019), and at slightly shorter wavelengths for potential space-based far-infrared missions (André et al. 2019;Fischer et al. 2019). Despite the significant advances in monitoring capabilities, calibration of (sub)millimeter observations, especially from the ground, remains challenging.Mairs et al. (2021) analyzed over a decade of James Clerk Maxwell Telescope (JCMT) Submillimeter Common User Bolometer Array 2 (SCUBA-2) continuum imager observations and concluded that the peak flux uncertainty at 850 μm after observatory-based calibrations is 7%, while at 450 μm the uncertainty rises to 17%. Notwithstanding the dynamic range and sensitivity of the modern large format (sub)millimeter detectors, these calibration uncertainties dominate observations of all but the faintest targets. To overcome this complication, the JCMT Transient Survey (Herczeg et al. 2017) developed a relative calibration scheme (Mairs et al. 2017a(Mairs et al. 
, 2017b) ) for 850 μm SCUBA-2 observations that significantly improves, by a factor of 3, the default calibration provided by the observatory.This enhanced automated reduction and relative calibration pipeline has allowed the team to identify many years-long secular protostellar variables within the Gould Belt (upward of 30% of the monitored sample; Johnstone et al. 2018;Lee et al. 2021) to examine closely the protostellar variable EC 53 in Serpens (also known as V371 Ser) undergoing episodic accretion events with an 18 month period (Yoo et al. 2017;Lee et al. 2020;Francis et al. 2022), and to investigate the months-long accretion burst associated with the deeply embedded protostar HOPS 373 in Orion B/NGC 2068 (Yoon et al. 2022).Combined, these variability results obtained in the submillimeter and for optically enshrouded protostars have enhanced our understanding of the stellar mass assembly process (Fischer et al. 2022). The JCMT Transient Survey began as a 3 yr Large Program monitoring eight nearby Gould Belt star-forming regions in 2015 December and has been continually extended without interruption through at least 2024 January.In 2020 February, six slightly more distant intermediate-mass star-forming regions were added to the monitoring observations.Given the large accumulation of time-domain data by this program, we revisit the alignment and calibration strategy of the JCMT Transient Survey to ensure the best quality measurements.The new calibration procedure introduced here also allows for relative calibration of the SCUBA-2 450 μm data sets.Section 2 introduces the observations and the standard data reduction used to make the observatory-calibrated maps at each epoch.Section 3 presents the new image alignment procedure, while Section 4 details the updated relative flux calibration and includes an independent consistency check on its precision.Section 5 presents the coadded images of the eight Gould Belt regions at 850 and 450 μm.We make these deep images publicly available to the astronomical community as part of this paper.We then present a reanalysis of source variability within the Gould Belt regions using the updated data sets in Section 6 and summarize the paper in Section 7. Observations and Data Reduction The JCMT Transient Survey (Herczeg et al. 2017) is a James Clerk Maxwell Telescope Large Program (project codes: M16AL001 and M20AL007) dedicated to monitoring the evolution of mass accretion in galactic star-forming regions.The survey employs SCUBA-2 (Holland et al. 2013), a workhorse continuum instrument operating simultaneously at 450 and 850 μm with beam sizes of 9 6 and 14 1, respectively.Since 2015 December, circular fields of ~¢ 30 usable diameter have been obtained approximately monthly, targeting eight star-forming regions in the Gould Belt: IC 348, NGC 1333, NGC 2024, NGC 2068, OMC 2/3, Ophiuchus Core, Serpens Main, and Serpens South (see Herczeg et al. 2017 for details).Combined, these fields contain more than 300 Class 0/I/Flat and more than 1400 Class II young stellar objects (YSOs) as identified by the Spitzer Space Telescope (Megeath et al. 2012;Stutz et al. 2013;Dunham et al. 2015).All data obtained since the beginning of the survey (2015 December 22) to 2022 March 1 are included in this work (the observations are summarized in Appendix A). In 2020 February, the survey was expanded to monitor six additional fields toward regions of intermediate-/high-mass star formation: three fields in DR21 (north: W75N, central: DR21(OH), south: DR23; see Schneider et al. 
2010), M17, M17 SWex (Povich et al. 2016), and S255 (Chavarría et al. 2008).These fields are at distances less than 2 kpc, at which the JCMT beam at 450 and 850 μm are still capable of resolving sub-parsec dust condensations.Previously, these fields have been found to host variable young stars showing evidence of accretion outbursts (Liu et al. 2018;Park et al. 2019;Chen et al. 2021;Wenner et al. 2022).These fields are representative of intermediate-to high-mass star formation regions, in which the earliest high-mass protostars are still in the making.The submillimeter monitoring observations for these fields should, therefore, shed light on the high-mass star-forming process.As they have only recently been added to the survey, these fields have significantly fewer observations than the original eight fields, requiring a separate series of publications for the evaluation of variability.For completeness, a summary of the observations of these six new fields is included in Appendix B. The exposure time of each JCMT Transient Survey region observation is adjusted based on the amount of atmospheric precipitable water vapor observed along the line of sight to ensure a consistent background rms noise of ∼12 mJy beam −1 at 850 μm from epoch to epoch.The atmospheric absorption is much more severe in the 450 μm transmission band, and the data are more susceptible to atmospheric variability than their 850 μm counterparts.Typical rms noise measurements at 450 μm, therefore, vary over an order of magnitude, between 100 and 1000 mJy beam −1 , for observations that simultaneously yield an uncertainty of ∼12 mJy beam −1 noise at 850 μm (see Section 4.2).Mairs et al. (2017b) describe in detail the data reduction procedures used to construct individual JCMT Transient Survey images.This work continues to make use of the configuration labeled as R3, focusing on the recovery of compact, peaked sources at the expense of accurate extended-emission recovery on scales larger than 200″.Briefly, the MAKEMAP procedure (Chapin et al. 2013), part of the STARLINK software suite's (Currie et al. 2014) SMURF package, is used to iteratively reduce the raw SCUBA-2 data and construct images.In order to well sample the beam, 450 μm maps are gridded with 2″ pixels, while 850 μm maps are comprised of 3″ pixels.A Gaussian smoothing is then performed on each image using a full width at half-maximum equivalent to the angular size of 2 pixels in order to mitigate pixelto-pixel noise, yielding more reliable peak flux measurements.Mairs et al. (2017b) also developed the first version of our procedures for the relative image alignment and flux calibration required to bring individual epochs of the same region into agreement, denoted pipeline v1, or the point-source method.In this work (pipeline v2), we revisit these tasks in order to produce more accurate and robust relative astrometry and calibration.Table 7 in Appendix A summarizes the dates, scan numbers, and rms map noise at both wavelengths, along with the alignment and calibration parameters for all Gould Belt observations.Also, to enhance the value of these JCMT observations, deep coadds of the eight Gould Belt regions are released along with this paper (Section 5). Image Alignment The JCMT has an inherent pointing uncertainty of 2″-4″ (see Mairs et al. 
2017b and Figure 1). This pointing offset is the same at both 450 and 850 μm because both focal planes have the same field of view on the sky, and observations are carried out simultaneously by means of a dichroic beam splitter (Chapin et al. 2013). Relative image alignment can, therefore, be achieved by determining positional offsets between epochs using only the 850 μm images, for which the background measurement noise is consistent over time (see Section 2). As part of the JCMT Transient Survey data reduction process for relative alignment, the original point-source method (Mairs et al. 2017b) tracked a set of bright, compact, peaked sources identified in each epoch and compared their measured peak positions to the first image obtained for that region. For each additional epoch, the calculated average offsets in both R.A. and decl. over this set of sources were used to correct the newly obtained data to a fixed grid (Mairs et al. 2017b). While this method produced maps in relative alignment to within a factor of ∼1″, the technique requires at least several bright, pointlike sources to be present throughout the map, and the derived offsets are subject to Gaussian fitting uncertainties introduced by pixel-to-pixel noise. Furthermore, this method did not attempt to remove the pointing uncertainty of the first epoch. A more robust pipeline v2 algorithm has been developed and is now included in the updated data reduction pipeline. This new technique cross correlates the reconstructed images associated with each epoch, prior to Gaussian smoothing, to determine the relative alignment offsets and estimates the absolute pointing from the statistics of the individual pointings, assuming that there is no systematic pointing offset at the telescope. In detail, the new image alignment procedure defines a nominally 20′ × 20′ subfield centered on the middle of a given map taken at epoch i, Map_i, and cross correlates the information contained within against the same subfield extracted from the first observation of the given region, Map_0. The method employs SCIPY's 2D discrete Fourier transform algorithm, FFT2 (Virtanen et al. 2020): XC = F^(-1)[F(Map_i) · F(Map_0)*], where XC is the computed cross-correlation function, F represents the 2D discrete Fourier transform, F^(-1) its inverse, and * represents the complex conjugate. Map_i and Map_0 contain the same structured emission at slightly different positions, tracing the inherent pointing uncertainty between the epochs. Meanwhile, the pixelated random measurement noise is uncorrelated between epochs and, therefore, does not produce a localized peak in XC. The epoch offset is measured by fitting a Gaussian function to XC with a Levenberg-Marquardt least squares algorithm. The necessary offsets that must be applied to Map_i in order to bring it into relative alignment with Map_0 are derived by measuring the misalignment of the peak of the cross-correlation function from the center in both the R.A. and decl. directions. In the final step, the central coordinates of Map_i are also uniformly shifted to the median R.A. and decl. measured over the unaligned maps observed on or before 2021 June 15, as a reasonable estimate of the true astrometry, assuming pointing errors over many epochs are unbiased. The original point-source method continues to be used as a consistency check for this more robust pipeline v2 algorithm.
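A minimal Python sketch of this cross-correlation offset measurement is given below, assuming the two subfields are already on a common pixel grid. The NaN handling, the fixed pixel scale, and the omission of the final Gaussian refinement are simplifications of the pipeline, and the function name is ours.

import numpy as np
from scipy.fft import fft2, ifft2

def xcorr_offset(map_i, map_0, pixel_arcsec=3.0):
    # Replace blanked pixels so the transforms are well defined.
    a = np.nan_to_num(map_i)
    b = np.nan_to_num(map_0)
    # XC = F^-1[ F(Map_i) x conj(F(Map_0)) ]: emission common to both epochs
    # produces a localized peak; uncorrelated noise does not.
    xc = np.fft.fftshift(np.real(ifft2(fft2(a) * np.conj(fft2(b)))))
    # Peak position relative to the array centre gives the misalignment; the
    # pipeline refines this with a Levenberg-Marquardt Gaussian fit.
    cy, cx = np.asarray(xc.shape) // 2
    py, px = np.unravel_index(np.argmax(xc), xc.shape)
    return (px - cx) * pixel_arcsec, (py - cy) * pixel_arcsec  # pixel offsets in arcsec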
The left panel of Figure 1 shows the original telescope pointing offsets with respect to the derived true astrometry toward Orion OMC 2/3, as measured by the XC method, for all epochs (gray squares), the residual offsets measured after the cross-correlation alignment corrections (pipeline v2) have been applied (blue circles), and the residual point-source method alignment consistency checks after the cross-correlation alignment corrections have been applied (yellow triangles).The right panel of Figure 1 shows a zoomed-in comparison between the residual offsets measured by each method.There is excellent agreement between the new and old techniques.The standard deviation of the residual cross-correlation offsets suggests the maps are self-consistently aligned to better than 0 2. The point-source verification suggests the alignment uncertainty is better than ∼0 5, though the inherent uncertainty using the point-source method is larger than for the crosscorrelation method.Table 1 shows the alignment results for each of the eight Transient Survey Gould Belt regions.The ΔR.A. and Δdecl.values are derived by calculating the standard deviation of the measured map offsets from the median (defined to be 0, 0).The corrected residual offsets in the table refer to the point-source alignment verification (yellow triangles in Figure 1), which provides a more conservative measurement of the corrected pointing uncertainty than the cross-correlation self-consistency check.Derived pointing offsets for each Gould Belt observation are given in Table 7 in Appendix A. Relative Flux Calibration Both the 450 and 850 μm images are flux calibrated in a relative sense over time by measuring peak fluxes of bright, compact submillimeter emission sources that were originally cataloged in coadded images of each target field.The FELL- WALKER (Berry 2015) source identification algorithm was used to find the locations of compact, peaked sources; it is part of STARLINKʼs (Currie et al. 2014) CUPID (Berry et al. 2013) software package.Sources below brightness thresholds of 1000 mJy beam −1 at 450 μm and 100 mJy beam −1 at 850 μm are excluded from this final source calibration position catalog.Furthermore, sources that were previously identified as known submillimeter variables by Mairs et al. (2017a), Johnstone et al. (2018), and Lee et al. (2021) are removed. The peak flux of each source in a given catalog is measured in each observed epoch to construct initial light curves for all identified objects.Prior to measuring peak flux, the maps used are Gaussian smoothed by 2 pixels to better match the beam and are relatively aligned to much better than the scale of a single pixel (see Sections 2 and 3).The peak flux in each epoch is then accurately measured at a fixed pixel location without requiring a Gaussian fit for each epoch. The fiducial (expected) standard deviation, SD fid , in the light curve of a given source, i, has the form (Johnstone et al. 2018 where n rms is the typical rms noise measured across the epochs (dominating faint sources), σ FCF is the expected relative flux calibration uncertainty that can be achieved by the algorithms described below (dominating bright sources), and f m (i) is the mean peak flux of source i.At 850 μm and using the original pipeline v1 calibration approach, Johnstone et al. (2018) found n rms = 12 mJy beam −1 and σ FCF = 0.02 (that is, 2%). 
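The fiducial standard deviation equation itself was lost in extraction. Consistent with the limiting behaviour described above (noise-dominated for faint sources, calibration-dominated for bright sources), the Johnstone et al. (2018) form can be written as the quadrature sum below; this is our reconstruction rather than a verbatim copy of the original equation.

\mathrm{SD}_{\mathrm{fid}}(i) \;=\; \sqrt{\, n_{\mathrm{rms}}^{2} \;+\; \big(\sigma_{\mathrm{FCF}}\, f_{m}(i)\big)^{2} \,}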
Iteratively Weighted Source Calibration The pipeline v2 relative flux calibration proceeds as follows. Once the initial light curves for all sources in a given region are constructed, an iterative algorithm is used to weight each source either as a favorable or unfavorable calibrator source based on the robustness of each observed map and the constancy of the source light curve over time. With this information in hand, the full set of calibration sources is used to derive a relative flux calibration factor (relative FCF; R FCF ) for each epoch. The R FCF is a multiplicative constant with which to multiply a given epoch in order to bring the map into agreement with the weighted mean flux brightnesses over all sources, as measured in the coadded image of all data obtained on or before 2021 April 10. The details of the algorithm are as follows: 1. Initial R FCF and their associated uncertainties, σ FCF , are set to be 1% and 2%, respectively, for each epoch and each wavelength. Using these initial estimates, an iterative process begins. 2. For each source, excluding known variables, the weighted mean flux, f̄ source , is estimated. 3. If the measured uncertainty for a source falls below its fiducial value (Equation (2) for 850 μm and 5% of the mean flux at 450 μm), the fiducial value is adopted in order to prevent a runaway effect wherein the brightest, most stable sources obtain outsized weights. 4. Source flux thresholds of 1000 mJy beam −1 at 450 μm and 100 mJy beam −1 at 850 μm are then applied to excise sources that are too faint to provide a meaningful contribution to the overall flux calibration. 5. The R FCF and formal uncertainty for each individual epoch are then determined by applying a source-weighted linear least squares fit between the peak flux measurements in that observation and their weighted mean values calculated over all observations taken before UTC 2021 April 10. The intercept is fixed at the origin, and the calculated slope and its standard deviation yield the best-fit R FCF and its formal uncertainty, σ FCF,formal . In the equations, s represents a given source and e represents a given epoch. 6. The R FCF uncertainties (σ FCF ) are estimated by calculating the standard deviation in source fluxes measured within the given epoch normalized to their respective mean fluxes across all epochs. Similarly to Step 3, an empirical analysis yielded a minimum threshold for σ FCF of 70% of the formal R FCF uncertainty to prevent runaway events. Therefore, if σ FCF becomes less than 70% of the formal R FCF uncertainty, σ FCF is set to 0.7 × σ FCF,formal . 7. Using the newly calculated relative FCFs, R FCF , and their associated uncertainties, σ FCF , repeat Steps 2-6 until convergence (achieved in at most five iterations). Over time, if a new variable source emerges, it will be automatically downweighted in the calibration and identified through its change in fractional weight as in the left panel of Figure 2 or through the calibration consistency check, described below in Section 4.3.
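For concreteness, the core of Steps 2 and 5 can be sketched as follows, assuming the peak fluxes are held in a (sources × epochs) array; the weighting scheme, thresholds, and iteration control of the actual pipeline are simplified, and all names are illustrative.

import numpy as np

def relative_fcf(fluxes, sigma_fcf, n_rms=12.0):
    # fluxes: (n_sources, n_epochs) peak fluxes [mJy/beam] for the calibrator pool.
    # sigma_fcf: (n_epochs,) current per-epoch relative calibration uncertainties.
    err = np.sqrt(n_rms ** 2 + (sigma_fcf[None, :] * fluxes) ** 2)
    w = 1.0 / err ** 2
    # Step 2: weighted mean flux of each source over all epochs.
    f_mean = np.sum(w * fluxes, axis=1) / np.sum(w, axis=1)
    # Step 5: per-epoch weighted least-squares slope through the origin,
    # relating that epoch's peak fluxes to the source mean fluxes.
    r_fcf = np.empty(fluxes.shape[1])
    for e in range(fluxes.shape[1]):
        we = w[:, e]
        r_fcf[e] = np.sum(we * f_mean * fluxes[:, e]) / np.sum(we * fluxes[:, e] ** 2)
    return f_mean, r_fcf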
If all sources have the same fractional uncertainty, then they each contribute equal weight. For nonvarying bright sources, this is expected (see Equation (2)), as the calibration dominates the peak flux uncertainty. Alternatively, for faint sources dominated by constant noise, the relative weighting varies directly with the source brightness. Occasionally, sources have additional measurement uncertainties, due to their location on the map, neighbor crowding, or potential variability that has not yet been confirmed (and hence the source has not yet been removed from the list). Due to the increased uncertainty, these sources are automatically downweighted by the algorithm (see point 7 above). All known nonvarying protostars are retained in the pool of calibrators, provided the submillimeter flux is above the thresholds. In this way, deviations in the calibration results over time are used to identify newly varying YSOs and previously undiscovered YSOs, such as deeply embedded protostars that were historically too faint to be included in previous surveys.

Taking the OMC 2/3 region as an example, in the left panel of Figure 2, the upper envelope follows the expected weighting relation, with a few sources lying lower due to higher-than-expected uncertainties in their fluxes, signaling potential variability or difficulty in measuring a reliable peak flux value. The right panel of Figure 2 plots the resultant R_FCF required for each individual epoch. The upward trend shown in the right panel is seen across all Gould Belt regions and corresponds to systematic changes in the JCMT flux throughput since 2015, which changed the standard FCFs published by the observatory, described in detail by Mairs et al. (2021). These changes were (1) a filter stack replacement in 2016 November and (2) secondary mirror unit maintenance that improved the beam profiles in 2018 June. The flux conversion factors presented in Mairs et al. (2021) are reciprocals of those derived in this work and thus decrease over time, while the R_FCF presented here appears to increase. The magnitude of the standard-FCF changes agrees in both of these studies. The derived R_FCF values for each observation are given in Table 7 in Appendix A.

Selecting Reliable 450 μm Data

As previously discussed, the 850 μm observations obtained by the JCMT Transient Survey have a consistent background rms noise of ∼12 mJy beam−1. The 450 μm data, however, are much more sensitive to changes in atmospheric water vapor and airmass, causing more than an order of magnitude of spread in background rms values over the duration of the survey (see Figures 3 and 4). Therefore, in conditions of poor atmospheric transmission, even the brightest sources in 450 μm maps will be overwhelmed by the noise, and no reliable peak flux measurements can be obtained. In order to identify the usable 450 μm maps, in the right panel of Figure 4 we plot a proxy for the atmospheric transmission, the opacity of the atmosphere measured at 225 GHz (τ225) multiplied by the airmass of the observation, against the calculated relative-FCF uncertainty (σ_FCF) for each epoch, and identify a box within which the brightest 450 μm peak flux values have a sufficient signal-to-noise ratio (S/N) to return an accurate FCF. The 450 μm map is defined as usable if

1. τ225 × airmass < 0.12, and
2. the relative flux calibration uncertainty is < 5%.

Relative Flux Calibration Consistency Check

An independent relative flux calibration algorithm is employed to verify the consistency of the weighted peak flux calibration scheme described above. The method is based on the original point-source method described by Mairs et al.
(2017b), which requires identifying families of bright, reliable (nonvarying) compact sources and constructing light curves (flux versus time). The normalized light curves of the nonvarying family sources trace the inherent JCMT calibration uncertainty from epoch to epoch, i.e., if a given map is brighter than the mean, the average deviation from the mean of the family source peak fluxes will yield a flux correction factor for that epoch. The standard deviation in normalized peak fluxes represents the uncertainty. These results are not used to perform calibration of the data sets. This method is used only as a confirmation of the precision of the weighted peak flux calibration scheme.

This verification step does not select reliable 450 μm maps in the same manner as described above. Instead, 450 μm maps are individually selected based on the theoretical R_FCF uncertainty that might be achieved given the brightness of the sources present in the map that could constitute a family, along with the rms background noise of the map itself. At 450 μm, the target R_FCF uncertainty is defined to be 5% to match the weighted flux calibration threshold. The theoretical relative flux calibration uncertainty (σ_FCF,theory) produced by a given family of calibrator sources is calculated by

σ_FCF,theory = rms / (sqrt(N) × f),    (5)

where f is the peak flux of the faintest source considered, in mJy beam−1, N is the number of sources with peak fluxes greater than or equal to f, and rms is the background noise of a given map, in millijansky per beam. Therefore, in order to achieve a relative flux calibration uncertainty better than ∼5%, the rms background noise threshold for whether to consider an individual epoch as reliable is

rms < 0.05 × sqrt(N) × f.    (6)

To identify the trustworthy families of sources for each region, the same brightness thresholds for source consideration as in the weighted flux calibration scheme described above, 1000 mJy beam−1 at 450 μm and 100 mJy beam−1 at 850 μm, are initially employed. Unlike in the weighted calibration method, however, not all of these sources will be used to verify the map calibration; they are simply used as a pool of potential calibrator sources. These potential calibrators are arranged in order of peak flux, and known variables identified by Mairs et al. (2017a), Johnstone et al. (2018), and Lee et al. (2021) are removed. Beginning with the brightest source and sequentially including fainter sources one by one (excluding known variables), ever larger potential families are defined. For each potential family, Equation (6) is used to determine which maps achieve the rms threshold to produce a theoretical R_FCF uncertainty of 5%. At 850 μm, all epochs are included in the analysis. At 450 μm, maps obtained in poor atmospheric transmission conditions are excluded. The set of reliable maps in each region generally matches those identified using the method described in Section 4.2, with a few exceptions (see Appendix A).

At both wavelengths, the potential calibrator groups are then narrowed down by optimizing three key parameters: (1) the brightest set of sources that (2) return the lowest σ_FCF using (3) the largest number of maps. The final point applies specifically to 450 μm data. For more detail regarding the trade-off between a higher number of sources and a lower σ_FCF, see Section 4.2 of Mairs et al. (2017b).
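A minimal sketch of the map-selection arithmetic above, in Python. The first two functions encode the reconstructed forms of Equations (5) and (6) given above and should be checked against Mairs et al. (2017b); the third simply applies the two Section 4.2 criteria. All function and parameter names are illustrative.

import numpy as np

def sigma_fcf_theory(rms, faintest_flux, n_sources):
    # Theoretical relative-FCF uncertainty for a family of n_sources
    # calibrators whose faintest member has peak flux faintest_flux.
    return rms / (np.sqrt(n_sources) * faintest_flux)

def rms_threshold(faintest_flux, n_sources, target=0.05):
    # Largest map rms for which the family can still deliver the
    # target (~5%) relative calibration uncertainty (Equation (6)).
    return target * np.sqrt(n_sources) * faintest_flux

def usable_450(tau_225, airmass, sigma_fcf):
    # Section 4.2 criteria for keeping a 450 um map.
    return (tau_225 * airmass < 0.12) and (sigma_fcf < 0.05)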
To test the σ_FCF,theory values, the light curves of each potential calibrator group member (in the narrowed-down subset) were constructed using only the maps that met the rms threshold criteria described above for the faintest source in the group. As for the iteratively weighted calibration method, the light-curve fluxes are measured at the known peak pixel location of each source at each aligned epoch. The light curves are then normalized by dividing each flux measurement by the median for that source across the fixed set of observations.

To select the final calibrator family (the group of sources that will be used to relatively flux calibrate the data; see Mairs et al. 2017b), the sources in the narrowed-down potential calibrator group are paired with one another in every possible combination for a given family size, from two sources to the number of sources in the group. For each pair of sources in each family size, the normalized fluxes are compared to one another, epoch by epoch. The standard deviation of this normalized flux ratio is an indication of how well the sources agree with one another as they track the inherent JCMT flux calibration uncertainty from epoch to epoch. Higher standard deviation values indicate the sources are in less agreement. The calibrator family is selected by optimizing the group size against the minimum standard deviation of each group. Finally, in each epoch, this secondary R_FCF, taken as the average normalized flux value among the sources comprising the optimized family, is used to verify the main weighted calibration method. The normalization is calculated over the same time period as for the pipeline v2 data, using data taken prior to 2021 April 10 UTC, such that the solutions are fixed for future observations.

Figures 5 and 6 summarize the results of the full relative flux calibration procedure. The left panels show the distribution of all σ_FCF values derived for the eight Gould Belt regions using the iteratively weighted algorithm described in Section 4.1. The 450 μm data included were observed in good weather conditions and had a σ_FCF below 5%, as described in Section 4.2. The inherent telescope flux uncertainty can be estimated from the width of each distribution's main peak. The width of the 850 μm peak is 8%-10%, while the width of the 450 μm peak is 15%-20%, in good agreement with the uncertainties derived via analyses of JCMT calibrator observations (Dempsey et al. 2013; Mairs et al. 2021).

The center and right panels of Figures 5 and 6 show the correlation between the pipeline v2 flux calibration algorithm and the point-source method (footnote 16). The 850 μm R_FCF values for these two methods agree to better than 3%, while the 450 μm σ_FCF values agree to better than 7%. Furthermore, Table 7 in Appendix A allows for the determination of typical conversion uncertainties at 850 μm, σ_FCF = 0.01 or 1%, and at 450 μm, 0.035 < σ_FCF < 0.05 or 3.5%-5%. These values can be used along with Equation (2) when estimating the expected standard deviation of any particular source.

Coadded Images

The JCMT Transient Survey's consistent, repeated observing strategy from 2015 December through 2022 February has led to the deepest 450 and 850 μm maps to date of eight ∼30′-diameter Gould Belt fields. Figures 7 and 8 show each cross-correlation-aligned, relative flux-calibrated coadded image over the same wavelength-consistent color scales, and Table 2 summarizes the number of epochs included in each image along with the background rms. At 450 μm, only the usable maps, as defined in Section 4.2, were included.

The images released in this work should not be used to analyze large-scale (>3′) flocculent gas and dust structures. The data reduction parameters specifically chosen for this work were optimized to accurately recover compact peak fluxes over a large dynamic range at the expense of reconstructing faint, less dense regions. In fact, while SCUBA-2's combined PONG1800 mapping strategy and data reduction procedure allow for some degree of separation of atmospheric and astronomical signal on scales up to ∼10′, ground- and airborne-based submillimeter bolometer data always suffer from spatial filtering on scales larger than the characteristic detector size, leading to flux loss on the edges of large structures (see Chapin et al. 2013; Mairs et al. 2015, 2017a, 2017b for more details).
Coadded images are available on the CANFAR website (see footnote 17).

Results

In this section, for consistent comparisons with previous results, we use the methods outlined by Lee et al. (2021) to redetermine the long-term, secular variables separately at 850 and 450 μm in order to test the updated calibrations. The secular variability is based on both linear and sinusoidal fits to the time-domain observations (see Lomb 1976; Scargle 1989; VanderPlas 2018), with the best-fit false alarm probability (FAP) for interesting sources presented in Table 3. Additional information about specific robust secular variables can be found in the appendix in Lee et al. (2021). Note to Table 3: candidate variables are denoted in bold font, and robust variables are denoted in italic font.

We start by comparing the variables recovered by the new 850 μm calibration (pipeline v2) against those recovered by the original scheme (pipeline v1), taking the same 4 yr time window as used by Lee et al. (2021; Section 6.1). We next consider the advantage of a longer time window when searching for secular variables, utilizing 6 yr of monitoring at 850 μm (Section 6.2). Then, for the first time, we look for evidence of variability at 450 μm, using all 6 yr of monitoring (Section 6.3), and briefly investigate the combined results.

Robustness of 850 μm Variables over 4 yr

Utilizing the original calibration methods (pipeline v1; Mairs et al. 2017b) and the first 4 yr of the JCMT Transient Survey observations through 2020 January, Lee et al. (2021) found 18 secular variables. Based on the periods P derived from the best-fit Lomb-Scargle periodogram (Lomb 1976; Scargle 1989; VanderPlas 2018), two of these are periodic (P < 4 yr) and 11 are curved (4 yr < P < 15 yr), while five are best fit as linear-slope sources. In all cases, the FAP was required to be <10^−3 (Figure 9, left panel). These secular variables were all protostars, and no prestellar or disk sources were observed to vary.

Using these methods, every light curve has a best-fit sinusoid, regardless of whether it is a periodic, curved, linear, or even a nonvariable source. The FAP value for nonvariable sources, however, will be high, indicating a poor fit. Furthermore, as these period fits assume an underlying sinusoid, the derived values should be taken as indicative rather than precise. For timescales shorter than the observing window, the period is a reasonable measure of the episodic timescale, whereas for longer derived periods, the uncertainties are necessarily much larger. Given the <10 yr timescale of these observations, there are insufficient data to distinguish linearly varying sources from sources with sinusoidal variability over long periods, since a sinusoid can be regarded as a straight line near its zeros when the period is long. Variables with periods longer than 20 yr are, therefore, classified as linear, but are also assigned a derived period (see Lee et al. 2021; Park et al. 2021 for further discussion). Further observations over the coming years may refine these fits and allow for differentiation in the future.

With our new calibrations (pipeline v2) and using the same 4 yr of data and FAP threshold (Figure 9, middle panel), we find 22 secular variables: three periodic, 14 curved, and five linear sources. Modifying slightly the classification from Lee et al. (2021), we consider those sources with an FAP <10^−5 to be robust secular variables, and those with 10^−5 < FAP < 10^−3 to be candidate secular variables. With this definition, Lee et al.
(2021) recovered 10 robust and eight candidates, while the new calibrations recovered 13 robust and nine candidates. Robust fit parameters are presented in Table 4.

Comparing the FAP values for the two sets of detected secular variables (see Table 3), we find that one source, J034356.5+320050 in Perseus (also known as IC 348 HH 211; note that in Tables 3 and 4 of Lee et al. 2021 the source names for HH 211 and IC 348 MMS 1 were inadvertently reversed), classified as a (candidate) periodic source by Lee et al. (2021), is rejected using the new calibration. Within the Lee et al. (2021) sample, HH 211 had the shortest period, ∼1 yr, and the second-largest FAP = 10^−3.3. With the new calibration, the sinusoidal fitting finds the same best period; however, the FAP increases significantly to FAP = 10^−0.5. Of the other seven candidate sources from Lee et al. (2021), four remain candidate variables using the new calibration, while three are elevated to robust detections. Furthermore, all 10 of the Lee et al. (2021) sources with an FAP <10^−5 remain robust with the new calibration. Finally, we note that five additional sources are classified as candidate secular variables using the new calibration, of which three are unassociated with known YSOs. All three of these sources, however, have FAPs within a factor of 2 of the cutoff threshold.

Comparing our results between the two calibration methods also provides a test of whether the calibration method affects the variability measurements and best-fit parameters. The left panel of Figure 10 plots the derived light-curve linear slopes for the sources that were classified as variables, candidate and robust, for both calibration methods. The slopes are consistent between the calibrations, as expected. There are significant changes for a few of the fitted periods, however. The right panel of Figure 10 shows the comparison of the best-fit periods for those 10 sources with robust periodogram FAPs using the old calibration. Eight out of 10 sources maintain their secular type (periodic/curved/linear), though these sources often recover somewhat different best-fit periods between the calibrations. The other two sources switch between linear and curved type. Figure 11 plots the old (red) and new (blue) calibrated measurements and best fits and shows that the change to the curved type is subtle, depending specifically on the calibration of the early and late epochs.

Figure 11. Light curves of two sources that changed variable type from linear (red, pipeline v1) to curved (blue, pipeline v2) after the improved calibration.

Recovered 850 μm Variables at 6 yr

Increasing the observing window from 4 to 6 yr, from the beginning of the survey through 2022 February, we update the criteria for secular variability accordingly. We categorize the variables by their best-fit periods as periodic (<6 yr), curved (6-20 yr), and linear (>20 yr). We now recover 38 secular variables, more than double the number obtained over 4 yr by Lee et al.
(2021). Of these sources, 20 are robust and 18 are candidate variables (Table 3). Of the robust sources, eight are linear, 10 are curved, and two are periodic (Table 5). All robust sources are known protostars except the brightening curved source J182947.6+011553 in Serpens Main, which is discussed further below.

Of the candidate sources, five are linear, six are curved, and seven are periodic. Half of the candidate periodic sources have very short periods, less than a year, and FAPs close to the cutoff value. Another seven of the candidate sources are not known to be associated with YSOs. We suspect that systematic issues, potentially associated with the yearly weather patterns at Maunakea, are responsible for some of these candidates. The right panel of Figure 9 plots the sinusoidal versus linear FAPs for all the monitored JCMT Transient sources over the 6 yr window using the updated calibration, revealing a cluster of sources with FAPs of ∼10^−4.

Considering only the newly calibrated results, we find that 12 of the 13 robust secular variables after 4 yr remain robust after 6 yr of observations, with only Serpens Main SMM 10 missing the cut (Table 3). The FAP for SMM 10 increases from 10^−5.1 to 10^−4.3, dropping it to candidate status. Interestingly, SMM 10 was found to have significant stochasticity in its light curve when observed at higher angular resolution with the Atacama Large Millimeter/submillimeter Array's Atacama Compact Array (Francis et al. 2022). Furthermore, of the nine candidate secular variables found after 4 yr, four are raised to robust after 6 yr, while three remain candidates and two are removed from the variable sample.

The majority of the robust sources maintain the same periodic/curved/linear classification over both time windows, although the additional epochs do occasionally change the best-fit period. As an example, Figure 12 plots the old (red) and new (blue) calibrated measurements and best fits for a source that changed from linear type to curved type through the addition of two more years of observation.

As noted above, one robust secular variable is not identified with a previously known protostar, J182947.6+011553 in Serpens Main. Considering near-IR variability, Kaas (1999) cataloged a candidate YSO, source 6 in their Table 5, about 9″ away; however, no object is visible as a localized source in the Wide-field Infrared Survey Explorer (WISE) mid-IR images at this position. Additionally, the source is not detected in the Spitzer 24 μm or 4.5 μm data, nor in the eHOPS catalog in IRSA, suggesting that it may be a variable background galaxy. Looking further afield, SMM 1 (Casali et al. 1993; Enoch et al. 2009), a robust secular 850 μm variable source (Table 5), lies about 45″ to the southeast of J182947.6+011553. SMM 1 is also found to be a secular variable in the mid-IR (Contreras Peña et al. 2020; Park et al. 2021) and has launched a powerful jet and outflow (Hull et al. 2016) in the direction of J182947.6+011553. Consistent with the scattering surface seen in WISE images, theoretical modeling of the outflow by Liang et al. (2020) suggests a ∼2′ lobe length, which would entirely surround J182947.6+011553. Finally, both SMM 1 and J182947.6+011553 show rising light curves, with an ∼1.5%-2% increase per year (Table 5). Thus, we hypothesize that this source, despite the significant distance, is being influenced by SMM 1.
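The secular classification used throughout this section rests on Lomb-Scargle fits and FAP thresholds. The snippet below (using astropy) is a simplified stand-in for the actual analysis: the survey compares sinusoidal and linear FAPs, whereas this sketch uses only the FAP of the strongest periodogram peak; the frequency grid, function name, and 6 yr window default are assumptions made for illustration.

import numpy as np
from astropy.timeseries import LombScargle

def classify_secular(t_years, flux, flux_err, window_yr=6.0):
    # Fit a sinusoid via Lomb-Scargle, take the false alarm probability of the
    # strongest peak, and bin the best-fit period into periodic/curved/linear.
    ls = LombScargle(t_years, flux, flux_err)
    freq, power = ls.autopower(minimum_frequency=1.0 / 40.0,
                               maximum_frequency=6.0)
    fap = ls.false_alarm_probability(power.max())
    period = 1.0 / freq[np.argmax(power)]

    if fap < 1e-5:
        status = "robust"
    elif fap < 1e-3:
        status = "candidate"
    else:
        return "non-variable", period, fap

    if period < window_yr:
        shape = "periodic"
    elif period < 20.0:
        shape = "curved"
    else:
        shape = "linear"
    return f"{status} {shape}", period, fap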
Recovered 450 μm Variables at 6 yr

Figure 13 plots the 450 μm sinusoidal versus linear FAPs for all the monitored JCMT Transient sources. There are four robust secular detections, all of which are also robust at 850 μm (see Tables 3 and 6). Furthermore, of the 12 additional candidate variables at 450 μm, four are known robust variables at 850 μm. In Figure 14, we show the 450 and 850 μm light curves for the four protostellar sources that are robust at 450 μm, along with a scatter plot of the paired submillimeter fluxes across all epochs, revealing the linearity in the response between wavelengths. This is expected if the underlying process is a change in the temperature of the dust in the envelope due to variations in the accretion luminosity of the deeply embedded protostar (Johnstone et al. 2013; Contreras Peña et al. 2020; Francis et al. 2022).

Quantifying the response at 850 μm versus 450 μm, we have calculated the normalized slopes for each of these four sources (see Figure 14, right panels). For the three Orion HOPS secular variables, the normalized slope is ∼1.4, such that the 450 μm brightness has about a 40% larger variation than the 850 μm brightness. Following the same argument as used by Contreras Peña et al. (2020, their Section 6.2), we anticipate that the 850 μm brightness varies as T_d^1.5, while the 450 μm brightness varies as T_d^2, where we assume T_d ∼ 20 K in the outer envelope. The larger exponent at 450 μm is due to the fact that the shorter wavelength is less fully on the Rayleigh-Jeans tail of the dust emission and, therefore, has a stronger reaction to temperature change. These two formulae can be combined to yield S_450 ∝ S_850^(4/3), which for small variations can be made linear such that S_450 ∼ 1.33 S_850 in normalized units.

The Serpens Main secular variable protostar EC 53 (Lee et al. 2020; Francis et al. 2022), also known as V371 Ser, shows a significantly higher normalized slope, ∼1.8. This may indicate additional extended and nonvarying emission at 850 μm, which is biasing the slope higher, or that the mean dust temperature T_d used in the above determinations of the submillimeter brightness response has been overestimated. The power-law exponents rise at both 850 and 450 μm as the dust temperature decreases, but more strongly at 450 μm, leading to a greater response at 450 μm versus 850 μm. We note, however, that modeling of the EC 53 envelope structure and temperature profile by Francis et al. (2022), fitting simultaneously the time-variable observations at interferometric and single-dish angular scales, required a somewhat higher outer dust temperature, T_d ∼ 25 K. Francis et al. (2022) also struggled to fit these 450 μm JCMT observations well (see their Section 6.2) and suggest that one complication may be the JCMT beam structure at 450 μm. The good fits of the simple Contreras Peña et al. (2020) model for the three HOPS sources, however, suggest that the problem may lie with the simplified modeled structure assumed within the EC 53 envelope. Thus, time-dependent calculations using a more detailed and careful radiative transfer modeling of the envelope, such as those considered by Baek et al. (2020), and including the known outflow cavity and disk, are likely to be required.
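The 4/3 scaling quoted above is easy to verify numerically. The short check below assumes only the two power laws stated in the text (S_850 ∝ T_d^1.5 and S_450 ∝ T_d^2 around T_d ≈ 20 K) and recovers a normalized slope of ≈1.33 for small temperature excursions.

import numpy as np

td = 20.0                                   # assumed outer-envelope dust temperature (K)
dtd = np.linspace(-1.0, 1.0, 9)             # small temperature excursions (K)
s850 = ((td + dtd) / td) ** 1.5             # normalized 850 um response
s450 = ((td + dtd) / td) ** 2.0             # normalized 450 um response
slope = np.polyfit(s850 - 1.0, s450 - 1.0, 1)[0]
print(round(slope, 2))                      # ~1.33, i.e., S_450 ~ 1.33 S_850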
Finally, as a check on the utility of the robust secular fits at both 850 and 450 μm, in Figure 15 we present scatter plots at 850 and 450 μm of the reduced χ2 versus mean flux (using the middle 80% of data points in brightness). The measurements are made both before and after removal of the best-fit secular component for those sources that are robust secular variables at 850 μm over 6 yr. As found by Lee et al. (2021), the robust variables tend to have exceptionally high χ2 values prior to the removal of the best secular fit. For the 850 μm data sets, even the removal of the secular fit can leave a significant residual, suggesting that for some sources there exists an additional component beyond the simple smooth evolution in time of the light curves investigated here.

Summary and Conclusions

The JCMT Transient Survey has been monitoring eight Gould Belt regions continuously since 2015 December and six intermediate-mass star-forming regions since 2020 February. We have reduced and calibrated all of these observations through 2022 February using pipeline v2, which we describe in this paper. We have also investigated the Gould Belt fields for variable sources, comparing the 4 yr results using pipeline v2 against the analysis by Lee et al. (2021) over the same time range using pipeline v1. We further show the importance of increasing the monitoring time window by comparing the variability results using pipeline v2 over the full 6 yr of individual epochs. Finally, we demonstrate the value of multiwavelength monitoring by performing the pipeline v2 relative calibration on the 450 μm data and comparing individual source light curves at both 850 and 450 μm. The main results of this paper are therefore as follows:

1. The pipeline v2 relative alignment via cross correlation achieves an accuracy better than 0.5″, which can be compared directly against the 850 and 450 μm beam sizes of 14.1″ and 9.6″, respectively. While the alignment is performed at 850 μm, the results are directly applicable to the 450 μm maps, as the JCMT SCUBA-2 observations are carried out simultaneously by means of a dichroic beam splitter (Section 3).

2. The pipeline v2 relative flux calibration uses information on all sources in each field and an iterative approach to more robustly bring all epochs of a given region into agreement. For most fields, the uncertainty in the 850 μm relative FCF (σ_FCF) is 1%, and at 450 μm, the relative-FCF uncertainty is <5% (Section 4). Furthermore, given that the JCMT Transient Survey observations are optimized for 850 μm, we automate the determination of the good 450 μm epochs by applying limits to the quantified metadata (Section 4.2).

3. Along with this paper, we make available to the community deep coadds of the eight Gould Belt star-forming regions (Section 5). The typical rms values for these maps at 850 and 450 μm are 1.5 and 24 mJy beam−1, respectively (Table 2). We also present the reduction and calibration metadata for all Gould Belt epochs (Appendix A) and all intermediate-mass star-forming regions (Appendix B).

4. We analyze the variable source results at 850 μm using both pipeline v2 and pipeline v1 over 4 yr, uncovering a greater number of robust variables using the updated calibration techniques (Section 6.1). The properties of the recovered variables found by both calibration methods agree in general, though there are subtleties when determining the best-fit periods via periodogram analysis.
We also demonstrate that extending the time baseline to 6 yr increases the number of variables recovered (Section 6.2).

5. Finally, we analyze the variable source results at 450 μm for the first time. Over 6 yr, we recover four robust variables, all of which are also robust at 850 μm. Direct comparison of the light curves at both wavelengths supports the expectation that the variability is driven by changing mass accretion onto the central protostar and the resultant heating of the dust in the natal envelope (Section 6.3).

Coadded images are available on the CANFAR website (see footnote 17).

Figure 1. Region: OMC 2/3. Inherent positional offsets as measured by the cross-correlation algorithm (this work) are shown as gray squares. Residual offsets, after applying pointing corrections to each image, are represented by blue circles. Yellow triangles show the results of a consistency check performed on the corrected images using the original (pipeline v1; point source) pointing correction algorithm.

Figure 2. Region: OMC 2/3. Left: the fractional weight each identified pointlike source carries in the weighted, relative flux calibration. Right: calculated relative FCFs as a function of time. The vertical line denotes the date before which the median flux is calculated for each source to employ in the relative normalization (UTC 2021 April 10; see point 2 in Section 4.1). The dashed horizontal lines show the 1σ range of the relative-FCF values.

Figure 4. Left: 450 (red) and 850 μm (blue) rms vs. τ225 × airmass (a proxy for the atmospheric transmission). The vertical black line indicates the τ225 × airmass threshold, as indicated in the right panel. Right: τ225 × airmass vs. the weighted calibration factor uncertainty (pipeline v2). The blue data points indicate the usable maps as described in Section 4.2.

Figure 5. All low-mass regions, 850 μm. Left: the distribution of relative FCFs computed using the weighted algorithm. Center: relative FCFs computed by the original pipeline v1 (point source) as a function of the relative FCF computed using the weighted algorithm (this work). A 1:1 line is overlaid. Right: distribution of the point-source algorithm relative FCFs divided by the weighted algorithm relative FCFs, showing consistency between the two methods to within 2%.
Figure 6. Same as Figure 5, but for the good 450 μm data (see Section 4.2).

Figure 8. Coadded images of IC 348, NGC 1333, Serpens Main, and Serpens South (top to bottom, respectively) at both 450 (left) and 850 μm (right). Observations through 2022 February are included. At 450 μm, only usable data as defined in Section 4.2 are included.

Figure 9. Scatter plot of sinusoidal (FAP_Mod) and linear (FAP_Lin) false alarm probabilities at 850 μm. Left: 4 yr of data using the original alignment and reduction pipeline as performed by Lee et al. (2021). Middle: 4 yr of data reduced with the new calibration pipeline introduced in this paper. Right: 6 yr of data reduced with the new calibration pipeline. Note that the x-axis extends to much smaller FAP values in the right panel compared with the left and center panels.

Figure 10. Comparison of best-fit linear slope (left) and period (right) for old and new calibrations at 850 μm. For the slope comparison, all sources with candidate or robust secular fits are included. For the period comparison, only those sources with robust periodic fits are included, and the dashed horizontal and vertical lines separate the plot into periodic, curved, and linear regions. Two sources are overlapped in the period plot, so each is annotated with a bullseye.

Figure 14. Left panels: light curves at 850 μm (red dots) and 450 μm (blue dots) for the four robust variables at both wavelengths, showing the tight correlation. Right panels: scatter plots of the brightness at each wavelength, normalized at 850 μm by the median measured value and at 450 μm by the intercept value of the best-fit slope at the median 850 μm value. The normalized slopes are provided for each source.

Figure 15. Scatter plot of the χ2-fit value divided by the number of observations for each source. The left figure is drawn at 850 μm, and the right figure is drawn at 450 μm. Here, we only include the middle 80% of data points in brightness and use Equation (2) for the standard deviation of each measurement, with the relative-FCF uncertainty equal to 1% and 5% at 850 and 450 μm, respectively. The robust variables at 850 μm are also annotated. The gray vertical lines show the change in the χ2 value after the subtraction of the best-fit secular trend.

Table 7 in Appendix A includes the derived 450 μm R_FCF values for those epochs that satisfy the above conditions. The table also allows for a calculation of the typical 450 μm rms noise in an epoch, n_rms = 130 ± 65 mJy beam−1, which can be used in Equation (2) to estimate the fiducial 450 μm standard deviation of any particular monitored source.
Table 2. Summary of Gould Belt Region Coadded Images Released.

Table 4. Physical Properties of Robust Secular Variables at 850 μm over 4 yr.

Table 5. Physical Properties of Robust Secular Variables at 850 μm over 6 yr.

Figure 12. Light curve of HOPS 358, whose variable type changed from linear (red) to curved (blue) after adding two additional years of observation. The dashed line marks the date of the last measurement used by Lee et al. (2021) in their linear fit.

Table 6. Physical Properties of Robust Secular Variables at 450 μm over 6 yr.

Figure 13. Scatter plot of sinusoidal (FAP_mod) and linear (FAP_lin) false alarm probabilities at 450 μm. Sources that are robust variables at 850 μm are marked as circles (periodic), triangles (curved), and squares (linear).

Table 7. Summary of Processed Observations of JCMT Transient Survey Gould Belt Regions. The entire table is published only in the electronic edition of the article. The first five lines are shown here for guidance regarding its form and content. (This table is available in its entirety in machine-readable form.)

Table 9. Summary of JCMT Transient Survey Intermediate-Mass Region Processed Observations. (This table is available in machine-readable form.)
12,285
2024-05-01T00:00:00.000
[ "Physics" ]
Multi-Factor Asset-Pricing Models under Markov Regime Switches: Evidence from the Chinese Stock Market

Abstract: This paper proposes a Markov regime-switching asset-pricing model and investigates the asymmetric risk-return relationship under different regimes for the Chinese stock market. It was found that the Chinese stock market has two significant regimes: a persistent bear market and a bull market. In regime 1, the risk premiums on common risk factors were relatively higher, consistent with the hypothesis that investors require more compensation for taking the same amount of risk in a bear regime, when the risk-aversion level is higher. Moreover, return dispersions among the Fama-French 25 portfolios were captured by the beta patterns from our proposed Markov regime-switching Fama-French three-factor model, implying that a positive risk-return relationship holds in regime 1. On the contrary, in regime 2, when lower risk premiums could be observed, portfolios with a big size or low book-to-market ratio undertook higher risk loadings, implying that the stocks that used to be known as "good" stocks were much riskier in a bull market. Thus, the risk-return relationship followed other patterns in this period.

Introduction

The Modern Portfolio Theory (MPT), first introduced by Markowitz (1952), describes the relationship between risk and expected return statistically using mean-variance optimization. Modern finance theory therefore stepped onto a new stage. Based on the framework of MPT, the Capital Asset Pricing Model (CAPM) was proposed to promote the study of asset pricing and was developed by Treynor (1961, 1962), Sharpe (1964), Lintner (1965a, 1965b), and Mossin (1966). However, as more and more abnormal cross-sectional returns were found to be persistent and unable to be explained by traditional asset-pricing models, Fama and French (1992, 1993) proposed a three-factor model in which risk factors such as MKT (Market), SMB (Small Minus Big), and HML (High Minus Low) have explanatory power for abnormal returns. Many researchers use multi-factor models to interpret anomalies in areas such as momentum and investment. By constructing several characteristic-based factors, such as the momentum-based factor UMD (Up Minus Down; Carhart 1997), the profitability-based factor RMW (Robust Minus Weak; Fama and French 2015, 2016), and the investment-based factor AGR (Asset Growth Return; Chen 2017), the multi-factor asset-pricing models are improved and can explain most of the characteristics related to abnormal returns.

Although the mean-variance model, the CAPM, and the multi-factor models are logically simple and useful in practice, they are static, single-period linear models, which can hardly fit the real world. Since the Fama-French factors can proxy the latent risk factors in the Chinese stock market, we focus on the two typical asset-pricing models (i.e., the CAPM and the Fama-French three-factor model) and put them under Markov regime switches. It was found that the Chinese stock market can be depicted with two regimes, and that the multi-factor asset-pricing model deviates from one regime to the other. Hence, investigations of the features of the Chinese stock market, and portraits of the time-varying risk factors and betas, are of great significance to investors.
In this paper, we study two typical multi-factor asset-pricing models under Markov regime switches for the Chinese stock market. We first distinguish different market states and examine the time-varying risk factors. Then, we allow risk loadings (i.e., betas) to switch across regimes. The results may shed light on the state-dependent risk-return relationship.

Benchmark Portfolios

Following the conventions of Fama and French (1996, 2006), we constructed 25 portfolios (denoted as FF25) based on quintile intersections of size and B/M, and take them as the benchmark portfolios. We studied all firms that were listed on the Shanghai Stock Exchange and the Shenzhen Stock Exchange during the period from July 1995 to March 2015. However, since stock returns from July of year t to June of year t + 1 are matched with the accounting data in December of year t − 1, firms that did not have December fiscal year-end data or 36 months of stock return data were excluded from the dataset. Every year, at the end of June, all stocks were ranked and allocated by their size quintile cutoffs and B/M quintile cutoffs. The size breakpoints of year t were based on the market capitalization of each stock at the end of June of year t. The B/M for June of year t is the book value at the end of year t − 1 divided by the market value in December of year t − 1. The 25 portfolios were then held for a year, and the monthly average raw returns for each portfolio were calculated.

Constructing Risk Factors

Based on the methodology of Fama and French (1993), at the end of June every year, all the sample stocks were divided into six groups based on their size and B/M value cutoffs. The size breakpoint was the median market capitalization of all stocks at the end of June of year t. The B/M breakpoint for June of year t was the book value of the last fiscal year-end in December of year t − 1 divided by the market equity for December of year t − 1. Based on the cutoffs, the growth portfolio was composed of the lowest 30% B/M firms, while the value group included the highest 30% B/M firms. The remaining 40% of stocks comprised the medium group. Following Equations (1) and (2), the value-weighted return difference between the value and growth portfolios within each of the size groups was calculated and averaged. This return difference is the value common risk factor, denoted by HML. We adopted the same approach to calculate SMB. Table 1 presents the formation of the common risk factors. The market risk factor is the excess return on the market portfolio.

Note: Size refers to the market capitalization of the stocks in June of year t. Book-to-Market (B/M) is the book equity of the last fiscal year-end in December divided by the market value for December of year t − 1. "L", "M", and "H" represent the low, medium, and high B/M levels, respectively. All of the stocks were sorted based on size and B/M cutoffs. Six groups were formed, denoted as "S/L", "S/M", "S/H", "B/L", "B/M", and "B/H".

The Framework of a Markov Regime-Switching Model

The MRS framework is flexible and well suited to handling the variations caused by heterogeneous states of the world. In this paper, we focus on the regime-dependent risk factors and risk loadings in multi-factor models. Following the conventions of Hamilton (1989, 1994), we modeled the regimes as follows.
For a regime-switching model, the transition of states is stochastic. However, the switching process follows a Markov chain and is driven by a transition matrix, which controls the probabilities of making a switch from one state to another. Considering that the Chinese stock market has manifested bear and bull markets, which, along with business cycles, vary between expansions and recessions, we allow two states, namely a bear state and a bull state, in this model. The transition matrix is represented as

P = [ P_11  P_12 ; P_21  P_22 ],

where P_ij is the probability of switching from state i to state j. Denoting ψ_{t−1} as the matrix of available information at time t − 1, the probability of State 1 or 2 is calculated following Equations (4) and (5):

P(S_t = 1 | ψ_{t−1}) = P_11 P(S_{t−1} = 1 | ψ_{t−1}) + P_21 P(S_{t−1} = 2 | ψ_{t−1}),    (4)

P(S_t = 2 | ψ_{t−1}) = P_12 P(S_{t−1} = 1 | ψ_{t−1}) + P_22 P(S_{t−1} = 2 | ψ_{t−1}).    (5)

To estimate a regime-switching model where the states are unknown, we consider f(y_t | S_t = j, Θ) as the likelihood function for state j on a set of parameters Θ. The full log-likelihood function of the model is then given by

ln L(Θ) = Σ_t ln[ Σ_j f(y_t | S_t = j, Θ) P(S_t = j | ψ_{t−1}) ],

which is a weighted average of the likelihood function in each state, where the weights are the states' probabilities. Applying Hamilton's filter, the estimates of the probabilities can be acquired through an iterative algorithm. Finally, the estimates in the model are obtained by finding the set of parameters that maximizes the log-likelihood function.

To investigate a time-varying risk-return relationship, we put the multi-factor asset-pricing models under Markov regime switches. We first analyze time-series variations in risk premiums for each risk factor using a multivariate MRS model. Then we allow the betas in the multi-factor asset-pricing models to switch under a univariate MRS setting. Two models were adopted in this research. The first model is the CAPM, including the market factor (MKT). The second model is the Fama-French three-factor model, which incorporates the size factor (SMB) and the value factor (HML).

CAPM with Markov Switching (MR-CAPM)

In the case of the CAPM, the market risk factor is first studied to identify the two regimes:

MKT_t = λ_MKT,S_t + ε_t,  with  ε_t ~ N(0, σ²_S_t),

where λ_MKT is the market risk premium, S_t is an indicator variable that denotes the possible two states, ε_t is the residual vector that follows the normal distribution, and σ²_S_t is the variance vector at state S_t. In the second step, the market beta and the residual in the CAPM are assumed to be regime-dependent:

R_i,t − R_f,t = α_i,S_t + β_i,S_t MKT_t + ε_t,  with  ε_t ~ N(0, σ²_S_t),

where R_i,t − R_f,t is the excess return for portfolio i, α_i,S_t is the unexplained return, β_i,S_t is the risk loading, and MKT_t is the excess return on the market portfolio. S_t is an indicator variable that denotes the possible two states. ε_t is the residual vector that follows the normal distribution, and σ²_S_t is the variance vector at state S_t.

The model was applied independently to the 25 benchmark portfolios. The matrix of parameter estimates for the model is reported in Section 3.

Fama-French Three-Factor Model with Markov Switching (MR-FF3 Model)

In the Fama-French three-factor model, three factors (i.e., the market factor (MKT), the size factor (SMB), and the value factor (HML)) are proposed to explain the size anomaly and the value anomaly. According to the formation of the common risk factors, SMB and HML are historical returns on hedging portfolios (small minus big, high B/M minus low B/M), known as R_SMB and R_HML. So, if these factors (i.e., SMB and HML) originate from the portfolios and stocks in the market, and their return series emerge from the market, it is reasonable to expect that the SMB and HML factors in the Fama-French three-factor model may vary over time and follow nonlinear dynamics.
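A minimal two-state Hamilton filter, corresponding to the recursion sketched above, can be written in a few lines of Python. This is an illustrative sketch only: it assumes an i.i.d. Gaussian observation density in each state and a fixed parameter set; in practice, the parameters (means, variances, and transition probabilities) are chosen to maximize the returned log-likelihood.

import numpy as np
from scipy.stats import norm

def hamilton_filter(y, mu, sigma, P):
    """Two-state Hamilton filter for a Gaussian Markov-switching mean/variance.

    y     : observed series
    mu    : length-2 array of state means
    sigma : length-2 array of state standard deviations
    P     : 2x2 transition matrix, P[i, j] = Prob(S_t = j | S_{t-1} = i)
    """
    xi = np.full(2, 0.5)                  # filtered state probabilities
    loglik = 0.0
    filtered = np.zeros((len(y), 2))
    for t, y_t in enumerate(y):
        pred = P.T @ xi                   # predicted state probabilities (Eqs. (4)-(5))
        dens = norm.pdf(y_t, loc=mu, scale=sigma)
        joint = pred * dens
        lik = joint.sum()                 # weighted average of state likelihoods
        loglik += np.log(lik)
        xi = joint / lik                  # Bayesian update (the filter step)
        filtered[t] = xi
    return loglik, filtered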
Therefore, the risk premiums on the common risk factors can be represented as a vector λ_{S_t} = [λ_MKT,S_t, λ_SMB,S_t, λ_HML,S_t]'. Since the common risk factors are mimicking portfolios, the excess returns on the risk factors have no autoregressive terms and follow a simple mean-variance MRS model:

MKT_t = λ_MKT,S_t + ε_m,t,
SMB_t = λ_SMB,S_t + ε_s,t,
HML_t = λ_HML,S_t + ε_h,t,

with

ε_m,t ~ N(0, σ²_m,S_t), ε_s,t ~ N(0, σ²_s,S_t), ε_h,t ~ N(0, σ²_h,S_t), and Cov(ε_m,t, ε_s,t, ε_h,t) = 0,    (21)

where S_t is an indicator variable that denotes the possible two states; ε_m,t, ε_s,t, and ε_h,t refer to the residual vectors for MKT, SMB, and HML and follow the normal distribution; and σ²_m,S_t, σ²_s,S_t, and σ²_h,S_t are the variance vectors for the three factors at state S_t.

In the second step, the betas for the three factors and the residual in the Fama-French three-factor model are assumed to be regime-dependent:

R_i,t − R_f,t = α_i,S_t + β_i,S_t MKT_t + s_i,S_t SMB_t + h_i,S_t HML_t + ε_t,  with  ε_t ~ N(0, σ²_S_t),

where R_i,t − R_f,t is the excess return for portfolio i, α_i,S_t is the unexplained return, β_i,S_t, s_i,S_t, and h_i,S_t are the risk loadings on the three factors, and MKT_t, SMB_t, and HML_t are the three factors. S_t is an indicator variable that denotes the possible two states. ε_t is the residual vector that follows the normal distribution, and σ²_S_t is the variance vector at state S_t. Similarly, the MR-FF3 model is estimated for each of the 25 portfolios.

Empirical Results

In this section, we present the estimates of the multi-factor asset-pricing models under Markov regime switches. First, in the market process, we conduct the multivariate MRS model to analyze the regime-dependent variations among the common risk factors. Then, in the beta process, a univariate MRS model is applied to investigate variations in risk loadings.

Risk Factor Variations

In the market process, we first analyzed the risk factor variations under Markov switching to determine the regimes. Following the model developed in Section 2.2.2, we applied the Perlin (2014) Matlab package and estimated the parameters, as shown in Table 2 and Figure 1, determining the two regimes as a bear and a bull state, respectively. Figure 1 plots the conditional mean of the market return and the smoothed probabilities of regimes 1 and 2 over the sample period. The red area refers to the smoothed probability of regime 1, while the green area shows that of regime 2.
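In place of the Perlin (2014) Matlab package, a comparable estimation can be sketched in Python with statsmodels' MarkovRegression; the data file name and column names below are hypothetical placeholders for aligned monthly portfolio excess returns and factor series, and the snippet is an illustration rather than the estimation procedure actually used in this study.

import pandas as pd
import statsmodels.api as sm

# Hypothetical input: monthly portfolio excess returns and the three factors,
# aligned on the same dates.
df = pd.read_csv("ff25_and_factors.csv", index_col=0, parse_dates=True)
y = df["excess_ret_p5"]                  # e.g., the 5th FF25 portfolio
X = df[["MKT", "SMB", "HML"]]

# Two-regime model with switching intercept, betas, and variance.
model = sm.tsa.MarkovRegression(y, k_regimes=2, exog=X, switching_variance=True)
res = model.fit()

print(res.summary())                     # regime-specific alpha, betas, variances
print(res.expected_durations)            # expected stay in each regime (months)
smoothed_p1 = res.smoothed_marginal_probabilities[0]   # Prob(regime 0) over time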
At any time point, the sum of the probabilities of regimes 1 and 2 should be equal to one. It is shown that regime 1 dominated most of the sample period, indicating that the Chinese stock market has been a bear market for most of the time. Further, if we compare the periods of regime 2 with real-world events that occurred over the same time horizon, we find that regime 2 has captured most of the notable booms and crises, ups and downs. The market was inspired before Hong Kong returned to China in 1997. However, as the Asian financial crisis hit the Hong Kong market, the Chinese stock market also came to a bear state. In 2000, the international dot-com bubble occurred, and the Chinese stock market experienced a short rise before coming to a long bear state from 2001 to 2006, because the state-owned shares (previously illiquid shares) were reduced and dumped into the market. Then, in 2006, as the government released several policies related to the split-share reform, the market stepped into a bull regime. However, speculation was driven by irrationality, and the rise was more or less a bubble. Following the snow disaster in southern China and the Wenchuan earthquake, and then the overwhelming subprime crisis and the global economic slowdown, the Chinese stock market was dragged into another bear regime.

Table 2 shows that the conditional mean of the market return is relatively lower in regime 1 but higher in regime 2, aligning with the definitions of regimes 1 and 2. The transition matrix reveals that the probability of switching from bear to bull is 0.03, which is smaller than the probability of switching from bull to bear (0.06). If the current state is regime 1, it is less likely to switch to the bull market, because the probability of keeping the current state is 0.97. Hence, it is natural to find that the expected duration of staying in a bear market is 34.05 months, which is much longer than that of a bull market. Thus, the Chinese market has remained in a bear state for most of the time.

Risk Loading Variations

In the beta process, we allow the risk loading on each risk factor to switch across regimes. Figure 2 plots the market betas in the MR-CAPM model for the 25 characterized portfolios. The left subfigure plots the estimates in regime 1, and the right subfigure plots the estimates in regime 2. In both regimes, the market betas are nonzero. Comparing two typical characterized portfolios, P5 (the 5th portfolio, characterized by the highest B/M and the smallest size) and P21 (the 21st portfolio, characterized by the lowest B/M and the biggest size), helps reveal the return dispersions among portfolios. The risk loading of P5 is higher than the loading of P21 in regime 1, but the relation is reversed in regime 2. Thus, we can say that the return dispersion between P5 and P21 is explained by the risk dispersion between P5 and P21 in regime 1. However, the risk explanation no longer holds in regime 2. Thus, in a bear market, a positive risk-return relationship holds, while in a bull market, the trade-off between risk and return follows other patterns.
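The expected durations quoted above follow directly from the transition probabilities, since for a Markov chain the expected stay in regime i is 1/(1 − P_ii). The two-line check below uses the rounded transition probabilities reported in the text (an approximation; the exact, unrounded estimates give the 34.05 months quoted above).

for name, p_stay in [("bear (regime 1)", 0.97), ("bull (regime 2)", 0.94)]:
    print(name, round(1.0 / (1.0 - p_stay), 1), "months")   # ~33.3 and ~16.7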
Risk Factor Variations

To better understand how the risk-return relationship deviates, we put the traditional Fama-French three-factor model in the framework of Markov regime switches. We first analyzed how the risk factors vary as the regime switches. Figure 3 plots the time-series variations of the risk premiums on MKT, SMB, and HML. The red and green areas denote the smoothed probabilities of a bear market (regime 1) and a bull market (regime 2), respectively. It is shown that a bear market dominated most of the observation period. According to the estimates in Table 3, if the current state is a bear market, the probability of staying in the current state is as high as 0.95. Further, since the probability of transitioning from bear to bull is 5%, lower than the probability of transitioning from bull to bear (11%), a bear market is more likely, and its expected duration is as long as 18.89 months, almost twice that of a bull market.

Table 3 provides the estimates of the parameters in the MR-FF3 model. Aligning with the findings in Chen (2017), in a bear market (regime 1), the risk premiums on SMB and HML were slightly higher. This is because in a bear market, when the market return is low and business is in a downturn, investors ask for higher compensation for size-related risk and value-related risk. According to Cochrane (2005), during bad times, when investors value a little bit of extra wealth, "good" stocks that pay off well are wanted by investors and get a higher price. Other stocks that cannot provide good payoffs in these times are "bad" stocks and will have a lower price with a higher expected return. The return dispersions between "bad" and "good" stocks are then expected to expand. Therefore, the return dispersions between "small" and "big" stocks and between "high B/M" and "low B/M" stocks are expected to expand, and thus the risk premiums on SMB and HML are higher in a bear market.
However, in a bull market, when the market return is high, investors ask for less compensation. This is consistent with the expectation that during a bull market, the share of risk-loving investors, who ask for a lower price of risk per unit of risk, increases. Thus, in regime 1, as the risk premiums on SMB and HML are higher, the risk-aversion level in the market is also higher, whereas in regime 2, following the same logic, the risk-aversion level in the market is lower.

Risk Loading Variations

Adopting the same approach as in Section 3.1.2, we allow the risk loadings of the MR-FF3 model to vary across regimes. Figures 4-6 plot the risk loadings on MKT, SMB, and HML in the MR-FF3 model for the 25 characterized portfolios, respectively.
In either figure, the subfigures plot estimates for regime 1 and regime 2, respectively. In regime 1, risk loadings on SMB and HML show the typical patterns: betas on the SMB factor increased as the size of each portfolio decreased from big to small, so return dispersions between big and small stocks were captured by the SMB factor. Meanwhile, betas on the HML factor increased as the B/M value of each portfolio increased, implying that return dispersions between low and high B/M stocks were captured by the HML factor. Therefore, it is in regime 1 that a three-factor asset-pricing model can explain the expected returns on stocks. However, in regime 2, beta loadings had a reversed pattern, in that big-size and low-B/M portfolios had higher risk loadings. Thus, in a bull market, investing in such big and low B/M stocks may entail higher risk, and a positive risk-return relationship no longer holds.

Robustness Test Using a Hedging Portfolio

To compare the performance of unconditional factor models and regime-dependent factor models, we further conducted a time-series regression of the excess returns of a hedging portfolio on the risk factors. The hedging portfolio is a zero-cost portfolio with a long position in the 5th portfolio of the FF25 portfolios and a short position in the 21st portfolio, denoted the "5-21" portfolio. Because the 5th portfolio has the smallest size and the highest B/M value, while the 21st portfolio has the biggest size and the lowest B/M value, return spreads between the two portfolios should be the largest and related to the firm characteristics of size and B/M. Table 4 reports the estimates of exposures to risk factors for each of the four models: the unconditional CAPM, the MR-CAPM, the unconditional FF3 model, and the MR-FF3 model. As shown in Panel A of Table 4, the unconditional CAPM fails to explain the abnormal return of the hedging portfolio, because there is a significant intercept and an insignificant market beta. However, when the model is conditioned on regimes, we found that the CAPM had an asymmetric pattern under the two regimes, with a significant market beta in regime 1. The Fama-French three-factor model was also improved by adjusting for Markov regime switches. The MR-FF3 model explained more of the otherwise unexplained returns and depicted a regime-dependent risk-exposure pattern. Statistics in Panel B imply that the MR-FF3 model had the smallest pricing errors and the best performance under the different criteria.
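The unconditional time-series regressions in Table 4 are straightforward to reproduce. The sketch below is only an illustration under stated assumptions, not the authors' code: it assumes monthly excess returns of the "5-21" portfolio and the three factor returns are available in a pandas DataFrame with the placeholder column names ret_5_21, MKT, SMB, and HML, and it reports the intercept (pricing error), factor loadings, and information criteria of the unconditional FF3 regression.

import pandas as pd
import statsmodels.api as sm

def ff3_time_series_regression(df: pd.DataFrame) -> dict:
    """Regress '5-21' hedging-portfolio excess returns on MKT, SMB, and HML.

    df is assumed to hold one row per month with the placeholder columns
    'ret_5_21', 'MKT', 'SMB', and 'HML'.
    """
    y = df["ret_5_21"]
    X = sm.add_constant(df[["MKT", "SMB", "HML"]])  # adds the intercept (alpha)
    res = sm.OLS(y, X).fit()
    return {
        "alpha": res.params["const"],           # pricing error left unexplained
        "alpha_pvalue": res.pvalues["const"],
        "loadings": res.params.drop("const"),   # betas on MKT, SMB, HML
        "SSR": res.ssr,
        "AIC": res.aic,
        "BIC": res.bic,
    }

A regime-dependent counterpart could, for example, be estimated with statsmodels' MarkovRegression with the factors as exogenous regressors, before comparing AIC, BIC, and HQC across the four specifications.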
Out-of-Sample Analysis on MR-FF3 Model

Based on the MR-FF3 model developed in Section 3.2, we conducted an out-of-sample analysis to examine its predictability. For each of the 25 portfolios, the estimation window ranged from July 1995 to March 2015, while the forecasting period ranged from April 2015 to December 2017. The MR-FF3 model was first estimated in expanding-window regressions, and then the Markov regime transition matrix and conditional mean parameters were estimated under a root mean squared prediction error (RMSPE) criterion. E_t(r_{t+1}) is the one-step-ahead expected return under regime switching, which is a weighted average of the returns in regime 1 and regime 2, where the weights are given by the transition probabilities conditional on the prevailing state at time t:

E_t(r_{t+1}) = P(S_{t+1} = 1 | S_t) * mu_{1,t+1} + P(S_{t+1} = 2 | S_t) * mu_{2,t+1},

where mu_{1,t+1} and mu_{2,t+1} are the mean forecasts for each state. Here, in the MR-FF3 model,

mu_{j,t+1} = alpha_j + beta_j MKT_{t+1} + s_j SMB_{t+1} + h_j HML_{t+1}, j = 1, 2,

where (beta_j, s_j, h_j) is the coefficient vector that depends on the state, and (MKT_{t+1}, SMB_{t+1}, HML_{t+1}) is the realized factor return vector at t + 1, which is the independent data used to predict the expected return. The weight vector (P(S_{t+1} = 1 | S_t), P(S_{t+1} = 2 | S_t))' is calculated as Pi * pi_t, where

Pi = [ P_11, 1 - P_22 ; 1 - P_11, P_22 ]

is the transition probability matrix and pi_t = (pi_1, 1 - pi_1)' is the filtered probability vector at t. Therefore, E_t(r_{t+1}) = (mu_{1,t+1}, mu_{2,t+1}) Pi pi_t. Based on the calculations above, we can estimate the one-step-ahead forecasts.

Since the estimation window incorporates all the previous information, it is an expanding-window estimation. The forecasting horizon is one step, from t to t + 1, stepping over one month. Figure 7 illustrates the methodology adopted in this section.

The out-of-sample analysis was conducted for each of the 25 Fama-French portfolios. Figure 8 plots the forecasting returns and actual returns for portfolios 1, 5, 21, and 25, respectively. Table 5 further reports the performance of the MR-FF3 model in in-sample and out-of-sample fitting. It was noticed that, unusually, the in-sample RMSE was larger than the out-of-sample RMSE, for the reason that idiosyncratic variance declined over time. Furthermore, we calculated the arithmetic average of the 25 portfolio estimated returns at each time point during the forecasting window. Figure 9 shows the average forecasting returns versus the true values. It was shown that the differences between forecasting and true values were close to zero.
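The forecasting recursion above is simple to express in code. The following sketch is illustrative only and is not the authors' implementation: it assumes the state-dependent intercepts and factor loadings, the transition probabilities, and the filtered probability of regime 1 at time t have already been estimated, and it combines them with the realized factor returns at t + 1 to produce the regime-weighted forecast and an RMSPE over a forecasting window.

import numpy as np

def one_step_forecast(alpha, beta, factors_next, P11, P22, pi1_t):
    """Regime-weighted one-step-ahead return forecast.

    alpha        : length-2 array of state intercepts (regime 1, regime 2)
    beta         : 2x3 array of loadings on (MKT, SMB, HML) per regime
    factors_next : length-3 array of realized factor returns at t+1
    P11, P22     : probabilities of staying in regime 1 / regime 2
    pi1_t        : filtered probability of regime 1 at time t
    """
    # State-dependent mean forecasts mu_{j,t+1}
    mu = alpha + beta @ factors_next                    # shape (2,)

    # Transition matrix with Pi[i, j] = P(S_{t+1} = i | S_t = j)
    Pi = np.array([[P11, 1.0 - P22],
                   [1.0 - P11, P22]])
    pi_t = np.array([pi1_t, 1.0 - pi1_t])               # filtered probabilities at t

    weights = Pi @ pi_t                                  # P(S_{t+1} = 1), P(S_{t+1} = 2)
    return float(mu @ weights)

def rmspe(forecasts, actuals):
    """Root mean squared prediction error over the forecasting window."""
    err = np.asarray(forecasts) - np.asarray(actuals)
    return float(np.sqrt(np.mean(err ** 2)))

# Example call with the transition probabilities reported in Table 3 and
# purely hypothetical coefficients and factor realizations:
# f = one_step_forecast(alpha=np.array([0.002, 0.010]),
#                       beta=np.array([[1.1, 0.4, 0.3], [0.9, -0.2, 0.1]]),
#                       factors_next=np.array([0.010, 0.002, -0.001]),
#                       P11=0.95, P22=0.89, pi1_t=0.8)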
Discussion

In this study, we found that there were two significant regimes (i.e., bear and bull) in the Chinese stock market. It was shown that the bear market dominated most of the sample period, aligning with the fact that the Chinese stock market is still an emerging market and has been facing problems caused by dual economic characteristics and restrictions by government policies. On the other hand, the so-called bull market was characterized by an increasing share of risk-takers (risk-lovers) in the market, generating lower risk premiums on the common risk factors (i.e., SMB and HML).

To understand how asset-pricing models perform under different regimes, we first adjusted the CAPM by introducing the two regimes. It was found that in regime 1 a positive risk-return relationship was persistent, with a significantly positive market beta. However, in a bull market, where negative betas occurred, the trade-off between risk and return followed other patterns.

Furthermore, we proposed an MR-FF3 model to investigate deviations in the risk-return trade-off. We observed risk-factor variations and risk-loading variations across the two regimes. It was found that in regime 1 the factor excess returns were relatively higher, consistent with the hypothesis that a bear market has a higher risk-aversion level. Moreover, investigations of the beta process implied that return dispersions among the characterized portfolios came from the risk-loading patterns. Specifically, portfolios that have higher returns bear greater exposure to the common risk factors (i.e., SMB and HML). Thus, a multi-factor asset-pricing model worked well in regime 1. However, in regime 2, beta loadings had a reversed pattern, where portfolios characterized by a big size and a low B/M had higher risk loadings. Meanwhile, since the price of risk in regime 1 was higher than that in regime 2, the risk-return relationship no longer held in a bull market. Moreover, we conducted an out-of-sample analysis of the Fama-French three-factor model under Markov regime switches. It was shown that the MR-FF3 model performed well in one-step-ahead forecasting.

To sum up, for an investor in the Chinese stock market, variations in risk factors dominated the changes from regime 1 to regime 2. Though investors do not know the exact current state, they can infer the market state from information available in newspapers and government policies, and especially from the estimates of excess returns on the common risk factors. If the market is in regime 1, investment strategies based on a three-factor model may be helpful. However, if it is in a bull market, when the multi-factor asset-pricing models deviate, investing in big stocks and low B/M stocks, which were originally regarded as "good" stocks, may be much riskier, because risk loadings on them are higher during this period.
Figure 1. Market excess returns and smoothed regime probabilities in a Markov Regime Switching Capital Asset Pricing Model (MR-CAPM), and GDP Growth. Notes: The figure shows the conditional mean of market return (blue line) and the smoothed probability of being either a bear market (red area) or a bull market (green area), along with the variations in GDP growth rate and macroeconomic events in the period between July 1995 and March 2015.

Figure 1 plots the conditional mean of market return and the smoothed probabilities of regimes 1 and 2 in the sample period. The red area refers to the smoothed probability of regime 1, while the green area refers to that of regime 2. At any time point, the probabilities of regimes 1 and 2 sum to one. Regime 1 dominated most of the sample period, indicating that the Chinese stock market has been a bear market for most of the time.

Figure 2. Market Betas in MR-CAPM for 25 Size-B/M Portfolios. Notes: The three-dimensional space shows the market betas in the two-regime MR-CAPM for the 25 Size-B/M portfolios. The left group plots estimates of regime 1 and the right subfigure plots those of regime 2. The vertical axis denotes risk betas. The horizontal space denotes the 25 portfolios. The horizontal axis denotes the B/M value magnitude. From left to right, the B/M value of portfolios increases. The depth axis refers to size magnitude. As depth increases, the size of portfolios increases. The portfolios are marked with numbers from 1 to 25.

Figure 3. Risk premium series and smoothed regime probabilities in MR-CAPM. Notes: The figure shows the conditional means of risk premiums and the smoothed probability of being either a bear market (red area) or a bull market (green area). The observation period is from July 1995 to March 2015.

Figure 4. Market betas in MR-FF3 for 25 size-B/M portfolios. Notes: The three-dimensional space shows the market betas in the two-regime MR-FF3 for the 25 size-B/M portfolios. The left group plots estimates of regime 1 and the right subfigure plots those of regime 2.
The vertical axis denotes risk betas. The horizontal space denotes the 25 portfolios. The horizontal axis denotes the B/M value magnitude. From left to right, the B/M value of portfolios increases. The depth axis refers to size magnitude. As depth increases, the size of portfolios increases.

Figure 5. SMB betas in MR-FF3 for 25 size-B/M portfolios. Notes: The three-dimensional space shows the SMB betas in the two-regime MR-FF3 for the 25 size-B/M portfolios. The right group plots estimates of regime 1 and the left subfigure plots those of regime 2. The vertical axis denotes risk betas. The horizontal space denotes the 25 portfolios. The horizontal axis denotes the B/M value magnitude. From right to left, the B/M value of portfolios increases. The depth axis refers to size magnitude. As depth increases, the size of portfolios decreases.

Figure 6. HML betas in MR-FF3 for 25 size-B/M portfolios. Notes: The three-dimensional space shows the HML betas in the two-regime MR-FF3 for the 25 size-B/M portfolios. The right group plots estimates of regime 1 and the left subfigure plots those of regime 2. The vertical axis denotes risk betas. The horizontal space denotes the 25 portfolios. The horizontal axis denotes the B/M value magnitude. From right to left, the B/M value of portfolios increases. The depth axis refers to size magnitude. As depth increases, the size of portfolios decreases.

Figure 7. One-Step-Ahead forecasting of an MR-FF3 model. Notes: The figure illustrates the expanding estimation window and the forecasting window of an out-of-sample analysis for an MR-FF3 model.

Figure 8. Out-of-sample forecasting return vs. actual return for each portfolio. Notes: The figure depicts the one-step-ahead, out-of-sample forecasting returns (red line) and actual monthly returns (blue line), during the period from April 2015 to December 2017 for the 1st, 5th, 21st, and 25th of the Fama-French 25 portfolios (P01, P05, P21, P25).
Figure 9. Average out-of-sample forecasting return vs. actual return. Notes: The figure depicts the one-step-ahead, out-of-sample forecasting returns (denoted by Avg Forecast), actual monthly returns (denoted by AvgTrue), and their differences (denoted by Diff), during the period from April 2015 to December 2017 for the Fama-French 25 portfolios.

Table 1. Formation of common risk factors.

Table 2. Parameter estimates of the MR-CAPM. Note: This table reports the estimates of parameters in the two-state MRS model developed in Section 2.2.2. The p-values are reported in the parentheses under the corresponding mean. Π is the state transition matrix, which reports the probability of switching from one state to another. The sample period is from July 1995 to March 2015, on a monthly basis.

Table 3. Parameter estimates of the MR-FF3 model. Note: This table reports the estimates of parameters in the two-state MRS model developed in Section 2.2.3. The p-values are reported in the parentheses under the corresponding mean. Π is the state transition matrix, which reports the probability of switching from one state to the other. The sample period is from July 1995 to March 2015, on a monthly basis.

Table 4. Time-series regressions of "5-21" portfolio returns on risk factors. Note: The sample period is from July 1995 to March 2015, on a monthly basis. p-values are reported in the parentheses under the corresponding estimates. SSR is the sum of squared residuals. AIC refers to the Akaike information criterion, BIC refers to the Schwarz criterion, and HQC denotes the Hannan-Quinn information criterion.

Table 5. Performance of the MR-FF3 in in-sample and out-of-sample fitting. Note: This table reports the performance of the MR-FF3 model in in-sample (July 1995-March 2015) and out-of-sample (April 2015-December 2017) fitting. Results for portfolios 1, 5, 21, and 25 of the FF25 portfolios and for a "5-21" hedging portfolio are reported, on a monthly basis. RMSE is the Root Mean Squared Error.
9,577.4
2018-05-20T00:00:00.000
[ "Economics" ]
Geographic distribution and ecological niche of plague in sub-Saharan Africa Background Plague is a rapidly progressing, serious illness in humans that is likely to be fatal if not treated. It remains a public health threat, especially in sub-Saharan Africa. In spite of plague's highly focal nature, a thorough ecological understanding of the general distribution pattern of plague across sub-Saharan Africa has not been established to date. In this study, we used human plague data from sub-Saharan Africa for 1970–2007 in an ecological niche modeling framework to explore the potential geographic distribution of plague and its ecological requirements across Africa. Results We predict a broad potential distributional area of plague occurrences across sub-Saharan Africa. General tests of model's transferability suggest that our model can anticipate the potential distribution of plague occurrences in Madagascar and northern Africa. However, generality and predictive ability tests using regional subsets of occurrence points demonstrate the models to be unable to predict independent occurrence points outside the training region accurately. Visualizations show plague to occur in diverse landscapes under wide ranges of environmental conditions. Conclusion We conclude that the typical focality of plague, observed in sub-Saharan Africa, is not related to fragmented and insular environmental conditions manifested at a coarse continental scale. However, our approach provides a foundation for testing hypotheses concerning focal distribution areas of plague and their links with historical and environmental factors. Background Plague is a rapidly progressing, serious illness in humans that is likely to be fatal if not treated [1,2]. It remains a public health threat in many parts of the world, but particularly in sub-Saharan Africa [3]. Plague is endemic to countries across Africa; however, most human cases are currently being reported from East Africa and Madagascar [3], with > 10 000 cases during the last decade (WHO plague archives, unpublished). Plague is a zoonotic disease caused by the bacillus Yersinia pestis; plague bacteria circulate mainly in rodent hosts and are transmitted between them and to other mammals via adult fleas, and predation or cannibalism, but potentially also by contaminated soil [1,[4][5][6][7][8]. The disease is enzootic in a variety of wild rodent species and in diverse habitats [2]. In Africa, plague cases generally occur in seasonal pulses, and show a geographically clearly disjunct distribution in circumscribed foci that are assumed to be correlated with distributions of dominant vectors and rodent reservoirs and their ecology [9]. Many such foci have been identified, and the current observed distribution of human plague appears to coincide with the natural foci; however, a recent World Health Organization report concluded that it is unlikely that all foci have been discovered [7]. In spite of plague's highly focal nature, the ecological understanding of the general distribution pattern of plague across Africa has not been established to date. Studies of plague in Africa have generally focused at micro-scales, examining host-vector-parasite systems and human social activity patterns within single plague foci [10][11][12][13][14]. 
These studies can help in identifying the hosts/vectors involved and in evaluating human risk behavior in a particular region, but have been unable to elucidate ecological factors shaping the general pattern of plague's geographic distribution at broader scales. Alternatively, macro-scale studies have been performed to examine the distributional patterns of plague, and potential links with environmental conditions, but not in Africa. In the United States, spatial patterns in plague transmission were evaluated in view of changing climates, with the conclusion that observed temporal patterns in plague distributions are consistent with changing climates [15]. In a recent study in the US, the geographic distributions of 13 flea species - potential plague vectors - were predicted and explored [16]. Still in the US, positive relationships were established between human plague incidence and winter-spring rainfall and elevation [17,18]. In this study, we aim to test the potential of using coarse-resolution environmental factors to predict the geographic distribution of plague across sub-Saharan Africa. Given the observed focal nature of plague, we suspect that environmental factors play a role in the complex plague cycle and so may explain - at least partly - the details of its spatial distribution. To this end, we develop ecological niche models (ENMs) using human plague case occurrence data to explore the potential geographic distribution of plague and its ecological requirements across sub-Saharan Africa. This approach provides a foundation for testing hypotheses concerning focal distributional areas of plague and links with environmental variables. If ecological factors affect the distribution of plague, and these factors can be identified, models can be developed to predict distributions of yet unknown plague foci.

Results

The plague occurrence data set consists of 45 unique locations from central (Democratic Republic of the Congo and Uganda), eastern (Tanzania, Malawi, and Mozambique), and southern Africa (Botswana, Lesotho, Namibia, South Africa, Zambia, and Zimbabwe; Figure 1). Using the full data set, the overall ENM predicts a broad potential geographic distribution of plague in Africa (Figure 1). The main regions predicted as suitable are south of the Sahara Desert, specifically around Lake Victoria and in the central-southern part of the continent. Several areas where plague has never been reported are also predicted, e.g. regions in Ethiopia, Nigeria, and the Central African Republic. Large areas of sub-Saharan Africa appearing unsuitable for plague are the desert areas (e.g. the Kalahari Desert in southern Africa) and the wettest areas, most notably the Congo Basin in the Democratic Republic of the Congo and the coastal regions in eastern and western Africa. Projecting the final model outside the training area (sub-Saharan Africa; see Figure 1), a large part of Madagascar and small areas along the coast of northern Africa are predicted as suitable for plague. Testing the overall model's transferability using independent occurrence data from Madagascar and northern Africa (not used in model development) indicates that our model has significantly higher agreement between the niche projection and the independent test occurrence data than is expected by chance (both binomial tests, P < 0.001), suggesting that our overall ENM can anticipate the potential distribution of plague occurrences outside the sample area robustly.
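Binomial significance tests of this kind can be reproduced with a few lines of code. The sketch below is a generic illustration rather than the authors' implementation: it assumes that the proportion of the projection area predicted present serves as the null success probability, and it tests whether the number of independent test points falling in predicted-present cells exceeds chance expectation. All numbers in the example call are placeholders, not values from the study.

from scipy.stats import binom

def enm_binomial_test(n_test_points: int, n_correct: int, prop_area_predicted: float) -> float:
    """One-sided cumulative binomial test for ENM validation.

    Under the null hypothesis of no association, each independent test
    point falls in a predicted-present cell with probability equal to the
    proportional area predicted present. The returned p-value is the
    probability of observing at least n_correct successes by chance.
    """
    return float(binom.sf(n_correct - 1, n_test_points, prop_area_predicted))

# Hypothetical example: 29 test points, 24 falling inside the predicted
# area, with 40% of the projection area predicted present.
p_value = enm_binomial_test(n_test_points=29, n_correct=24, prop_area_predicted=0.40)
print(f"P = {p_value:.4f}")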
Relative contributions of the various environmental data sets to the overall niche model of plague occurrence were assessed using a jackknife manipulation. All environmental coverages appeared to contribute to the overall model (Figures 2 and 3). Precipitation of the driest month contributed the least, while elevation, potential evapotranspiration, mean diurnal temperature range, annual rainfall, and December NDVI appeared to be key factors having substantial influence on the model.

To examine the generality and predictive ability of the ENMs, we developed models based on regional subsets of occurrences. In first tests, with 3-region models predicting the spatial distribution of occurrences in the fourth, model predictivity was poor (Figure 4). For example, the ENM based on subsets B, C, and D predicted 0 of 13 A points successfully; the ENM based on A, C, and D predicted 4 of 9 B points; the ENM based on A, B, and D predicted the C points, and the ENM based on A, B, and C predicted the D points, with detailed results summarized in Table 1. The ENM based on A, B, and D was the only model for which predictions of the independent occurrence points were statistically better than random (binomial test, P < 0.02). These results thus show that the ENMs in this study based on regional subsets were generally unable to predict independent occurrence points accurately, and that plague may occur in diverse ecological situations across Africa.

Figure 1. Geographic overview of plague in Africa. Inset: 45 occurrence points, colored differently to indicate the four regional subsets: subset A (dark blue triangles); subset B (light blue squares); subset C (green circles); subset D (pink diamonds). The sub-Saharan region shown in the inset covers the training region used for ENM development.

To visualize sub-Saharan African plague niches in ecological dimensions, ENM predictions were related to conditions across the landscape. To this end, we integrated four ENMs based on individual regional subsets and all environmental layers with the base environmental layers. Figure 5 presents example visualizations in two dimensions (temperature and soil carbon) for regional subsets to illustrate broad trends: the regional ecological niches are included within a broader, composite ecological niche that is discernible in ENMs based on all available occurrences. This result is consistent with the results of our spatially stratified testing procedure, in which ENMs could not generally predict plague presences in regions outside the training area. More generally, predicted areas based on all available occurrence locations coincide more or less with the diversity of conditions across sub-Saharan Africa; only regions where extreme ecological conditions are present were left out (e.g. the Kalahari Desert in South Africa). In sum, our plague ENMs suggest that sub-Saharan African plague occurs in ecologically diverse landscapes under wide ranges of environmental conditions.

Figure 2. Relative contributions of single-data environmental data sets. The horizontal bar graph presents a summary of results of single-data coverage model analyses, indicating mean omission percentages (and standard deviations) calculated based on predictions of 10 best-subset models and independent testing points. Note that a positive contribution by the variable is indicated by low values in the single-coverage models.
Discussion

Since 1970, plague has been reported from several African countries, including (in decreasing order of importance) the Democratic Republic of the Congo, Madagascar, Tanzania, Uganda, Angola, Zambia, Mozambique, Zimbabwe, Malawi, South Africa, Lesotho, Botswana, Kenya, Namibia, Algeria, and Libya [7]. Countries presently affected most seriously are the Democratic Republic of the Congo, Madagascar, Tanzania, Uganda, and Mozambique [3]. The overall ENM, which predicted much of sub-Saharan Africa and Madagascar as suitable for plague, seems consistent with these observations: plague has a very broad geographic potential in Africa. It is worth noting that, despite the high number of human plague recordings, only small parts of the Democratic Republic of the Congo were predicted; but indeed, the highly infected northeastern region where plague mostly occurred, i.e. the Ituri district, was accurately predicted by our models.

Figure 3. Relative contributions of N-1 environmental data sets.

Figure 4. Potential plague distribution predictions based on regional subsets. Predictions were based on all environmental coverages and 3 regional subsets of occurrence points in sub-Saharan Africa.

The statistically significant prediction of independent occurrence data from Madagascar and northern Africa further supports this broad geographic potential [19,20]. Nonetheless, in spite of statistical significance, we observed some predictive failures as well: areas in the Democratic Republic of the Congo with historical plague occurrences were predicted only at low levels, while some plague-positive areas in Libya were not modeled as suitable. The broad modeled potential distribution, however, contrasts with the focal nature of plague in Africa: generally, active plague foci are circumscribed, down to areas of just a few hundred square kilometers [21]. We suspected ecological variables such as climate, topography, and land cover to play a role in the distribution of plague in Africa - Soviet scientists were the first to bring attention to the relationship between landscapes and the distribution and occurrence of diseases such as plague [1]. Land cover might influence rodent and flea diversity or densities, and climate dynamics have been shown to influence the abundance of mammals and fleas in the United States and Vietnam, affecting human plague incidence [17,22]. Additionally, recent studies have established relationships between elevation and human plague occurrence [14,18]. From our African analyses, however, it becomes clear that plague can persist in various biotopes under diverse ecological conditions. ENMs based on regional subsets of occurrence points suggest that various ecological subniches may correspond to each plague focus, within a broad niche in which plague seems able to persist. Hence, on this coarse spatial scale, plague occurrences are only predictable in a very general sense. More particularly, persistence of plague in biotopes ranging from dry lowlands to wet highland areas restricts the possibilities of the ENM approach in identifying specific sets of conditions. The great diversity of ecological circumstances under which plague persists in Africa might be explained by plague ecology being diverse relative to host ecology. Various rodent species are apparently involved in the ecology and epidemiology of plague, and the reservoir(s) in many African plague foci remain(s) unknown [1,6].
It has been suggested that plague survives through a complex suite of rodent species in some areas (e.g., Mastomys natalensis, Arvicanthis abysinnicus, Lemniscomys striatus, Mus minutoides, and Rattus rattus in the Ituri focus in the Democratic Republic of the Congo), while in others, one species is the main reservoir (e.g., Rattus rattus in parts of Madagascar [23]). A question that arises is what makes a plague focus exist. In our analyses, all environmental variables appear to contribute positively to the overall ENMs, with elevation, potential evapotranspiration, mean diurnal temperature range, annual rainfall, and December NDVI as the most important factors, suggesting that different environmental conditions are likely to influence plague distribution. Yet, uncertainty still exists as to whether absence of plague in rodent populations near an active plague focus is the result of a deterministic process (i.e., the host is present but the conditions are not conducive for plague transmission), or whether it is the result of historical stochasticity (i.e., by chance, plague did not establish in these wild rodent populations or never even reached them). Put another way, the focal plague distribution might result from plague being present only where it was introduced and established locally but without subsequent broad spread from the point of introduction (introduction-driven but dispersal-limited). Alternatively, its spatial distribution may prove broader than observations have suggested, but plague has not been detected or reported in spite of local presence (detection-driven). Apparent lack of plague in an area might also be due to the fact that the region lacks a suitable bridging vector (flea species) to result in transmission to humans (biotically limited). Certainly, involvement of several species (one or more reservoirs, vectors, incidental hosts, and the pathogen itself) results in a complex system difficult to comprehend. Niche modeling tools may prove valuable, for example, in identifying likely candidate species participating in the transmission cycle [24][25][26]. Hence, further research using the same tools, but on finer scales, focusing within a single endemic plague region and its surroundings, is needed to explore the mechanisms of a plague focus in greater depth.

Table 1. For every regional-subset prediction, the table presents the proportional area predicted present, the total number of test points, the number of test points correctly predicted, and the P-value. Presence was defined as when 5 of 10 of the replicate models predicted presence.

Figure 5. Visualizations of plague ecological niches.

It is important to emphasize that our models are based on incidences of human plague derived from historical reports. We are aware that models based on human cases have limitations regarding disease occurrences in natural environments [27]. Underestimation of plague's geographic distribution might be possible; however, since no occurrence data exist for plague in African animals, human records were our only resource. Additionally, available geographic information on occurrences of human plague can be rather coarse in resolution, as plague occurs chiefly in remote areas, often in developing countries.
Given incomplete gazetteers, geographic complexities, changing toponyms, or deficient information, we could only assign geographic coordinates to 45 spatially unique occurrence locations, so the occurrence data are not extensive and may not detect phenomena occurring at scales finer than the 5-10 km error margin in georeferencing. Also, since plague occurrences were considered only once, with no weighting to account for multiple cases occurring at single locations, our models do not distinguish epidemic occurrences from ongoing transmission. Finally, it must be remarked that, although ENMs can be developed using relatively small samples of occurrence points [28], sample sizes from the regional subsets of occurrence locations in this study approached the minimum, such that single data points could change the overall results. However, we developed these ENMs to visualize the ecological distribution of plague in sub-Saharan Africa in ecological space, and not so much to produce accurate predictive maps of plague.

Conclusion

In this study, an ENM approach was applied to the distribution of plague in sub-Saharan Africa, to identify ecological factors related to the occurrence of the disease. Our main conclusion is that plague in Africa persists in ecologically diverse biotopes, which implies that the typical focality of plague, which is observed here, is not related to fragmented or insular environmental conditions manifested at this coarse scale. Although our overall ENMs may not predict real incidences of human plague accurately, they do outline overall geographic potential, and finer-scale analyses are under development (Neerinckx et al. in preparation).

Human plague occurrence data

Locations of known plague occurrences in endemic regions of sub-Saharan Africa were compiled through an extensive literature search. We first searched the international databases PubMed [29] and Web of Science [30]. Information was derived from scientific publications, supplemented with data from other literature and web-accessible documents. Furthermore, local plague experts, ministries of health in endemic African countries, and the World Health Organization were contacted for additional plague occurrence information. For this study, an occurrence point was defined as a site from which a human bubonic plague case or outbreak of local origin was detected and reported. We assigned geographic coordinates to occurrence locations using world gazetteer databases [31-33] and hardcopy maps. In all, 45 locations from sub-Saharan Africa were georeferenced with a spatial precision of 5-10 km (~0.05-0.1°). More occurrence locations were available, but coordinates could not be assigned for lack of detailed geographic information.

Environmental data

Environmental data sets (25 'coverages') for ENM development were drawn from five sources. (1) Climatic data layers in the form of seven 'bioclimatic variables' (native resolution 1 × 1 km) were drawn from the WorldClim data set [34], summarizing annual mean temperature, mean diurnal temperature range, maximum temperature of the warmest month, minimum temperature of the coldest month, annual precipitation, and precipitation of the wettest and driest months. (2) Topographic data (native resolution 1 × 1 km) summarizing elevation, aspect, slope, and compound topographic index (a measure of tendency to pool water) were obtained from the U.S. Geological Survey Hydro-1K data set [35].
(3) Seven variables describing soils and ecosystems (native resolution ~10 × 10 km), including actual evapotranspiration, potential evapotranspiration, growing degree days, soil organic carbon, soil moisture, average annual relative humidity, and soil pH, were drawn from the Atlas of the Biosphere [36]. (4) One coverage (tree cover percentage) was derived from the Global Land Cover 2000 Project of the European Commission (native resolution 1 × 1 km) [37]. Finally, (5) six monthly maximum NDVI values from the Advanced Very High Resolution Radiometer (AVHRR) sensor for April, June, August, October, and December 1992, and February 1993, were drawn from the University of Maryland Global Land Cover Facility (native resolution 1 × 1 km) [38]. All data layers were projected in geographic coordinates and generalized to a pixel resolution of ~10 × 10 km for analysis.

Ecological niche modeling

Our approach to ENM is based on ecological niches defined as the set of environmental conditions under which a species is able to maintain populations without immigration [39,40]. Known occurrences of species were related to digital GIS data layers summarizing environmental variation to develop a quantitative picture of the ecological distribution of the species [39,41]. We used the Genetic Algorithm for Rule-set Prediction (GARP) for ENM development; GARP uses an evolutionary-computing genetic algorithm that develops a set of conditional rules to relate observed occurrences of species to environmental characteristics across the overall study area [42]. Although early evaluations indicated poor predictive ability by GARP [43], more recent analyses with altered performance tests indicate considerably better performance [44,45]. As such, given that GARP has been the basis for essentially all previous ENM analyses of disease systems, the choice of this package was obvious. All modeling in this study was carried out on a desktop implementation of GARP (DesktopGarp) [46]. Within GARP processing, available occurrence data are subdivided as follows: 50% of occurrence points are set aside for filtering among replicate models based on their error statistics (extrinsic testing data, see below), 25% are used for developing rules (training data), and 25% are used for model refinement internal to GARP (intrinsic testing data). Distributional data are converted to raster layers, and 1250 'pseudoabsence' points are created by random sampling from areas lacking known presences. GARP works in an iterative process of rule selection, evaluation, testing, and incorporation or rejection. Initially, a method is chosen from a set of 4 basic rule types (atomic rules, bioclimatic rules, range rules, and logistic regression), each a different method for predicting presence versus absence across landscapes [42]. Specific operators designed to mimic chromosomal evolution (e.g., crossing-over among rules, point mutations, deletions, etc.) are then used to modify the initial rules. After each modification, the quality of the rule is evaluated (to maximize both significance and predictive accuracy), based on the intrinsic testing data, and a size-limited set of rules is retained. Because rules are tested based on independent data (intrinsic testing data), performance values reflect the expected performance of the rule, an independent verification that gives a more reliable estimate of true rule performance.
The final result is a set of conditional rules that have "evolved" (hence the "genetic" algorithm) for maximum significance and predictive ability; these rules are projected onto the broader landscape to identify a potential geographic distribution for the species [42]. To optimize model performance, we developed 100 replicate models based on independent random subsamples from the available occurrences. A "best subset" of 10 models was chosen on the basis of omission (leaving out true potential distributional areas) and commission (including areas not potentially suitable) error statistics calculated from the extrinsic testing data [47]. Specifically, we used a relative omission threshold, in which the 20% of the models with the lowest omission rates were retained. We then chose the 10 models having intermediate levels of commission, i.e., the central 50% of the commission index distribution among the 20 low-omission models. The 10 models selected by this procedure were summed pixel by pixel in ArcView 3.2 to produce a final prediction.

ENM validation

To test the ability of the ENM algorithm to predict accurately across broad unsampled areas, a "regional jackknife" procedure was used. We divided the overall sub-Saharan Africa data set into four regional subsets to provide a quantitative assessment of transferability (i.e., ability to predict into unsampled areas) of our ecological niche models [44]. Specifically, we focused on spatial stratification (i.e., rather than random splits) of the available occurrence data to avoid problems with spatial autocorrelation and consequent non-independence of training and testing data - we used natural gaps as a first criterion (e.g., separating the area including Congo and Uganda cases from the remaining areas based on an apparent broad disjunction), and then split these disjunct areas further to produce testing regions of approximately even sample sizes (A: 11; B: 9; C: 14; and D: 11). These areas are explicitly arbitrary in nature, but are simply designed to permit testing of the constancy of the niche 'signal' across broad spatial realms. Models were trained based on the available occurrences in three areas, and tested using the distribution of occurrences in the fourth. Predicted presence was defined as areas predicted present by ≥ 5 of 10 of the replicate models. Cumulative binomial probabilities were used to assess the degree to which observed levels of agreement exceeded expectations under the null hypothesis of no association [47], which avoids the low expected frequency limitations of chi-squared testing approaches. To evaluate the potential geographic distribution of plague across Africa, and to test model transferability, we projected an ENM based on all occurrences onto landscapes covering all of Africa and Madagascar. An independent set of occurrence locations (i.e., not used in model training) from Madagascar (29 points) and northern Africa (10 points), obtained in the same way as described above, both with spatial accuracies of 1 km (~0.01°), was used to test the transferability of the final overall model, using methods similar to the binomial testing method described above.
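The best-subsets filtering described earlier in this section lends itself to a compact implementation. The sketch below is an illustration under stated assumptions, not the DesktopGarp code: it assumes the omission and commission statistics of the 100 replicate models have already been computed from the extrinsic testing data, applies the 20% omission threshold, and then keeps the 10 models whose commission values lie closest to the median, which approximates the "central 50% of the commission index distribution" rule.

import numpy as np

def select_best_subset(omission, commission, n_final=10, omission_quantile=0.20):
    """Select a 'best subset' of replicate ENMs.

    omission, commission : per-model error statistics (arrays of length ~100)
    Returns the indices of the retained models.
    """
    omission = np.asarray(omission)
    commission = np.asarray(commission)

    # 1) keep the models in the lowest 20% of the omission distribution
    cutoff = np.quantile(omission, omission_quantile)
    low_omission = np.where(omission <= cutoff)[0]

    # 2) among those, keep the n_final models with commission values closest
    #    to the median (the central part of the commission distribution)
    med = np.median(commission[low_omission])
    order = np.argsort(np.abs(commission[low_omission] - med))
    return low_omission[order[:n_final]]

# Example with simulated error statistics for 100 replicate models:
rng = np.random.default_rng(0)
best = select_best_subset(rng.uniform(0, 30, 100), rng.uniform(10, 90, 100))
print(best)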
To assess empirical contributions of particular environmental dimensions to the final model, the environmental data were manipulated using a jackknife procedure [48]. We developed 25 models, each using a different combination of 24 of the 25 environmental coverages; similarly, each coverage was included systematically in single-coverage analyses to evaluate the explanatory power of each on its own. Inspecting patterns of model performance based on single coverages and on all other coverages in relation to omission error rates (given that commission "error" includes both true error and apparent error resulting from distributional disequilibrium [39]) then provides a sort of sensitivity analysis, in which we assess the contribution of each coverage to model predictivity [48,49].
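The jackknife manipulation of environmental layers can likewise be scripted around any ENM engine. The following sketch is only a schematic illustration: fit_and_score_enm is a hypothetical placeholder for whatever routine trains a model on a chosen set of coverages and returns its mean omission rate on independent test points; the loop then contrasts single-coverage and leave-one-out (N-1) runs to rank variable contributions, in the spirit of Figures 2 and 3.

def rank_coverage_contributions(coverages, fit_and_score_enm):
    """Rank environmental coverages by their contribution to model predictivity.

    coverages         : list of coverage names (e.g., 'elevation', 'annual_rainfall')
    fit_and_score_enm : callable taking a list of coverages and returning the
                        mean omission rate of the resulting model (placeholder).
    """
    results = []
    for cov in coverages:
        only_this = fit_and_score_enm([cov])                                   # single-coverage model
        without_this = fit_and_score_enm([c for c in coverages if c != cov])   # N-1 model
        # Low omission in the single-coverage run, or high omission when the
        # coverage is removed, both indicate a positive contribution.
        results.append((cov, only_this, without_this))

    # Sort so the most informative coverages (lowest single-coverage omission) come first.
    return sorted(results, key=lambda r: r[1])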
5,685.4
2008-10-23T00:00:00.000
[ "Biology" ]
Synthesis of Tetramic Acid Fragments Derived from Vancoresmycin Showing Inhibitory Effects towards S. aureus

Abstract An efficient route to various vancoresmycin-type tetramic acids has been developed. The modular route is based on an effective Fries-type rearrangement to introduce various appending acetyl residues. The minimum inhibitory concentration (MIC) values of the new tetramic acids against Staphylococcus aureus and Escherichia coli were determined, revealing that three of the new compounds exhibit antimicrobial activity against S. aureus. These bioactive compounds were structurally most closely related to the authentic vancoresmycin building block. Additionally, the compounds induced a liaI-lux bioreporter, which responds to cell wall stress induced by antibiotics that interfere with the lipid II biosynthesis cycle. These data suggest the tetramic acid moiety to be a part of the vancoresmycin pharmacophore.

Introduction

Vancoresmycin (1, Figure 1) is a structurally unique linear metabolite that is characterized by a functionalized acetyl tetramic acid connected to an extended polyketide chain with a plethora of hydroxy- and methyl-bearing stereogenic centers; it also contains an attached aminoglycoside moiety. It was first isolated in 2002 from the fermentation broth of the actinomycete Amycolatopsis sp. ST 101170. [1] This polyketide shows extremely potent MIC values ranging from 0.125 to 2 μg/mL against a variety of pathogens, including multiresistant Gram-positive bacteria. In 2017, the mode of action was investigated, and it was proposed that vancoresmycin is involved in a non-pore-forming, concentration-dependent depolarization of bacterial membranes. [2] Additionally, a stereochemical assignment based on domain analysis of the ketoreductase domains was proposed for most of the configurations, but has not yet been proven. [2] As part of our studies on potent polyketide antibiotics bearing extended polyene segments, [3][4][5][6][7] we became interested in developing a synthetic route to the vancoresmycin-type acetylated tetramic acids bearing characteristic labile diene segments and in evaluating their antibacterial properties. In general, tetramic acids, including their 3-acyl derivatives found in a variety of terrestrial and marine species, have been studied and reviewed previously. [8,9] A range of bioactivities like antibacterial, [10] antiviral, [11] and antitumoral [12] potencies has been reported for these compounds. Their intriguing structures enable complex formation with various metal ions like Mg2+, Fe2+, Zn2+ and Cu2+, [13] and in some cases this type of chelation has been shown to be essential for the biological activity. [14] Furthermore, these compounds are characterized by extended tautomerism. As shown in Scheme 1, 3-acyltetramic acids may occur in solution in four forms (a, b, c, d), which may be classified as a pair of "external" (ab/cd) and a pair of "internal" (a/b; c/d) tautomers. Due to rotation about the C-C bond of the 3-acyl group, the interconversion between the external isomers is a rather slow process on the NMR timescale, resulting in separate NMR signals in nonpolar solvents (e.g. CD2Cl2). [15,16] Complexation as well as tautomerism render tetramic acids fascinating heterocycles but also complicate synthesis and analysis. [8,9]
Results and Discussion

As shown in Scheme 2, our successful route to vancoresmycin-type tetramic acids started with introduction of the 5-substituent to commercially available 4-methoxy-3-pyrrolin-2-one (2). As previously reported, isobutyraldehyde was added under basic conditions to give compound 3 in high yield. [17] After methylation of the ring nitrogen with iodomethane, the methoxy group was cleaved under acidic conditions, yielding compound 5 in 79 % over three steps, without the need of column chromatography. If required, any intermediate may also be purified by column chromatography. For the elaboration of structures 12/13 and 17/18 (Scheme 3), compound 8 was required. For the synthesis of 8, a modification of a reported procedure starting from propanediol (6) was applied. [18] In detail, a Parikh-Doering oxidation was used instead of a Swern reaction, and a different Horner-Wadsworth-Emmons (HWE) procedure was applied for better E/Z selectivity. Finally, saponification gave the desired 8, which was used for coupling with compound 5. Various esters at the 4-position of compound 5 are easily accessible using acid chlorides or esterification reactions like the Steglich protocol using N,N'-dicyclohexylcarbodiimide (DCC) and 4-(dimethylamino)pyridine (DMAP), giving compounds 9-12 (Scheme 3) in good to excellent yields. [15,17] To realize the required oxygen-to-carbon transfer, the novel tetramic acid esters 9-13 were submitted to a known rearrangement using CaCl2 in the presence of DMAP and triethylamine. [19,20] This process efficiently realized the synthesis of the 3-acyl derivatives 14-18 in high yield. Acylation reagents were chosen to be authentic to vancoresmycin and also to allow further modifications like aldol condensation, cross metathesis, or a nucleophilic attack on the ester at the 3-position. The TBS protecting group of compounds 12 and 17 can be cleaved either at the ester stage to furnish 13 or at the final stage to give 18. Notably, preparation of compound 18 represents the successful synthesis of a vancoresmycin fragment, which is also the largest part of the natural product that bears no chiral center.

In CD2Cl2, tautomerism was observed for all tetramic acids 14-18, resulting in double data sets in the NMR measurements and also in broadening of the signals. For compound 15, even more data sets could be observed in the NMR, due to the extended polyene system emerging from the additional conjugated double bond. On the other hand, compound 16 showed only traces of a second data set, which indicates a stabilization of the double-bond system by the introduced ester group. Several trials to protect and trap one of the tautomers of 14 as a silyl ether failed. Therefore, protection of 16 was evaluated, as this representative is characterized by a lower degree of such isomerization. Here, an allylic protection was chosen, as silyl enol ethers would be expected to be more labile. However, likewise the hydroxy group could not be protected in initial experiments. Instead, the allyl group was attached to the 3-position, yielding compound 19 as a racemic mixture, which was proven by HPLC on a chiral phase (Scheme 4). Future investigations of suitable reaction conditions will show whether the alkylation may be tuned with respect to the oxygen-to-carbon regioselectivity.

Figure 1. Vancoresmycin (1) with the proposed stereoinformation. [2]

Scheme 2. Synthesis of compound 5.

Scheme 3. Synthetic route to vancoresmycin-derived 3-acyltetramic acids (14-18).
In total, 14 tetramic acid derivatives, that is, 3-5 and 9-19, have been synthesized, 13 of which have not been reported before. Only compound 3 has been described previously. [17] For these compounds, the MIC values against Gram-positive and Gram-negative bacteria, that is, E. coli K12 and S. aureus SG511, were determined and are summarized in Table 1. As shown in Table 1, some of the synthesized tetramic acids demonstrated inhibitory effects against S. aureus. The most potent compounds were 14, 15 and 18, which are also the most similar to vancoresmycin. Among these compounds, 18 was slightly less potent (MIC: 64 μg/mL) than 15 (MIC: 32 μg/mL), even though 18 is the longest and most authentic compound. A possible reason for this discrepancy may be a lower stability, as evidenced by NMR studies showing that 18 is not stable in DMSO or acetonitrile. Additionally, 18 decomposes when stored over weeks (under argon, at -15 °C). No activities were observed against E. coli, which is in agreement with the reported data for vancoresmycin. [2] To see whether the synthesized compounds interact with the bacterial cell wall machinery, they were further screened in a liaI-lux bioreporter assay, which responds to cell wall stress induced by antibiotics that interfere with the lipid II biosynthesis cycle, again substantiating the antimicrobial effects of the compounds, in line with the MIC values. [21] As shown in Figure 2, compounds 14, 15 and 18 induce the Bacillus subtilis liaI-lux bioreporter, resulting in measurable luminescence and revealing interference with cell wall biosynthesis. A figure including the liaI-lux bioreporter assay results for the additional novel tetramic acids can be found in the Supporting Information.

Conclusion

In summary, we have reported an efficient synthetic route to novel vancoresmycin-type 3-acyl tetramic acids. The successful route uses an aldol condensation to efficiently attach the alkylidene substituent at C-5 and a Fries-type rearrangement to introduce the acetyl residue. Additionally, we determined the MIC values against E. coli and S. aureus, thereby identifying three compounds with antimicrobial activity against the Gram-positive pathogen. Among all tested compounds, the active heterocycles are structurally most closely related to vancoresmycin, thus indicating that this segment is part of the vancoresmycin pharmacophore. These structures are also characterized by high degrees of tautomerism, which might suggest that polyene shifts are important for biological activity. The active tetramic acids also induced the B. subtilis liaI-lux bioreporter, revealing their interference with cell wall biosynthesis.
2,007.6
2020-06-04T00:00:00.000
[ "Chemistry", "Medicine" ]
Therapeutic Effects of Genetically Modified Wharton's Jelly Mesenchymal Stem Cells Expressing Erythropoietin on Breast Cancer-related Anemia in a Mouse Model Cancer-related anemia (CRA) negatively influences cancer patients' survival, disease progression, treatment efficacy, and quality of life (QOL). Current treatments such as iron therapy, red cell transfusion, and erythropoietin-stimulating agents (ESAs) may cause severe adverse effects, including hemolytic transfusion reactions and the possibility of host immunity against rhEPO. Therefore, the development of long-lasting and curative therapies is highly required. A combined cell and gene therapy platform can introduce a new route for permanent production of erythropoietin (EPO) in the body with various degrees of clinical benefit while avoiding the need for repeat treatments. In this study, we developed a cell and gene therapy strategy for in-vivo delivery of EPO cDNA by genetically engineering human Wharton's jelly mesenchymal stem cells (hWJMSCs) to produce and secrete human EPO protein long-term after transplantation into a mouse model of CRA. To evaluate CRA treatment in cancer-free and cancerous conditions, we first designed a recombinant breast cancer cell line 4T1 expressing herpes simplex virus type 1 thymidine kinase (HSV1-TK) using a lentiviral vector encoding HSV1-TK and injected it into mice. After 3 weeks, all mice developed metastatic breast cancer associated with acute anemia. Then, we administered ganciclovir (GCV) for 10 days in half of the mice to clear cancer cells. Meanwhile, we designed another lentiviral vector encoding EPO to transduce hWJMSCs. Following implantation of rhWJMSCs-EPO, whole peripheral blood samples were collected from the tail vein once per week for 10 weeks and immediately analyzed for EPO, hemoglobin (Hb), and hematocrit (Hct) plasma levels. The blood analysis showed that plasma EPO, hemoglobin (Hb), and hematocrit (Hct) concentrations significantly increased and remained at a therapeutic level for >10 weeks in both treatment groups, which indicates that the rhWJMSCs-EPO could improve CRA in both cancer-free and cancerous mouse models. Introduction Cancer-related anemia (CRA), as a common consequence of tumor burden, occurs in more than 30% of cancer patients at the time of diagnosis, a proportion that can reach 90% when patients undergo aggressive chemoradiotherapy [1,2]. The mechanisms contributing to CRA include chemotherapy-induced anemia (CIA), blood loss, iron deficiency, erythropoietin deficiency due to renal disease, and bone marrow involvement with the tumor [3]. Due to CRA's negative effect on survival, disease progression, treatment efficacy, and the patient's quality of life (QOL), developing an effective therapy for CRA is in great demand. Current treatments for CRA include iron therapy, red cell transfusion, and erythropoietin-stimulating agents (ESAs) [4,5]. Erythropoietin (EPO) is a 30.4 kDa glycoprotein hormone primarily produced by the fetal liver and adult kidney, which plays important roles in the body such as erythropoiesis, tissue-protective effects, and immune regulatory effects on immune cells [6].
The administration of recombinant human erythropoietin (rhEPO) as a key regulator in the production of functional red blood cells is approved to treat the anemia of patients with long-lasting diseases such as cancer and renal failure with clinical bene ts in correcting hemoglobin levels and markedly reduced the required number of blood transfusion and so, patients bene t from advantages, such as improved cardiac function, enhanced exercise capacity, and better QOL [7]. However, rhEPO administration may cause some adverse effects, including the possibility of host immunity against rhEPO, frequent self-administration by the patients which may not know the correct injection method and high cost [8]. Gene therapy via designing the plasmid DNA and viral vectors encoding the EPO gene introduced an attractive research area for treating anemia. In preclinical studies, gene therapy for direct delivery of EPO gene into animal models' skeletal muscle has shown a signi cant increase in EPO and erythropoiesis; however, life-threatening polycythemia and host immune responses to viral vectors limit its utilization in the clinic [9,10]. Cell therapy is therapy in which viable cells such as primary, stem or progenitor cells or stem cell derivatives are injected or implanted into a patient [11]. Cell-based therapies were studied as a treatment option in animal and clinical phases since 1990 to address incurable diseases, such as autoimmune, skeletal, cardiovascular, neurological, ophthalmologic, and blood diseases which showed satisfactory safety and e cacy pro les [11][12][13]. Furthermore, the combined cell and gene therapy platform which we propose here is emerging as a potential alternative treatment to the traditional pharmacologic and also direct gene therapy; since it can permanently produce therapeutic proteins in the body with various degrees of clinical bene ts and avoiding the need for repeat treatments [14,15]. An effective cell and gene therapy protocol approach to deliver the EPO have more clinical and economic bene t than the repeated injection of EPO protein. Mesenchymal stem cells (MSCs) as desirable cell carriers can be easily obtained, expanded, and genetically engineered to express and secret therapeutic proteins in-vivo [14,16]. Human Wharton's jelly mesenchymal stem cells (hWJMSCs) are multipotent stem cells that showed the potential to differentiate into mesodermal, ectodermal, and endodermal lineages [17]. hWJMSCs have led to promising outcomes in preclinical and clinical studies due to their limited heterogeneity, ease of their isolation and culture, availability in several tissues, and ability to self-regenerate [18]. Also, compared to adult and fetal stem cells, hWJMSCs show a higher proliferation rate and minimum stimulation of immune and in ammatory systems [19]. So, they have newly emerged as an appropriate therapeutic vehicle for gene therapy and drug delivery. Their therapeutic applications in various disease models, including in ammatory and autoimmune diseases, and cancer are being studied [20][21][22]. However, there are still some challenges that need to be addressed for the successful application of hWJMSCs including age-related telomere shortening at higher passages, morphological changes and loss of their differentiation ability, and rapid death of the transplanted cells [23]. This investigation genetically modi ed hWJMSCs to long-term produce and secret human EPO protein after transplantation into CRA mice. 
We used breast cancer cell line 4T1 to develop mice model of CRA as it causes tumor-associated acute anemia [24]. To evaluate CRA's treatment in cancer-free and cancerous conditions, we genetically altered breast cancer cell line 4T1 to express HSV-TK and inject into the mice. After con rming anemia induced by breast cancer, we eliminate cancer cells by administrating ganciclovir (GCV) in half of the mice. Following implantation of rhWJMSCs-EPO, plasma EPO, hemoglobin (Hb), and hematocrit (Hct) concentration signi cantly increased which indicate that the EPO-transduced hWJMSCs could improve the anemia of cancer in both cancer-free and cancerous mice model and can provide supporting evidence for future studies as a valuable therapeutic tool for the treatment of anemia. (Gibco, USA), penicillin (100 units/ml)/streptomycin (100 mg/ml) (Invitrogen, Carlsbad, CA, USA). Finally, the cells were incubated in a humidi ed atmosphere with 5% CO 2 at 37°C. 2.2. Gene Design. The coding sequences (CDS) of EPO (Accession#: XM_001468996.1) were retrieved from GeneBank, NCBI. The sequences were synthesized by Genscript, USA, and were incorporated into pUC57 plasmid. In this experimental study, a dual promoter lentiviral vector, pCDH-513B, was purchased from System Bio, USA. The rst promoter is the cytomegalovirus (CMV) promoter with a downstream multiple cloning site (MCS) used for gene cloning. The second promoter is the EF1a promoter which regulates the expression of CopA-GFP (copepod green uorescent protein) (cGFP) and puromycin resistance genes. 2.3. hWJMSCs Isolation and Characterization. Human healthy umbilical cord (UCs) Wharton's jelly tissue (n= 1) was collected from full-term newborn in Vali-e-Asr Central Hospital and processed after obtaining the mother's informed consent and approved by the Ethics Committee of Fasa University of Medical Sciences (IR.FUMS.REC.1397.177). It was washed in phosphate-buffered saline (PBS) to eliminate the blood clots, disinfected, and cut into 1-2 mm lengths, and then were expanded on culture dishes with a minimum of Dulbecco's Modi ed Eagle Medium containing Nutrient Mixture F-12 (DMEM/F12) medium (Gibco, USA), supplemented with 30-40% FBS. Then, it was incubated at 37°C with 5% CO 2 which routinely monitored. After one week, solid umbilical cord pieces were removed, and cell migration was evaluated under the invert light microscope. Upon reaching 90% cell con uence, in the second changing medium, the adherent cells detached by the addition of 0.25% Trypsin-Ethylene Diamine and re-plated, usually at 4_6 days' intervals [25]. For adipogenic, osteogenic, and chondrogenic differentiation, a six-well plate was cultured with 1× 10 5 cells per well. In the third passage, after reaching a 40-45% cell con uence, an adipogenic differentiation medium (Gibco, USA), osteogenesis supplement (Gibco, USA), and chondrogenic supplement (Gibco, USA) were added to the basic medium, respectively. After about 20 days, lipid droplets were visualized using Oil Red O (Sigma-Aldrich, USA) staining. Successful osteogenic differentiation was veri ed by Alizarin Red (Sigma-Aldrich, USA) staining. Safranin-O (Sigma-Aldrich, USA) staining was used to determine the presence of proteoglycans (PGs). In the control group, hWJMSCs were grown in the culture medium without adipogenic, osteogenesis, and chondrogenic supplement. Passage 3 WJMSCs were used in all experiments. 
A small number of undifferentiated hWJMSCs (passage 3, 10^5 cells) were analyzed using BD FACSCalibur flow cytometry (BD Bioscience, USA) to evaluate the surface markers expressed by hWJMSCs. After adding specific antibodies at the recommended concentrations, the tubes were incubated in the dark at room temperature for 30-60 minutes. Then, flow cytometry analysis was performed to study two positive markers (CD105 and CD90) and two negative markers (CD45 and CD34), and data were analyzed using FlowJo (version 7.6.1) software. Following the Trono lab protocol, CaPO4 transfection of HEK293T cells was performed with some modifications using the following amounts of DNA: 21 μg transfer/control vector, 10.5 μg pMD2.G vector, 15 μg pMDLg/pRRE vector, and 13 μg pRSV-Rev vector, all of which were dissolved in HEPES-buffered water to reach 921 μl. Then, 33 μl Tris-EDTA (TE) buffer was added, and the mixture was mixed vigorously and kept at room temperature for 3 minutes. Then, 105 μl of 2.5 M CaCl2 was added, and the mixture was vortexed vigorously and left for 3 min to allow the DNA-CaCl2 interaction. Then, 1050 μl of 2X HEPES was added while the mixture was being vortexed. The HEK293T cell (2×10^6 cells) medium was changed 2 h before transfection. Early-passage HEK293T cells (passages under 15) at 80 % confluency were co-transfected with the plasmids such that 2100 μl of transfection master mix was added per 10 cm plate of HEK293T cells as droplets over all areas of the plate. Then, they were incubated at 37 °C in 5 % CO2 for 16 h. After 24 hours, the transfection efficiency was assessed by GFP expression and visualized with an inverted fluorescent microscope (Leica, Germany). We selected five fields randomly under the fluorescent microscope, and the percentage of GFP-positive cells relative to the total cell number determined the transfection efficiency. Media containing the virus were collected 24, 48, and 72 h after transfection and passed through 0.25 μm pore filters to remove cellular debris. Recombinant lentivirus was concentrated by the addition of polyethylene glycol (PEG) 600 50%, 4 M NaCl, and PBS to the recombinant viruses inside polypropylene bottles, which were stored at 4°C for 1.5 hours. Then, the tubes were centrifuged at 15000 rpm for 15 minutes at 4°C. To determine the titer of the recombinant lentivirus, the number of GFP-positive cells was counted using flow cytometry according to the equation "TU (Transduction Units/ml) = [F × C/V] × D", in which F is the frequency of GFP-positive cells, C is the total number of cells in the well at the time of transduction, V is the volume of inoculum in mL, and D is the lentivirus dilution factor. Fresh titrated recombinant viruses at volumes of 1000, 500, 100, 50, 20, and 0 μl were used for transducing hWJMSCs and 4T1 cells. Transduction and Viability Assay. Second-passage hWJMSCs and 4T1 cells were cultured at a low confluency of 30-40% in a 6-well dish and incubated at 37 °C, 5 % CO2 overnight. Then, they were transduced with rLV-GFP, rLV-EPO, and rLV-TK to generate rhWJMSCs, rhWJMSCs-EPO, and r4T1-TK, respectively. Cell transduction was evaluated using a fluorescent microscope 72 hours after the transduction and was compared with non-transduced MSCs and 4T1 cells. Puromycin (1.5 μg/ml) selection started 72 hours after the transduction at passage 3 for the next 5 days. Then, we analyzed the cells morphologically by phase-contrast microscopy to confirm that transduction with lentiviral vectors had not changed the morphology of the WJMSCs.
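As a quick illustration of the titration equation given above, the minimal Python sketch below evaluates TU/ml from placeholder flow-cytometry readings; the function name and all numerical inputs are illustrative assumptions, not data or code from the study.

# A minimal sketch of the titer equation above: TU/ml = [F x C / V] x D.
# All numeric inputs below are illustrative placeholders, not values from this study.
def lentiviral_titer(gfp_positive_fraction, cells_at_transduction, inoculum_volume_ml, dilution_factor):
    """Return the functional lentiviral titer in transduction units per ml (TU/ml)."""
    return (gfp_positive_fraction * cells_at_transduction / inoculum_volume_ml) * dilution_factor

# Example: 20% GFP-positive cells, 1e5 cells seeded, 0.05 ml inoculum of a 1:10 virus dilution.
print(f"{lentiviral_titer(0.20, 1e5, 0.05, 10):.2e} TU/ml")  # -> 4.00e+06 TU/ml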
For MTT assay, we cultured 5×10 3 cells from transduced and normal hWJMCSs and 4T1 cells in 96-wells plates. After 1 day, we added MTT reagents and incubated them for 4 hours. Also, to measure r4T1-TK cells' sensitivity to GCV, we add 20 μg/ml GCV to the r4T1-TK cells culture medium. Then, we added the dimethyl sulfoxide (DMSO) to terminate the reaction and the plate was read at 570 nm wavelength using BioTek Instruments (Vermont, United States) microplate reader. 2.6. Western Blot Analysis. EPO and TK gene expression was con rmed after the transduction by western blotting assay. rhWJMSCs-EPO and r4T1-TK supernatants were collected 72 hours after the transduction. Evaluating the protein concentrations was examined using a BCA Protein Assay Kit (Thermo Fisher, USA). Equivalent amounts of proteins (30 µg/lane) were loaded onto 12% sodium dodecyl sulfatepolyacrylamide gel electrophoresis (SDS-PAGE) and then transferred onto nitrocellulose membranes (Bio-Rad, USA). The membranes were blocked using 5% non-fat milk and immunoblotting was performed using antibodies against EPO and TK (Santa Cruz, USA). Proteins of interest were detected using HRPconjugated sheep anti-mouse IgG antibody (Abcam, ab6785). Finally, the protein bands were visualized using chemiluminescence (ECL) reagent, and the integrated optical density (IOD) of each protein band was measured. The internal standard β-actin adjusted IOD values. 2.7. CRA Mice Model. The mice were obtained from the laboratory animal center of Pasteur Institute of Iran. Male and female BALB/c mice (6-to 8 weeks old, n=60; weight, 18-20 g; n=10 mice/group) were housed and treated in a pathogen-free environment with the access to autoclaved food and water ad libitum according to national rules on animal experiments. Also, the mice were maintained on an iron-su cient diet for 2 weeks before the injection of tumor cells or saline, which continued until the end of the study[26]. 5 ×10 5 recombinant 4T1-TK cells diluted in 100 μl PBS were injected into the right fourth mammary gland of mice using a 25-gauge needle [27]. The tumor was palpably detected after one week of injection in all 60 mice; thereafter, twice a week routinely, the tumor volume was measured until the end of the study. Because r4T1-TK tumors were metastatic within 2-3 weeks after injection, to evaluate the consequences of tumor progression on erythropoiesis compare to the control group, peripheral blood samples were obtained from the tail vein at week 3 and were analyzed using a Sysmex XT-2000i automated hematology analyzer (Sysmex Corp., Hyogo, Japan) for the measurements of red blood cell (RBC) count, reticulocyte numbers, Hct, and Hb concentration to con rm anemia in mice. Those r4T1-TK-bearing BALB/c mice quali ed for developing the anemia were randomly divided into 6 groups (n= 10 mice/group). In this study, taking into account our published ndings in cancer-free and cancerous mice, three groups of animals were treated intraperitoneal (IP) injection with Ganciclovir (GCV, 100 mL) at 75 mg/kg twice daily for 10 continuous days 2.8. Implantation of rhWJMSCs-EPO. We evaluated our anemia treatment protocol in six groups of animals, all of which had anemia (3 cancer-free groups and 3 cancerous groups) according to the following instructions: A: cancer-free mice that received a moderate dose of rhWJMSCs-EPO to evaluate the effects of treatment during cancer suppression. B: control cancer-free mice that received control rhWJMSCs. C: control cancer-free mice that received PBS. 
D: cancerous mice that received rhWJMSCs-EPO to evaluate treatment effects during the active phase of cancer. E: cancerous control mice which received rhWJMSCs. F: cancerous control mice which received PBS. Based on recent studies, 10×10^6 MSCs expressing EPO is considered a high treatment dose, which can develop polycythemia, and 4.5×10^6 MSCs expressing EPO, as a low treatment dose, has shown no correction of anemia [29]. So, to avoid polycythemia and also to achieve a better therapeutic response, we used a moderate dose of rhWJMSCs-EPO (~7×10^6) implanted into the mice's skeletal muscle with a 29-G insulin syringe [29]. For laboratory measurements, whole peripheral blood samples were collected from the tail vein once per week and were immediately analyzed for Hct and Hb plasma levels. Moreover, the changes in plasma EPO levels in all CRA mice were detected by ELISA (Quantikine ELISA kit, R&D Systems, Minneapolis, MN, USA) according to the manufacturer's protocol. 2.9. Statistical Analysis. All data are presented as mean ± standard error of the mean (S.E.M.). The statistical analysis of the data was performed using Student's t-test and analyzed with Prism software (GraphPad, San Diego, CA). P-values < 0.05 were considered statistically significant. Results 3.1. Lentiviral Vectors Construction. Vectors containing EPO and HSV-TK were constructed using the DNA assembling method. Human EPO and HSV-TK cDNA were cloned in the pCDH-513B-1 lentiviral vector (Figure 1(a), (b)). Both genes were confirmed following enzymatic digestion and sequencing. The digestion of pCDH-EPO using XbaI and ApeI produced two bands of 1900 and 7800 base pairs (bp) (Figure 1(c)), and digestion of the HSV-TK gene with SpeI and EcoRI showed two bands of 1100 and 8200 bp (Figure 1(d)). Sequencing was done for final confirmation. In this construct, EPO and HSV-TK mRNA were transcribed from the CMV promoter, and cGFP and puromycin mRNA were transcribed from the EF1a promoter. 3.2. hWJMSCs Isolation and Characterization. The hWJMSCs derived from human UCs were isolated and cultured based on cell migration and surface attachment. In the third passage, the identity and properties of the isolated hWJMSCs were verified by immunophenotyping of cell-surface antigens. According to flow cytometry analysis, hWJMSCs were highly positive for the CD105 and CD90 surface markers and negative for the hematopoietic markers CD45 and CD34 (Figure 2(a)). The fibroblast-like morphology of the cells was confirmed under the inverted microscope. The hWJMSCs had a typical spindle shape with a capacity to differentiate into osteogenic, adipogenic, and chondrogenic lineages. The accumulation of lipid vacuoles was evaluated by Oil Red staining, Alizarin Red revealed calcium deposition, and the formation of PGs was confirmed via histological staining using Safranin-O (Figure 2(b)). 3.3. Lentivirus Production, In-vitro Transduction and Viability Assay. The GFP reporter gene in the lentiviral vector was an index for transfection and transduction efficiency. Transfer and control vectors were cotransfected into early-passage HEK293T cells (passages under 15) with the helper packaging vectors based on the CaPO4 protocol. The transfection efficiency, determined from the GFP-positive and -negative cells counted under a fluorescent microscope, was higher than 90% (Figure 3(a)).
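The statistical treatment described in subsection 2.9 above (mean ± S.E.M. and Student's t-test, significance at P < 0.05, run in Prism) can be reproduced outside Prism; the short Python sketch below uses SciPy on invented hemoglobin values purely to illustrate the computation and is not the authors' analysis script.

# A minimal sketch of the reported analysis: mean ± SEM and a two-sample Student's t-test.
# The hemoglobin values are invented placeholders, not measurements from the study.
import numpy as np
from scipy import stats

treated = np.array([16.8, 17.5, 17.0, 16.9, 17.4])  # hypothetical Hb (g/dl), treatment group
control = np.array([10.2, 9.8, 10.5, 9.9, 10.1])    # hypothetical Hb (g/dl), control group

for name, values in (("treated", treated), ("control", control)):
    print(f"{name}: {values.mean():.2f} +/- {stats.sem(values):.2f} g/dl (mean +/- SEM)")

t_stat, p_value = stats.ttest_ind(treated, control)  # Student's t-test (equal variances by default)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}, significant at 0.05: {p_value < 0.05}")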
Recombinant virus titrations were done as we described in materials and methods, and fresh rLV-GFP, rLV-EPO, and rLV-TK particle titration were approximately 2×10 6 , 1.8×10 6 , and 1.5×10 6 particles, respectively. We transduced 4T1 cells with rLV-TK and hWJMSCs with rLV-GFP and rLV-EPO and then cultured in a medium containing puromycin to separate them from non-transduced cells. The uorescent microscope visualized the expression rate of GFP. The transduction of hWJMSCs and 4T1 cells with concentrated and fresh recombinant viruses does not show any signi cant difference and we observed the broblast-like morphology by microscopy ( Figure 3(b)). Based on GFP positive and negative cells counted under uorescent microscope, transfection rate was higher than 90% and transduction rate was around 30% -40% (Figure 3(c)). MTT assay showed that transduction and genome integration of transfer lentiviral vectors do not signi cantly affect the viability of both transduced hWJMSCs and 4T1 compared with non-transduced cells (Figure 3(d)). As TK activates by GCV, we tested the sensitivity of r4T1-TK cells to GCV by adding 20 μg/ml GCV to the r4T1-TK cells culture medium. Results indicated that r4T1-TK cells' survival decreased with the addition of GCV drug (Figure 3(d)). Western blot results con rmed gene expression data at the protein level. Western blot analysis showed similar expression levels for β-actin protein in control and transduced hWJMSCs and 4T1 cells, compared to EPO and TK protein which was only overexpressed in transduced hWJMSCs and 4T1 cells, respectively (Figure 3(e)). These results showed that both proteins (EPO and TK) were transcribed and translated correctly. CRA Mice Model. Generating mice model of CRA, the recombinant r4T1-TK was injected into mice to develop breast cancer and anemia as a result of cancer development. At 14 days after injection, all tumor mice appeared in distress, as evidenced by lethargy and poor feeding secondary to tumor load. At rst, the consequence of tumor burden on erythropoiesis was evaluated via analyzing RBC counts, Hb concentrations, Hct, and reticulocyte counts in the peripheral blood samples of r4T1-TK tumor-bearing mice at week 3. r4T1-TK-bearing mice showed an anemic feature with lower RBC count, Hb, and Hct levels, as well as increased reticulocytosis, compared to control mice (Figure 4(a)). After con rming the presence of anemia in all tumor mice, we treated three groups with GCV drug, whereas the other three groups received PBS, as mentioned in Material and Methods. As expected, we observed a tumor suppression in mice that received a 75 mg/kg dose of GCV drug than the PBS groups in which tumor volume consistently increased (Figure 4(b)). Effects of Treatment. Our experimental procedures to evaluate the time course of the biological effect of EPO-secreting rhWJMSCs on the anemia of cancerous and cancer-free mice with experimental breast cancer is described in Materials and Methods which illustrated in Figure 4. Among cancer-free groups (A, B, and C), plasma EPO level in group A which received rhWJMSCs-EPO, signi cantly raised and reached number 105.4± 23 mU/ml at week 4, compared to its two control groups B and C ~ 15 ± 2.8 mU/ml which received rhWJMSCs and PBS, respectively ( Figure 5(a)). 
A similar pattern of change was detected in the cancerous groups (D, E, and F), in which the plasma EPO level was 103.1 ± 21.6 mU/ml in group D, which received rhWJMSCs-EPO, in comparison to ~17.5 ± 3.2 mU/ml in its two control groups E and F, which received rhWJMSCs and PBS, respectively (Figure 5(a)). Although there was a significant difference in EPO level between each treatment group (A and D) and its control groups, we found only a slight difference in EPO level between the two treatment groups A and D. After 7 days of intramuscularly implanted rhWJMSCs-EPO, an increase in Hb was achieved in groups A and D. However, compared to the control groups that received rhWJMSCs and PBS, they showed little difference in the first week after the treatment (Figure 5(b)). The increase in hemoglobin became more substantial in the two treatment groups by week 4, in which the hemoglobin level in group A rose to 17.2 ± 2.3 g/dl, compared with 15.5 ± 1.8 g/dl in group D. So, although the hemoglobin level rose to its highest value in group D, it did not increase as much as the hemoglobin in group A. Thus, we noticed a considerable difference in response to EPO treatment between the cancer-free and cancerous groups. This different response to treatment was more noticeable after observing a similar pattern of hemoglobin changes among the control groups (B and C vs. E and F), which did not receive rhWJMSCs-EPO, in which the cancer-free groups B and C showed a larger increase in hemoglobin compared with the cancerous groups E and F (Figure 5(b)). The same was present in the rate of Hct increase in response to treatment, which followed a similar pattern of changes. The Hct in group A, as in group D, reached the therapeutic level, more precisely 60.3% ± 2.7 in group A and 51.5% ± 1.4 in group D, by approximately 4 weeks after rhWJMSCs-EPO transplantation, whereas the Hct was not significantly altered in the control group mice which received the control rhWJMSCs and PBS (Figure 5(c)). Although the Hct in the cancer-free control groups (B and C) increased further compared to the cancerous control groups (E and F), none of them reached therapeutic levels, similar to what was seen in the Hb levels of the control groups (Figure 5(c)). In general, we found a significant correlation between the plasma level of EPO and the Hb and Hct concentrations. In both groups A and D, which received ~7×10^6 rhWJMSCs-EPO, the Hb and Hct, which had declined from basal values of 16.1 ± 2.1 to 9.3 ± 0.7 and 51% ± 1.2 to 31.7% ± 0.5, respectively, approximately 3 weeks after r4T1-TK injection, reached the therapeutic level at week 4. As we measured the plasma level of EPO weekly, we found a decrease in its level after week 5, but it stayed at a therapeutic level for >10 weeks in groups A and D. As EPO decreased, the plasma levels of Hb and Hct declined in both treatment groups. However, the Hb and Hct concentrations persisted at a therapeutic level for >10 weeks for both treatment groups A and D. Discussion Our study on CRA treatment has included cell and gene therapy approaches, whereby the EPO gene is transferred to hWJMSCs by lentiviral vectors in-vitro, and the cells are subsequently implanted in-vivo to serve as EPO-releasing vehicles, to establish whether EPO significantly increased Hb and Hct levels. Several studies have reported using gene therapy to deliver the EPO gene using plasmid DNA and viral vectors such as adenoviral and AAV vectors [10,30,31].
By designing plasmid DNA expressing rat EPO gene and direct delivery into skeletal muscle of rat anemia model by subtotal nephrectomy, studies showed a noticeable increase in Hct [32,33]. The administration of adenoviral or AAV vectors into an animal model of anemia to deliver EPO resulted in an increase in plasma level of EPO and erythropoiesis which introduces viral vectors as a highly e cient gene delivery vehicle [34,35]; however, life-threatening polycythemia was reported [10,30]. The erythropoiesis response to therapy was proportional to the dose of plasmid DNA or viral vectors delivered. Moreover, host immune responses to these vectors and their transgene products are associated with potential health risks limiting their entry into the clinical phase [9,36]. One remedy to overcome the safety risks and the limitation of gene therapy approaches is using cells as delivery vehicles for plasma-soluble therapeutic proteins in-vivo like EPO, which allow us to quantify and control the serum level of EPO expressed by transduced cells through adjusting the number of implanted gene-modi ed cells secreting EPO to prevent severe polycythemia and also reduce the risk of systemic virus dissemination [37]. MSCs are promising candidates for gene delivery to treat hematological diseases like anemia, mostly due to their accessibility for genetic modi cation and the simplicity of their culture and expansion in vitro [16,38]. Indeed, some experimental studies were reported using the MSCs as a suitable delivery vehicle for therapeutic proteins in vivo [29,39]. Viral methods were widely used in the production of therapeutic protein by MSCs [40]. The main purpose of our investigation was to apply this biopharmaceutical approach for the EPO delivery in vivo for the treatment of CRA. In this study, we isolated MSCs from the human UCs as a good source of MSCs, because they can be harvested noninvasively in large numbers after birth with no ethical problems compared to MSCs derived from adults, have some advantages such as an improved proliferative capacity, life span, differentiation potential, and immunomodulatory properties which offer the best clinical utility [21,41]. We transduced hWJMSCs with an EPO-encoding lentiviral vector under highly controlled conditions in vitro to avoid any risk of viral dissemination in vivo. Transplantation of a moderate dose of rhWJMSCs-EPO (~7×10 6 ) into the CRA mice model's skeletal muscle resulted in a cell dose-dependent increase of EPO level that reached up to 100 mU/ml in both treatment groups (A and D) after 4 weeks. It remained high until the end of the study (>10 weeks) (Figure 5(a)). Both Hb and Hct increase in response to EPO in both groups A and D; however, the increase in Hb and Hct in cancer-free group A was more signi cant than the cancerous group D (Figure 5(b, c)). Also, cancer-free control groups (B and C), in comparison to cancerous control groups (E and F), which received control treatments (rhWJMSCs and PBS), showed a higher level of Hb and Hct. Whereas all control groups had a low level of EPO. Thus, we concluded that combined cell and gene therapy strategies for correcting CRA could be more effective if the cancer is treated at the same time. It is currently believed that chemoradiotherapy is the key means for treating cancer patients, making CRA worse, and other serious side effects [2,42]. So, developing advanced therapeutic procedures which precisely target cancer cells are in great demand. 
In this study, we engineered recombinant 4T1 cells expressing HSV-TK to inject and develop breast cancer-associated anemia in mice, followed by injecting GCV to clear almost all cancer cells expressing TK in three groups of anemic animals. As a result, showed (Figure 4(b)), three groups that receive GCV displayed a signi cant tumor regression compared to cancerous groups. So, we could evaluate the e cacy of rhWJMSCs-EPO in cancer-free and cancerous groups to correct CRA in which the cancer-free groups had no serious side effects or other organ damage due to cancer treatment by GCV. Although this cancer treatment has no clinical utilization and we just design it in our study to evaluate the effect of rhWJMSCs-EPO on CRA in the condition in which cancer is treated via a precise targeted-therapy method without serious side effects which we see in other methods like chemo-radiotherapy, this hypothesis has important clinical implications, because developing therapeutic methods that only target cancer cells and clear all of them without in uencing other tissues or organs, similar to what we did as an animal study, not only can improve CRA over time, but also can pave the other CRA treatment such as cell and gene therapy that we used in this study. We observed a gradual decrease in plasma concentration of Hb and Hct during ~ 10 weeks in correlation to a decrease of EPO. It could be because, according to some studies, MSCs do not persist in the recipient organism for the prolonged periods [41,43]. According to some studies, the survival time of MSCs transplanted to the skeletal muscle varies from 72 hours to 8 months [44]. We hypothesized a second dose of rhWJMSCs-EPO transplantation could be associated with satisfactory therapeutic results that need further investigation in MSC engineering and therapy. Consequently, we will able to schedule treatment plans in which MSCs transplantation courses will be done with determined doses depending on the disease stage. Therefore, the cell and gene therapy approach used here in our study has its limitations as a long-term approach to CRA therapy. Although cell and gene therapy approach to correct anemia of cancer is in its infancy and has its own limitations, this strategy for the sustained production and delivery of EPO using ex vivo gene therapy to genetically engineer hWJMSCs to produce EPO can eliminate many of the adverse effects and complications of current therapies for CRA (Table 1). Supraphysiologic response leading to polycythemia may develop after the rst transplantation of EPO-secreting hWJMSCs which may require resection of cells or, conversely as we mentioned earlier the modi ed MSCs lose their effectiveness over time and therefore, re-implantation may be required to enhance their clinical usage. as cell vehicle has some limitations such as the non-sustained release of the desired secretory protein due to inactivation of vector sequence following transplantation, and also, depending on the donor's age the expansion capability of normal broblasts may be restricted because they ultimately reach a stage when the cell division cycle slow down leading to cell aging which limits their clinical applications [47]. In contrast, hWJMSCs used in this study are attractive candidates due to their potential expansion ability, an immuno-privileged status, and easy access for collection, which afford us high-e ciency lentiviral engineering cells, culture, and utilization in vivo of selected modi ed cells[48-50]. 
Conclusion Our data confirmed that administration of a moderate dose of rhWJMSCs carrying the EPO cDNA resulted in an elevated circulating level of EPO in the CRA mouse model, which caused a persistent increase in Hb and Hct. So, a combination of hWJMSCs and a lentiviral vector could be suggested as a novel cell and gene therapy approach for CRA. When this anemia treatment protocol is combined with a precise targeted therapy for cancer cells, it will give the best results. Obviously, there are challenges to the clinical use of cell and gene therapies for diseases such as anemia. We will watch closely for the clinical safety and efficacy of these methods for disease treatment, which is needed to keep this field moving forward. The data used and/or analyzed to support the findings of this study are available from the corresponding author on reasonable request. Conflicts of Interest The authors have declared that no competing interest exists. [Figure captions: Isolation, expansion, and characterization of human Wharton's jelly mesenchymal stem cells (hWJMSCs). (a) Flow cytometry analysis of hWJMSCs showed positive markers for CD105 and CD90 and negative markers for CD45 and CD34. Results showed that more than 95% of hWJMSCs are positive for MSC markers and negative for hematopoietic stem cell markers. (b) Cultured hWJMSCs showed fibroblast morphology, stained mineral calcium indicated osteogenic differentiation, oil droplets confirmed adipogenic differentiation, and Safranin-O staining confirmed the formation of PGs. Scale bar = 100 μm. — Cancer-related anemia (CRA) in mice following injection of r4T1-TK and effect of GCV on tumor regression. (a) r4T1-TK-bearing mice showed an anemic feature with lower RBC count, Hb, and Hct levels, and increased reticulocytosis, compared to control mice (*p < 0.05). (b) Three tumor-bearing mouse groups were treated with 10 days of GCV, and the other three groups received saline. Tumor volumes were measured twice a week. Significant tumor regression was observed in the three groups that received GCV. Data are presented as mean ± SD. All tests were done in triplicate, *P < 0.05.]
7,914.2
2020-10-02T00:00:00.000
[ "Medicine", "Biology" ]
The Responsibility of Private Pawnshops in Facilitating Ecologically-Friendly Marine Economies As the largest maritime country in the world, Indonesia has extremely large and diverse natural resources, both in the form of renewable and non-renewable natural resources. The natural wealth must be managed optimally to improve the welfare and prosperity of the Indonesian people. In the operation of marine economy, financial services also have a huge effect on the development of marine industry as a whole. This paper aims to analyze the supporting activities of auxiliary operations of financial services, especially pawnshops, in supporting the development of the ecologically friendly marine economies, particularly for vulnerable small fishermen. The research method used is empirical juridical by conducting library research related to legal principles, legal rules and legal norms related to private pawning. In addition, field research was also conducted to obtain primary data related to the existence of private pawnshops in the development of the fishing industry and its environmentally friendly approaches. The results of the study show that pawning has actually great potential for industrial development, including the fishing industry; but even though there are arrangements to facilitate the supervision of private pawning businesses, in practice there are still not many private pawns registered. Although not many have been registered, from time to time it shows progress. The existence of private pawnshops in Indonesia in the direct development of the fishing industry is still not widely used. Introduction Indonesia is the largest archipelagic country in the world with approximately 17,504 islands in Indonesia, and 16,671 islands have been standardized and registered with the United Nations (UN). The area of Indonesian waters is 6.4 million km 2 which consists of 0.29 million km 2 of territorial sea, 3.11 million km 2 of inland waters and archipelagic waters, and 3.00 million km 2 of the Indonesian Exclusive Economic Zone (EEZ). In addition, Indonesia has an additional water zone area of 0.27 million km 2 , the continental shelf area of 2.8 million km 2 , and a coastline length of 108,000 km [1]. As the largest maritime country in the world, Indonesia has very large and diverse natural resources, both in the form of renewable natural resources (fisheries, coral reefs, seagrass beds, mangrove forests, seaweed, and biotechnology products), non-renewable natural resources (oil and gas), natural gas, tin, iron ore, bauxite, and other minerals), marine energy (such as tides, waves, wind, OTEC (Ocean Thermal Energy Conversion), as well as marine and small island environmental services for marine tourism, sea transportation, and sources of biodiversity and germplasm). The natural wealth is one of the basic capitals that must be managed optimally to improve the welfare and prosperity of the Indonesian people. Fish resources in Indonesia's seas cover 37% of the world's fish species, of which several species have high economic value, such as tuna, shrimp, lobster, reef fish, various types of ornamental fish, shellfish, and seaweed. The sustainable potential of Indonesia's marine fish resources is estimated at 12.54 million tons per year spread over the territorial waters of Indonesia and the waters of the Indonesian Exclusive Economic Zone (ZEEI) [2]. 
As one of the 10 largest fish-producing countries in the world, Indonesia has large marine resources, including marine biodiversity and non-biological diversity [3,4]. This is very potential in the development of the fishing industry. The development of these businesses requires large capital, so the need for financing is also very large. As one of the materials guarantees to guarantee creditors' receivables, pawns have great potential for industrial development. In this regard, business actors in starting a business or developing their business always need financing. This financing can come from self-financing and financing from other parties, both from banks and non-bank institutions [5]. In various activities, especially business activities, there is always an agreement as a legal basis for the parties in the activity [6]. Nonetheless, there are still many illegal private pawn practices in people's lives, and there are still many deviations from the principles of pawning and the principle of guarantees in general. The problem in this research is the existence of private pawnshops in the development of the fishing industry. This study attempts to originally investigate the role of pawnshop financial services in increasing the capability of the environmentally friendly fishing industry sector, considering Indonesia's ecological wealth that needs to be maintained for the prosperity of the people as well as sustainable environmental preservation. Literature Review The need for existence of private pawns in the development of the fishery industry started from the marine resource potential of the fishing industry in Indonesia which is ecologically diverse and rich, resulting in great potential for financing development to meet industrial needs. However, in case of financial service of pawnshops, the implementation there are still not many private pawns registered and not evenly distributed in various regions. Therefore, it is still not widely used in the development of the fishing industry, let alone in supporting the environmentally friendly business. This is because the existence of legal private pawnshops in the community is not widely known and the people in general still use government pawnshops, or even many still borrow through illegal private pawns. As a country that has very wide waters and marine resources, Indonesia has a huge potential for developing the fishing industry [7]. As the largest maritime country in the world, Indonesia has very large and diverse natural resources, both in the form of renewable natural resources (fisheries, coral reefs, seagrass beds, mangrove forests, seaweed, and biotechnology products), non-renewable natural resources (oil and gas), natural gas, tin, iron ore, bauxite, and other minerals), marine energy (such as tides, waves, wind, OTEC (Ocean Thermal Energy Conversion), as well as marine and small island environmental services for marine tourism, sea transportation, and sources of biodiversity and germplasm). The natural wealth is one of the basic capitals that must be managed optimally to improve the welfare and prosperity of the Indonesian people. Fish resources in Indonesia's seas cover 37% of the world's fish species, of which several species have high economic value, such as tuna, shrimp, lobster, reef fish, various types of ornamental fish, shellfish, and seaweed. 
The sustainable potential of Indonesia's marine fish resources is estimated at 12.54 million tons per year spread over the territorial waters of Indonesia and the waters of the Indonesian Exclusive Economic Zone (ZEEI) [2]. However, the current practices are not considered sustainable and many marine ecosystems are ecologically endangered, as there are no regulations to directly link between financial services and fishery sector [8][9][10]. In this context, the huge potential for developing the fishing industry also has the potential to increase the existence of private pawnshops. On the other hand, the lack of private pawnshop companies in various regions can be an opportunity for the development of pawnshops to help finance the development of the fishing industry. At present, private pawning is regulated in the Financial Services Authority Regulation Number 31 /POJK.05/2016 concerning Pawning Business. Since the issuance of OJK regulations, the existence of private pawning has become a legal business. The OJK regulation is intended to facilitate supervision of private pawning, so as to provide legal protection for the parties. The issuance of the Financial Services Authority Regulation has influenced the regulation of collateral law, especially the Pawning system as one of the guarantees for material goods in Indonesia [11]. So far, positive law that regulates pawning as collateral with movable objects is regulated in Articles 1150-1160 of the Civil Code. Article 1150 of Civil Code states that a pawn is a right obtained by a debtor on a movable property, which is handed over to him by a debtor or by another person on his behalf, and which gives the debtor the power to take payment of the goods first over the debtors other; with the exception of the costs of auctioning the goods and the costs that have been incurred to save them after the goods have been pawned, which costs shall take precedence. The emergence of the policy on pawnshops which provides regulation and supervision of private pawnshops is expected to strengthen the existence and provide legality and supervision of private pawnshops in the financing business for both consumptive and productive needs. With the policy, it is hoped that it can provide balanced legal protection for both parties, because it will be easier to supervise. Methods This study aims to investigate the role of pawnshop financial services in increasing the financial capability of the environmentally friendly fishing industry sector. The development of the fisheries industry in Indonesia certainly involves many business actors in various fields, both capture fisheries and aquaculture. These business actors in developing their businesses often get capital from other parties, namely financial institutions. In this case, in general, creditors require special guarantees for the security of their receivables. One of the collaterals that is widely used is a private mortgage. Pawns are widely used in people's lives, among others, by business actors, especially micro, small and medium enterprises. However, the use of legal private pawns in the community is still not much compared to government pawns, including in the development of the fishing industry. Interactions between human societies and fish stocks have played an important part in the history [12]. The juridical basis for this study is by examining the pawning as stated in Article 1150-1160 of Civil Code. Pawn is one of the collateral-based financial service, giving rise to material rights. 
Therefore, the pledge creates a Preferred Right for the creditor holding the lien. Thus, the creditor holding the mortgage has the right to prioritize the settlement of his receivables than other creditors. Article 1133 of the Civil Code states that receivables that take precedence are receivables with privileges, liens and mortgages. Pawnshops are all businesses related to providing loans with collateral for movable goods, deposit services, appraisal services, and/or other services, including those held based on sharia principles (Article 1 point 1 policy for pawnshops). Pawnshops are private pawnshops and government pawnshops regulated and supervised by the Financial Services Authority (Article 1 point 2 on pawnshops). The object of the pledge is a movable object, whether bodily or immovable (receivable). Private pawnshop company is a legal entity that conducts pawn business (Article 1 point 3 of the pawn business regulation). Furthermore, the approach method used in this study is an empirical juridical approach. In this case, a systematic, chronological description and analysis will be carried out, based on scientific principles regarding the existence of private pawnshops in Indonesia with existing legal theories and/or laws and regulations related to legal protection for parties in the implementation of private pawning in Indonesia. The research method used is empirical juridical by conducting library research related to legal principles, legal rules and legal norms relating to private pawning. In addition, field research was also conducted to obtain primary data related to the existence of private pawnshops in the development of the fishing industry and its environmentally friendly approaches. Results So far, only government pawnshops have the legality of pawnshops. This does not mean that there are no private pawnshops. In practice, there are many private pawnshops, but they are not legalized. The Financial Services Authority Regulation for Pawnshops stipulates that private business actors are required to obtain a permit. With this arrangement, the existence of private pawning becomes a legal business. Private pawnshop arrangements aim to increase financial inclusion for the lower middle class and micro, small and medium enterprises. In addition, for the implementation of a pawn business that provides easy access to loans, especially for the lower middle class and micro, small and medium enterprises, it is necessary to have a legal basis for the Financial Services Authority in supervising the pawnshop business in Indonesia, as well as a legal basis to supervise the pawnshop business to create a sound pawn business, to provide legal certainty for pawnshop business actors, and to protect consumers. The rapid development of business in Indonesia makes more and more business actors (including in the fishing industry) and consumers need funds to fulfill both consumptive and productive needs. These funds can come from internal sources or other parties. The existence of institutions that provide capital (loans) is very important for the community, especially small-scale business [13]. Pawn companies are one of the institutions that provide loans that offer easy access to loans so that they become an alternative solution to the community in the midst of the difficulty of obtaining loans from lending institutions such as banks. Pawnshops are a solution for people who need funds easily, quickly, and safely. People can get funds without having to sell their valuables. 
Pawnshops can serve funding needs ranging from tens of thousands to hundreds of millions of rupiah with guarantees for electronic goods, motor vehicles, gold, jewelry, and other types of goods that continue to grow. The development of private pawns registered and licensed by the Financial Services Authority can be seen in Table 1. Table 1 shows an increase in the number of companies and financing carried out by private pawn companies. According to Pratama [14], head of the Supervision Department of the Non-Bank Financial Industry (IKNB), this is supported by Badriyah et al. [3] showing that there were only 24 pawnshop companies out of a total of 585. The increase in the number of pawn companies cannot be separated from the large potential of the pawn business in Indonesia. The characteristics of a fast, easy and simple business make pawn services become one of the main alternatives sought by people who need funds. In addition, the increase in private pawn companies during the Covid-19 pandemic was due to the need for pawn services as an alternative for the lower middle class, as well as micro, small and medium enterprises (MSMEs) to obtain loan funds. In addition, the risk of the pawn business is relatively well managed because the collateral goods are under the control of the pawnshop. Private pawn companies that have permits are still very few compared to private pawnshops that are not registered. Private unregistered pawn companies are very difficult to control. OJK does not have the authority to supervise, so there are still private pawning practices that are not in accordance with the principles of pawnshops and deviate from the principles of guarantee law in general and contract law. The legal relationship between the parties in a pawn is basically an agreement, namely a pawn agreement. This pledge agreement is consensual, obligatory and in free form. The pawn agreement is an accessor agreement, meaning that the pawn agreement is a follow-up (additional) agreement to the main agreement. Thus, the presence or absence of a pawn depends on the presence or absence of the principal agreement. Pawning occurs because of the transfer of power over the pawned goods to the creditor who holds the pledge. The surrender of the pledged object from the hand of the pawnbroker to the holder of the pawn is absolute for the occurrence of the pledge. Thus, there must be a real surrender, namely a direct hand-to-hand submission. If the submission is made in a constitutum possessorium, it does not result in a pledge. This is because there is a condition for inbezitstelling in pawning as stipulated in Article 1132 Paragraph (2) of the Civil Code which states that there is no lien if the pledged item is left in the hands of the debtor or pawnbroker or returns to the power of the debtor or pawnbroker at the will of the creditor pawnbroker. The object that becomes the object of the pledge is a movable object, both tangible and intangible. The subject of the pawn is the pawnbroker and the holder of the pawn. The subject of the pawn agreement is the party giving the pledge and the holder of the pledge. The pawn holder in this case is a company in the form of a legal entity as stated in Article 1 Number 3 of POJK Number 31 / POJK.05/2016. The creditor holding the pledge has the right to take repayment of the pledged goods before other creditors. This is a consequence of the preference rights owned by the holders of material guarantees, including the Pledge Holders. 
In the law of guarantees there are also legal principles of guarantees that can be implied in Articles 1131 and 1132 of the Civil Code. Article 1131 of the Civil Code states that all objects of a debtor, both movable and immovable, both existing and new in the future, become collateral for all personal engagements of the debtor. Article 1132 of the Criminal Code. The Civil Code states that the objects referred to in article 1131 serve as mutual guarantees for creditors and the proceeds from the sale of the goods are divided among creditors in a balanced manner according to the size of their respective receivables, unless there are valid reasons to prioritize one receivable over another. In civil law, it can be concluded that there is the principle of schuld and haftung, the principle of trust, the principle of morality, the principle of creditor parity, the principle of balance and the general principle, namely the existence of equal rights of creditors to the assets of their debtors. Thus, if there is no special guarantee, one of which is a pawn, the creditor is domiciled as a concurrent creditor. Therefore, the existence of a pawn to guarantee creditors' receivables is very necessary. (Figure 1) [15]. The results highlighted that for small-scale fishing, as they are facing difficulties in accessing formal financial services, the existence of pawns helps them so much to operate their activities [16]. Previous research examines the relationship between small-scale fishing with ecological sustainability in the context of marine resources [17][18][19][20][21][22][23][24]. Schuhbauer et al. [25] showed that financial assistance can help the capability of small-scale fishing which may result in sustainable fish stocks. In this context, Jacquet & Pauly [26] argue that funding is a critical priority to create a sustainable small-scale fishing. As one of the implementations of financial assistance to increase the financial capacity of the fishing industry in general, Indonesia has made it easier for this industry to access financial services, such as by using pawnshop to fishery industry. Pawns always develop from time to time, and high market demand makes this business have great interest in running a pawnshop business [27,28]. In various countries in the world, including the Philippines, Malaysia, Russia, Romania and various other countries including Indonesia, pawns are no longer only used by the lower middle class, but have become a necessity for all levels of society, and it is also used in fishery industry to make it more environmentally friendly [6,[29][30][31][32][33]. In addition, the findings also encourage the involvement of the private sector and small and medium scale loan providers to be involved in the fisheries economy and marine industry. The involvement of the private sector in the fishing pawn business plays a role in expanding the economic base, equalizing income distribution and increasing financial inclusion in the marine economic sector. In addition, increasing the capacity and diversity of financial service providers needs to be strengthened by providing loans to environmentally friendly fisheries business actors. This is because the marine sector plays an important role in efforts to reduce carbon emissions and global warming which has an effect on climate change throughout the world. The ocean is one of the main carbon dioxide sinks, and the damage to the ocean caused by the unsustainable fishing industry has worsened the carbon sequestration capacity of the atmosphere. 
Because the fishing industry is closely related to the financial services sector as a lender, the government is encouraged to impose stricter regulations on the financial sector, both banking and non-banking, such as private pawnshops, so that loans are directed to environmentally friendly fisheries. Conclusion This study aims to find facts and analyze the existence of private pawning in the development of the fishing industry and its environmentally friendly approach. In Indonesia, the existence of private pawnshops has experienced rapid development, both in terms of regulation and supervision. In the past, private pawning had no regulation and supervision, so it often caused losses to the parties due to various deviations from the principles of pawning. Since the issuance of the Financial Services Authority Regulation Number 31/POJK.05/2016 concerning the private pawnshop business, the private pawnshop has become a legal business entity. This makes supervision easier, so that private pawning operates as a legal business. Therefore, the development of private pawnshops in various fields, including the development of the fishing industry, has great potential, but it turns out that not many private pawns have been registered and obtained permits. The private pawnshop is now a non-bank financing business which has been legalized since the issuance of the Financial Services Authority Regulation. Private pawning has progressed when viewed from the number of registered private pawns and the value of financing provided to customers. However, the existence of legal private pawns is still not widely known to the public, including its use in the development of the fishing industry. This finding underscores the enormous potential for developing the fishing industry, which can in turn expand the role of private pawnshops. The small number of private pawnshop companies and the rapid increase of marine business investment can be an opportunity for pawnshops to finance environmentally friendly fishing industries and to contribute more actively to reducing global warming and carbon emissions. In particular, this study emphasizes the need for efforts to provide adequate financial guarantees to small and medium-sized businesses in the fisheries sector, and at the same time to involve small and medium-scale fisheries businesses in protecting the marine environment while increasing income from a sustainable economy. As a recommendation, the government should conduct massive socialization of private pawning to the public by collaborating with various parties, namely academics, business actors and community leaders, to provide more legal protection for business actors and customers.
5,114.2
2021-12-01T00:00:00.000
[ "Environmental Science", "Economics", "Business" ]
Fano Resonance in Excitation Spectroscopy and Cooling of an Optically Trapped Single Atom Electromagnetically induced transparency (EIT) can be used to cool an atom in a harmonic potential close to the ground state by addressing several vibrational modes simultaneously. Previous experimental efforts focus on trapped ions and neutral atoms in a standing wave trap. In this work, we demonstrate EIT cooling of an optically trapped single neutral atom, where the trap frequencies are an order of magnitude smaller than in an ion trap and a standing wave trap. We resolve the Fano resonance feature in fluorescence excitation spectra and the corresponding cooling profile in temperature measurements. A final temperature of around 6 $\mu$K is achieved with EIT cooling, a factor of two lower than the previous value obtained using polarization gradient cooling. I. INTRODUCTION Single neutral atoms in optical dipole traps form a potential basis for quantum information processing applications, including quantum simulation [1,2], computation [3,4], and communication [5,6]. Ideally, the atom can be prepared in an arbitrary quantum state and can be made to exchange quantum information coherently with a tightly focused optical mode. A prerequisite for an efficient coupling between a photon and an atom is minimizing the atomic position uncertainty, which requires the atom to be sufficiently cooled [7]. Furthermore, cooling of the atom can extend the coherence time of the qubit state [4,8,9] and allow for the manifestation of quantum mechanical properties of the atomic motion [10,11]. Raman sideband cooling techniques [12][13][14] can be employed to cool atoms to the motional ground state of the trapping potential. However, this method requires an iteration of the cooling process over several laser settings to address individual vibrational modes. Alternatively, cooling by electromagnetically induced transparency (EIT) is a simpler approach that can also help achieve ground state cooling. EIT cooling relies on suppression of diffusion when a three-level atom is transferred to a superposition of the ground states that is decoupled from the excited state (dark state). On probing the excitation spectrum of a Λ system with a strong field (coupler) and a weaker probe, the dark state is revealed via a reduction in fluorescence when the probe and coupler are equally detuned from the excited state. This dip, in combination with the fluorescence peak from the dressed state, results in an asymmetric Fano profile [15]. When the motional spread of the atomic wavepacket in an external conservative potential is taken into account, the dark state becomes sensitive to the atomic position. Particularly, cooling occurs when the dark state is decoupled from the excited state at the carrier frequency but is coupled to the bright (dressed) 
state at the red sideband [16]. For this, the system simply needs to be engineered such that the frequency difference between the dark state and the bright dressed state matches the vibrational mode spacing of the potential (see Fig. 1). The Fano profile was first observed in the fluorescence spectroscopy of a single barium ion [17,18], and a cooling technique exploiting this effect was proposed fifteen years later [19]. Since then, this EIT cooling method has been implemented in platforms such as trapped ions [20][21][22], neutral atoms confined in standing wave traps [23], and quantum gas microscopy setups [24]. In this work, we investigate free-space EIT cooling of a single neutral 87 Rb atom in a mK-deep far-off-resonant optical dipole trap (FORT), where the trap frequencies are typically around tens of kHz, one to two orders of magnitude smaller than in typical standing wave traps and ion traps. A three-level Λ system is realized using the magnetic sublevels in the hyperfine manifolds of the ground and excited states. We first resolve the Fano profile via excitation spectroscopy, and then implement a cooling scheme after altering the configuration and detunings. We also explore the parameter space to identify detunings and intensities that minimize the temperature. II. THEORETICAL OVERVIEW Theoretical descriptions of the Fano spectrum and of cooling by EIT have been extensively reported earlier [15,16,18,19]. Here we summarize the results and extend some of the outcomes to describe our measurements. Consider a Λ system formed by two ground states |g and |g ′ as well as an excited state |e that can decay to both ground states with a total decay rate Γ. A weak (strong) probe (coupling) field of frequency ω p (ω c ) couples |g (|g ′ ) to |e with a Rabi frequency Ω p (Ω c ) and a detuning ∆ p = ω p − ω eg (∆ c = ω c − ω eg ′ ). In the limit of a weak probe driving field (Ω p ≪ Ω c , ∆ c ), the ground state |g remains an eigenstate with the eigenvalue λ g = (∆ c −∆ p ). The other two eigenstates |± are associated with the two light-shifted resonances close to ∆ p = 0 and ∆ p = ∆ c as the probe detuning ∆ p is being varied. Their corresponding eigenvalues are λ + = −δ − iΓ + /2 and λ − = ∆ c + δ − iΓ − /2, respectively, with an associated light shift δ and radiative decays Γ ± [18]. For a large detuning ∆ c ≫ Ω c , Γ, these can be obtained through a perturbative expansion in 1/∆ c . For a larger Ω p , the probe-induced coupling between |g and |e cannot be neglected, and the light shifts and decay rates have been obtained from the steady-state solution for the three-level optical Bloch equation in the vicinity of ∆ p = ∆ c [18]. The narrow resonance associated with λ + is shown to exhibit a Fano-shaped profile [15] and possess a spectral width Γ + ≪ Γ for Ω c , Ω p ≪ ∆ c . The Fano-type profile manifests in the excitation spectrum of the scattering rate |T | 2 [15], which matches the form of a typical Fano profile [25]. 
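As a quick numerical sanity check on the dressed-state picture, the sketch below evaluates the perturbative light shift δ = Ω c 2 /(4∆ c ) quoted later in Sec. IV; the narrow-linewidth estimate Γ + ≈ Γ Ω c 2 /(4∆ c 2 ) and the generic Fano lineshape (q + ε) 2 /(1 + ε 2 ) used here are standard textbook expressions assumed for illustration, not formulas recovered from this paper.

```python
import numpy as np

# Parameters quoted in the paper (all angular frequencies, rad/s)
GAMMA   = 2 * np.pi * 6.0e6     # natural linewidth of the excited state
OMEGA_C = 2 * np.pi * 5.06e6    # coupling Rabi frequency at the cooling optimum
DELTA_C = 2 * np.pi * 94.5e6    # coupling detuning used for cooling

def light_shift(omega_c, delta_c):
    """Perturbative dressed-state shift, delta = Omega_c**2 / (4 * Delta_c)."""
    return omega_c**2 / (4 * delta_c)

def narrow_linewidth(omega_c, delta_c, gamma=GAMMA):
    """Textbook estimate (assumption) of the narrow width, Gamma_+ ~ Gamma * Omega_c**2 / (4 * Delta_c**2)."""
    return gamma * omega_c**2 / (4 * delta_c**2)

def fano_lineshape(delta_p, resonance, width, q):
    """Generic Fano profile (q + eps)**2 / (1 + eps**2) with reduced detuning eps."""
    eps = (delta_p - resonance) / (width / 2)
    return (q + eps)**2 / (1 + eps**2)

delta = light_shift(OMEGA_C, DELTA_C)
print(f"delta/2pi   = {delta / (2 * np.pi) / 1e3:.1f} kHz")   # ~68 kHz, close to the 73 kHz radial trap frequency
print(f"Gamma_+/2pi = {narrow_linewidth(OMEGA_C, DELTA_C) / (2 * np.pi) / 1e3:.1f} kHz")
```

A function such as fano_lineshape could also serve as a fit model (e.g. with scipy.optimize.curve_fit) when extracting linewidths from measured spectra, in the spirit of the fits reported below.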
When including the atomic center-of-mass motion of the atom in the description, the energy change due to recoil from a scattering event should be considered. For an atom confined in a harmonic potential of frequency ω trap , when the position uncertainty is much smaller than the wavelength of light (Lamb-Dicke limit), the coupling between the motional states and internal energy levels is characterized by the Lamb-Dicke parameter η. Here k p and k c are the wave vectors of the probe and coupling beams, φ is the angle between k p − k c and the motional axis, and a 0 = (ℏ/(2mω trap )) 1/2 is the position uncertainty of the particle with mass m in the ground state of the harmonic oscillator [19]. For an atom initially in the dark internal state and the motional eigenstate |n , the momentum imparted by light when | k p − k c | ≠ 0 leads to coupling with the bright state |+ of neighboring motional modes |n ± 1 . By choosing ∆ p = ∆ c > 0 and a suitable Ω c such that δ = ω trap , the scattering spectrum can be tailored such that the transition probability of the |g, n → |+, n − 1 red sideband transition is greater than the probability of the |g, n → |+, n + 1 blue sideband transition. This results in effective cooling. A detailed quantitative analysis of the cooling dynamics using a rate equation description is provided in [16,19]. Figure 2(b) shows a schematic of our experimental setup. We trap a single 87 Rb atom at the focus of a pair of high numerical-aperture (NA=0.75) aspheric lenses in a far-off-resonant dipole trap (FORT). The FORT is formed by a linearly polarized Gaussian laser beam at 851 nm, tightly focused to a waist of w 0 = 1.1 µm. The aspheric lenses not only enable tight spatial confinement of the atom in the FORT, but also allow efficient collection of fluorescence from the atom. Refer to [26] for a complete description of our single atom trap. For driving the Λ system, the coupling and probe beams employed are generated from the same external cavity diode laser. This ensures a fixed phase relationship between the two driving fields. The light from this laser is split into two paths for the coupling and probe beams, with the frequency of light independently controlled by an acousto-optic modulator (AOM) in each path. The two beams are then overlapped in a beam splitter (BS) and co-propagate to the atom in this part of the experiment. The co-propagating configuration minimizes the momentum transfer to the atom (∆ k = k c − k p = 0, and equivalently η = 0) via the two-photon process, thereby decoupling the center-of-mass motion from the dynamics and allowing the Fano profile to be resolved. 
To prevent probe and coupling beams from entering the detection system, the atomic fluorescence is collected in the backward direction using a 90:10 BS. An interference filter (IF) prevents dipole trap radiation from reaching the detectors. Additionally, we employ a polarization filter consisting of a quarter-wave plate (QWP) and a polarizing beam splitter (PBS) to eliminate scattered light. When a single 87 Rb atom is loaded into the FORT, we apply 10 ms of polarization gradient cooling (PGC) to cool the atom to a temperature of 14.7(2) µK, as measured by the "release-recapture" technique [27,28]. Then, a bias magnetic field of 1.44 mT is applied along the FORT laser propagation direction to remove the degeneracy of the Zeeman states. Next, the single atom is illuminated with the pair of strong coupling and weak probe beams for 3 ms. During this interval, the atomic fluorescence is detected using an avalanche photodetector (APD). The measurement is repeated over many runs. Figure 3 shows a series of scattering spectra for increasing probe powers. The detected photoevents shown here also include the APD's dark counts, which contribute to a background of around 300 events per second. Red points are experimental data and blue lines are fits to Eqn. 3. In all measurements, an asymmetrical Fano peak is observed with a linewidth smaller than the natural linewidth (Γ = 2π × 6 MHz). The Fano linewidths extracted from the fits increase linearly with the probe power (Γ + /2π = 350 (30), 410 (30), 700 (40), and 1000 (50) kHz for increasing probe saturation parameters 2Ω p 2 /Γ 2 ). Furthermore, the energy of the dark state, indicated by the dip in the scattering spectra, should ideally remain fixed at ∆ p = ∆ c = 2π × −80 MHz, independent of the Rabi frequencies Ω c and Ω p of the driving fields. However, we observe that the minimum of the scattering spectra shifts to a larger detuning for increasing Ω p . It seems likely that this is because the probe field Ω p also drives the transition between the state |g ′ = |F = 2, m F = 0 , and the excited state |F ′ = 3, m F ′ = 1 , which is not taken into account in the three-level model. This coupling introduces an additional light shift on the |g ′ state, leading to a shift in the scattering spectrum for increasing probe field strength. IV. COOLING OF ATOMIC MOTION Having developed a better understanding of the absorption profile, we now turn to the cooling of atomic motion. In order to utilize the sensitivity of the internal dark state to the spatial gradient of the electric fields, we require a configuration in which the momentum transferred by light to the atom is non-zero (∆ k = k c − k p ≠ 0). For this, the direction of the probe beam is altered such that it is sent orthogonal to the coupling beam in a top-down direction, polarized parallel to the bias magnetic field to excite π transitions (see Fig. 4). The Λ configuration is now realized with a σ − polarized coupling light connecting |g ′ ≡ |F = 2, m F = −1 to the excited state. Our FORT traps the atom in a 3-D harmonic oscillator with radial (ω x/y ) and axial (ω z ) trapping frequencies (ω x/y , ω z ) = 2π × ( 73(2), 10(1) ) kHz, deduced from a parametric excitation measurement [29]. Accordingly, the associated Lamb-Dicke parameters (η x , η z ), which quantify the motional coupling, are estimated to be ( 0.23, 0.61 ) for our EIT cooling beam geometry. 
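The Lamb-Dicke values just quoted can be checked against the a 0 expression from Sec. II. The sketch below assumes 780 nm light for both beams, the usual product form η = |k p − k c | a 0 cos φ, and a 45° projection of k p − k c onto each motional axis for the orthogonal-beam geometry; the wavelength and projection angle are assumptions of the sketch (chosen for consistency with the quoted values), not parameters stated explicitly in the text.

```python
import numpy as np

HBAR = 1.054571817e-34           # J s
M_RB87 = 86.909 * 1.6605390e-27  # kg, mass of 87Rb
LAMBDA = 780e-9                  # m, D2-line wavelength (assumed for both beams)

def ground_state_size(omega_trap):
    """a0 = sqrt(hbar / (2 m omega_trap)), position spread of the oscillator ground state."""
    return np.sqrt(HBAR / (2 * M_RB87 * omega_trap))

def lamb_dicke(omega_trap, angle_to_axis):
    """eta = |k_p - k_c| * a0 * cos(phi) for two orthogonal beams of equal wavelength."""
    dk = np.sqrt(2) * 2 * np.pi / LAMBDA        # |k_p - k_c| for orthogonal beams
    return dk * ground_state_size(omega_trap) * np.cos(angle_to_axis)

omega_radial = 2 * np.pi * 73e3   # rad/s, radial trap frequency from the paper
omega_axial  = 2 * np.pi * 10e3   # rad/s, axial trap frequency from the paper

# Assuming k_p - k_c lies at 45 degrees to both the radial and the axial motional axis
print("eta_x ~", round(lamb_dicke(omega_radial, np.pi / 4), 2))   # ~0.23
print("eta_z ~", round(lamb_dicke(omega_axial,  np.pi / 4), 2))   # ~0.61
```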
Similar to the experimental sequence described in the previous part, we start with 10 ms of PGC to cool the atom upon successful loading, followed by a bias magnetic field of 1.44 mT along the FORT laser propagation direction to remove the degeneracy of the Zeeman states.We then apply EIT cooling on the Λ system by switching on the coupling beam and probe beam for 20 ms, a duration chosen to be sufficiently long to ensure that the system reaches a steady state.During this cooling process, a weak repumper beam resonant to the D1 line at 795 nm between 5 2 S 1/2 F = 1 and 5 2 P 1/2 F ′ = 2 is also switched on to transfer the atom back into the F = 2 hyperfine ground state if it spontaneously decays into the F = 1. Following that, we employ a "release and recapture" method [27,28] to quantify the temperature of the single atoms.During this process, the EIT cooling beams are switched off, and the atom is released from the trap for an interval τ r by switching off the FORT beam.Subsequently, the FORT is switched on to recapture the atom and we observe atomic fluorescence by switching on the MOT's cooling and repumping beams to check the presence of the single atom.We repeat each experiment around two hundred times to obtain an estimate of the recapture probability.We then infer the atomic temperature by comparing the experimentally obtained recapture probability at τ r to Monte Carlo simulations of recapture probabilities for single atoms at various temperatures [27]. In the first part of the thermometric experiment, we investigate the capability of the two-photon process to either cool down or heat up the single atoms.We apply EIT cooling by varying ∆ p and ∆ c over a range of ±2π × 1 MHz while fixing Ω c and Ω p to 2π × 5.2 MHz and 2π×2.0MHz, respectively.We choose Ω c = 2π×5.2MHz because this parameter is expected to give a Fano resonance shift coinciding with the trap frequency (δ = ω x/y following Eqn. 1) that leads to optimal cooling.Here, we fix the release interval to τ r = 30 µs, empirically determined to yield the largest signal contrast for recapturing measurements from which the temperature can be inferred. The resulting atomic temperature is shown in Fig. 5(a).Cooling and heating effects close to the dressed states for the two-photon process are significantly visible.We observe an effective cooling in the anti-diagonal stripe where ∆ p = ∆ c , in agreement with the theoretical prediction.Heating occurs most dominantly around ∆ p = ∆ c + 2π × 250 kHz, where the blue sideband transitions have a larger probability. In the following parts, we maintain ∆ c to be fixed at 2π × 94.5 MHz.To obtain a more accurate estimation of the atomic temperature, we now deduce a temperature value based on a series of recapturing probabilities for 12 different release intervals, ranging from 1 to 80 µs.We vary the probe detuning ∆ p around ∆ c , as shown in Fig. 5(b).We observe the typical asymmetric Fano profile also in the temperature of the atoms, with the lowest temperature of 5.7(1) µK measured at ∆ p = ∆ c . 
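The "release and recapture" thermometry described above compares measured recapture probabilities against Monte Carlo simulations. A minimal sketch of such a simulation is given below; the Gaussian-beam trap model, the ~1 mK trap depth, the gravity axis, and the recapture criterion (negative total energy once the trap is restored) are assumptions of the sketch rather than details taken from the paper.

```python
import numpy as np

KB = 1.380649e-23
M  = 86.909 * 1.6605390e-27        # kg, 87Rb

# Assumed trap model: Gaussian-beam dipole trap, depth ~1 mK, waist 1.1 um, 851 nm FORT beam
U0 = KB * 1e-3
W0 = 1.1e-6
ZR = np.pi * W0**2 / 851e-9        # Rayleigh range

def trap_potential(x, y, z):
    w2 = W0**2 * (1 + (z / ZR)**2)
    return -U0 * (W0**2 / w2) * np.exp(-2 * (x**2 + y**2) / w2)

def recapture_probability(temperature, t_release, n_samples=5000, rng=np.random.default_rng(0)):
    """Estimate P(recapture) after a free flight of t_release for a thermal atom."""
    sigma_v = np.sqrt(KB * temperature / M)                       # per-axis thermal velocity spread
    omega = np.array([2*np.pi*73e3, 2*np.pi*73e3, 2*np.pi*10e3])  # trap frequencies from the paper
    sigma_r = sigma_v / omega                                     # approx. harmonic position spread
    r = rng.normal(0.0, sigma_r, size=(n_samples, 3))
    v = rng.normal(0.0, sigma_v, size=(n_samples, 3))
    g = np.array([0.0, -9.81, 0.0])                               # gravity (assumed direction)
    r_new = r + v * t_release + 0.5 * g * t_release**2
    v_new = v + g * t_release
    # Atom counts as recaptured if its total energy in the restored trap is negative
    energy = 0.5 * M * np.sum(v_new**2, axis=1) + trap_potential(*r_new.T)
    return float(np.mean(energy < 0.0))

for T in (6e-6, 15e-6, 30e-6):
    print(f"T = {T*1e6:4.0f} uK  ->  P(recapture, 30 us) ~ {recapture_probability(T, 30e-6):.2f}")
```

In practice the temperature is then inferred by comparing a whole curve of recapture probabilities over several release intervals against such simulated curves, as done in the measurements described above.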
We expect optimal cooling to be achieved when the dressed state energy shift δ caused by the coupling beam is equal to the trap frequency, δ = ω x/y , as it maximizes the absorption probability on the red sideband transition [16]. To confirm this behavior, we record the atomic temperature using the same "release and recapture" scheme for different coupling beam powers, keeping ∆ c = ∆ p = 2π × 94.5 MHz and Ω p = 2π × 2.0 MHz fixed. The results are shown as a function of the saturation parameter s = 2Ω 2 c /Γ 2 in Fig. 6(a). Cooling is observed for s between 0.5 and 3, with the lowest temperature obtained at an optimal cooling parameter of s = 1.42(3) (or Ω c = 2π × 5.06(5) MHz). This corresponds to a dressed state energy shift of δ = Ω 2 c /(4∆ c ) ≈ 2π × 68(1) kHz, as introduced in Eqn. 2, which is comparable with the radial trap frequency ω x/y in our system. We then extract the cooling rate by measuring the atomic temperature after a variable time of EIT cooling, as shown in Fig. 6(b). Here, we apply the optimal cooling parameters (∆ c = ∆ p = 2π × 94.5 MHz, Ω c = 2π × 5.06 MHz and Ω p = 2π × 2.0 MHz) to the pair of coupling and probe beams. From an exponential fit to the experimental data, we deduce a 1/e cooling time constant of 2.1(3) ms, and a steady-state temperature of around 5.9(2) µK. V. DISCUSSION AND CONCLUSION By applying EIT cooling optimized for the radial directions, we have successfully cooled the atom to a temperature of 5.7(1) µK. This is 2.5 times lower than the temperature of 14.7 µK typically achieved with conventional PGC. We note that our temperature measurement predominantly reveals the temperature along the radial direction due to the limitation of the "release and recapture" technique. Particularly, a Gaussian optical dipole trap typically has a much smaller spatial confinement in the radial direction than in the axial direction. Consequently, it is much easier for the atom to escape the trap in the radial direction during the release interval. From the measured atomic temperature, we infer a mean phonon number of n x/y = 1.18 (5). This is higher than the theoretical value of 0.002 expected for our parameters from the rate equation described in [19]. Additionally, we also observe that the measured cooling time constant is about 10 times longer than the expected value of 0.2 ms estimated from the same theoretical work. These discrepancies are possibly due to unaccounted heating effects originating from scattering of the strong coupling beam which is red-detuned from the |F = 2, m F = −2 ↔ |F ′ = 3, m F ′ = −3 cycling transition. In the absence of the EIT cooling, this scattering process alone would impose a lower limit on the energy reached, on the order of ∼ hΓ, which is ∼ 100 µK in temperature. We expect a steady state between these two processes settling at a final temperature approximately two orders of magnitude lower. In addition, the cooling time would also be limited by the high probability (50 %) of an atom in the state |e of 5 2 P 3/2 , F ′ = 2 decaying into the 5 2 S 1/2 F =1 hyperfine level, which is decoupled from the pair of EIT cooling beams. Despite the use of a repump light to transfer the atom back to the F = 2 state, this process introduces a delay as well as heating. In comparison, EIT cooling is 1.9 times slower than the conventional PGC, which has a typical 1/e cooling time constant of 1.1(1) ms [28]. 
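The quoted mean phonon number follows from the thermal occupation of a harmonic mode, n̄ = 1/(exp(ℏω/k B T) − 1); the paper does not spell out its conversion, so this standard relation is assumed here. It reproduces n̄ ≈ 1.18 at 5.7 µK for the 73 kHz radial mode, and the same short sketch also checks the quoted saturation parameter.

```python
import numpy as np

HBAR, KB = 1.054571817e-34, 1.380649e-23

def mean_phonon_number(temperature, trap_frequency):
    """Thermal occupation n = 1 / (exp(hbar*omega / (kB*T)) - 1) of a harmonic mode."""
    x = HBAR * trap_frequency / (KB * temperature)
    return 1.0 / np.expm1(x)

# Radial mode at 73 kHz, temperatures from the paper
for T in (14.7e-6, 5.7e-6):
    print(f"T = {T*1e6:4.1f} uK  ->  n_bar = {mean_phonon_number(T, 2 * np.pi * 73e3):.2f}")

# Saturation parameter quoted in the text: s = 2 * Omega_c**2 / Gamma**2
print("s =", round(2 * (5.06 / 6.0)**2, 2))   # ~1.42 for Omega_c = 2pi x 5.06 MHz, Gamma = 2pi x 6 MHz
```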
Although prior work with EIT cooling has demonstrated approximate ground state occupation, the temperature of 5.7(1) µK achieved here is comparable to the 7 µK obtained previously in the standing wave optical trap of [23] and an order of magnitude lower than the temperatures achieved in an ion trap [20]. Our demonstration could be extended further to lower temperatures by adding a second stage of EIT cooling that targets cooling along the axial direction with δ matched to the axial trap frequency spacing ω z . Exploring strategies to mitigate heating caused by scattering in a multi-level atom could improve the cooling even further. In conclusion, we have demonstrated electromagnetically induced transparency (EIT) cooling for a single neutral atom confined in a shallow optical dipole trap, and have resolved the signature Fano profiles in the excitation spectrum due to a large solid angle for fluorescence collection. A final temperature of less than 6 µK has been reached with EIT cooling, a factor of two below the value obtained by polarization gradient cooling in the same system. Technologically, the use of magnetic sublevels to realize the Λ scheme is convenient as it requires only a small frequency difference (on the order of MHz) between the pump and coupling fields, which allows simple frequency shifting from the same laser to provide both components. This cooling scheme can therefore diversify the spectrum of techniques for manipulation of atomic motion of ultracold atoms in optical tweezer arrays. FIG. 1. Left: EIT cooling transition in a three-level Λ system. A strong coupling beam forms new eigenstates |+ and |− from the bare atomic states |g ′ and |e . Here n denotes the vibrational quantum number for atomic motion in a harmonic trap with a frequency of ωtrap. By choosing a suitable intensity for the coupling beam, the scattering spectrum can be engineered such that the transition |g, n → |+, n − 1 is enhanced to achieve cooling. Right: Spectral profile of the dressed states. Scattering of a weak probe beam that couples an atom prepared in ground state |g to the dressed states reveals two peaks corresponding to each of the dressed states and an asymmetric Fano profile due to the dark state. FIG. 2. (a) Energy levels and transitions in 87 Rb used for observing the Fano scattering profile. (b) Experimental configuration for Fano spectroscopy. The backscattered atomic fluorescence is collected by a high numerical aperture lens and coupled to a single-mode fiber connected to an avalanche photodetector. BS: beamsplitter, QWP: quarter-wave plate, PBS: polarizing beamsplitter, IF: interference filter, APD: avalanche photodetector, UHV: ultra-high vacuum, B : magnetic field. FIG. 3. Observation of Fano scattering profiles. Red dots: Single photon scattering detected in APDs from the two-photon process for ∆c/2π = -80 MHz and Ωc = 1.4Γ, projected into the probe polarization. Blue curve: Fits to Fano profiles following Eqn. 3. The probe beam power increases from subplot (a) to (d) as indicated by the Rabi frequency values. All plots show a clear suppression in scattering around ∆p/2π = -80 MHz where the atom is optically pumped to the dark state. Error bars represent one standard deviation due to propagated Poissonian counting statistics. FIG. 4. (a) Experimental configuration for the off-resonant EIT cooling process. The probe beam propagates orthogonally to the optical axis to allow for motional coupling. (b) Energy levels and transitions in 87 Rb used in the cooling experiment. 
FIG. 5. (a) Atomic temperature for various probe and coupling field detunings, inferred from release and recapture measurements after 20 ms of EIT cooling. The anti-diagonal blue band indicates the dark state resonance which has the highest cooling efficiency. (b) EIT cooling profile in atomic temperature as a function of probe detuning ∆p for a fixed coupling detuning (here ∆c = 2π × 94.5 MHz as indicated by the boxed region in (a)) also shows an asymmetric Fano feature. FIG. 6. (a) Atomic temperature at ∆p = ∆c = 2π × 94.5 MHz for varying Ωc. We observe an effective cooling for s = 2Ω 2 c /Γ 2 between 0.5 and 3, with the optimal cooling around s = 1.42(3) (cooling duration fixed to 20 ms). The dotted line indicates the initial atomic temperature after PGC of 14.7 µK. Error bars represent standard error of binomial statistics accumulated from around 200 repeated runs. (b) Atomic temperature measured after different cooling durations. A cooling time of 2.1(3) ms and final temperature of 5.9(2) µK are extracted from the exponential fit.
5,608.4
2023-12-11T00:00:00.000
[ "Physics" ]
Assessment of Similarity Measures for Accurate Deformable Image Registration Purpose: Deformable image registration is widely used in radiation therapy applications. There are several different algorithms for deformable image registration. The purpose of this study was to evaluate the optimal similarity measures needed to obtain accurate deformable image registration by using a phantom. Methods: To evaluate the optimal similarity measures for the deformable image registration, we compared several similarity measures, including the normalized correlation coefficient, the mutual information, the Dice similarity coefficient, and the Tanimoto coefficient. In this study, the mutual information was normalized to have a value of 1 when there is complete correspondence between the images in order to compare it with other similarity measures. First, a reference image was acquired with the phantom located in the center of the field of view of a computed tomography (CT) scanner. The phantom consisted of two sections: a Teflon sphere and four samples of various electron density values. Then, to acquire the moving images, the phantom was scanned for various displacement values as it was moved to the left (range: 1.0-30.0 mm). Second, images for various Teflon sphere diameters (range: 0-25.4 mm) were acquired with the CT scanner. The image similarity for each condition was compared with the reference image by using several similarity measures. Results: In the moved phantom study, although the normalized correlation coefficient, Dice similarity coefficient, and Tanimoto coefficient showed the same tendency of sensitivity for measuring image similarity, the mutual information showed significant sensitivity for both of the two distinct sections of the phantom. In the study in which the phantom sphere diameter was varied, the mutual information also showed the best performance among the tested similarity measures. Conclusions: Mutual information appears to have an advantage over other similarity measures for accurate deformable image registration. Introduction Image-Guided Radiation Therapy (IGRT) is the most advanced technology for localizing targets. IGRT is the use of frequent imaging during a course of radiation therapy to improve the precision and accuracy of the delivery of treatment. The use of daily images in the radiotherapy process leads to Adaptive Radiation Therapy (ART), in which the treatment is evaluated periodically, and the plan is modified in an adaptive manner for the remaining course of radiation therapy. The images obtained from Cone-Beam Computed Tomography (CBCT) at the time of treatment delivery also provide information on the changes that can occur in the patient anatomy during a course of radiation therapy, including therapeutic response of the tumor or normal tissue, internal motion, and weight loss. Recently, Deformable Image Registration (DIR) has been a very important component in ART [1,2]. For instance, organ contours have been transferred from the planning CT images to the daily CBCT images by using DIR techniques such as auto-segmentation [3,4]. 
DIR is also used for four-Dimensional (4D) treatment optimization [5][6][7][8][9] and dose accumulation [3,10]. Several different DIR algorithms have been proposed, including B-spline [11], thin-plate spline [12], Thirion's demons [13,14], and viscous fluid [15,16]. In the B-spline method, the transformation of a point is computed from control points defined on a grid between the two images. The thin-plate spline method is a physically motivated interpolation scheme for arbitrarily spaced tabulated data. Thirion's demons method uses gradient information from a static reference image to determine the 'demons' force required to deform the 'moving' image [14]. In viscous fluid registration, the deforming image is considered to be embedded in a viscous fluid, the motion of which is determined by the Navier-Stokes equations for conservation of momentum [16]. For comparisons of the DIR accuracy of different algorithms, several similarity measures exist, such as the Normalized Correlation Coefficient (NCC), Mutual Information (MI), the Dice Similarity Coefficient (DSC), and the Tanimoto Coefficient (TC). However, it is not clear which similarity measures are suitable for assessing the accuracy of deformable image registration. Therefore, it is necessary to investigate quantitatively the sensitivity of the image similarity measurement for each of the similarity measures. The purpose of this study is to evaluate the optimal similarity measures needed to obtain accurate DIR by using a phantom. To evaluate the optimal similarity measures for DIR, the accuracy of the similarity measures was estimated by using the phantom under a set of conditions. The phantom study was performed only under simple conditions, in order to measure as quantitatively as possible. The similarity measures we considered were the NCC, MI, DSC, and TC. Phantom study In this phantom study, the ISIS QA-1 (TGM 2 , Clearwater, FL) was used to evaluate the accuracy of the similarity measures (Figure 1). The ISIS QA-1 has been developed for quality assurance for CT simulators and treatment accelerators. The phantom is composed of two sections: a 25.4-mm Teflon sphere located in the center of the phantom and four built-in known electron-density values of bone, water, and lung at inhale and exhale. The ISIS QA-1 was scanned with a 4-slice GE Lightspeed RT wide-bore CT scanner (GE Medical Systems, Waukesha, WI). All images were acquired under the same CT scanner settings (kV, mA, slice thickness, etc.). First, a reference image was acquired with the phantom located in the center of the CT field of view. Then, to acquire the moving images, the phantom was scanned for various displacement values as it was moved to the left (range: 1.0-30.0 mm), for both the Teflon sphere section and the four electron density values (Figure 2). Then, the ISIS QA-1 was scanned at various Teflon sphere diameters (range: 0-25.4 mm). The reference image was defined as the 25.4-mm sphere diameter image. The image similarity with respect to the reference image was computed for each condition by using the similarity measures (Figure 3). Similarity measures To assess the optimal similarity measures for the accuracy of different DIR algorithms, we evaluated four similarity measures as follows. Normalized correlation coefficient Cross-correlation can be used as a measure for calculating the degree of similarity between two images. The advantage of the NCC over cross-correlation is that it is less sensitive to linear changes in the amplitude of grayscale values in the two compared images. 
Furthermore, the NCC is confined in the range between -1 and 1. If the two images correspond completely, the value of NCC is 1. In its mathematical definition, A(i, j) and B(i, j) are the moving image and the reference image at the coordinate (i, j), respectively, and N and M represent the dimensions of the image matrix N×M. Mutual information Mutual information is an information theory measure of the statistical dependence between two random variables, which represents an entropy measure. The most commonly used measure of information in image processing is the Shannon-Wiener entropy measure. The entropy of the image can be thought of as a measure of dispersion in the distribution of the image grayscale values. Maximization of MI indicates complete correspondence between two images. The MI is defined in terms of the entropies and the joint entropy, where P A (a) and P B (b) are the marginal probability mass functions and P A,B (a,b) is the joint probability mass function. The MI measures the degree of dependence between A and B by measuring the distance between the joint distribution P A,B (a,b) and the distribution associated with the case of complete independence P A (a) P B (b). The probability mass function P A,B (a,b) can be calculated using the joint histogram of two images. The MI is evaluated for grayscale values a and b at equivalent locations in the two images A and B. Here, the MI was normalized to have a value of 1 when there is complete correspondence between the images in order to compare it with other similarity measures. Dice similarity coefficient The DSC is a similarity measure between images A and B, which ranges from 0 for no correspondence between the images to 1 for complete correspondence. Tanimoto coefficient The TC (also known as the extended Jaccard coefficient) is another measure of the similarity between images A and B. A higher TC indicates a better correspondence between the images. A value of 1 indicates complete correspondence, and a value of 0 means that there is no correspondence at all. Figure 4 shows the image similarity of the various displacements of the phantom at the section of the Teflon sphere. The mean rates of change of the image similarities with displacement can be obtained from the slopes of linear fits to the data, which are 0.0015 (NCC), 0.0019 (DSC), 0.0035 (TC), and 0.0163 (MI). These values also indicate the sensitivity of the image similarity measurement. The image similarities indicated by the NCC, DSC, and TC all show a similar slight decrease with increasing phantom offset in position. Compared to other similarity measures, the MI showed a significant decrease in image similarity with phantom offset. Figure 5 shows the image similarity using the part of the phantom containing the four electron density values, which has a more complex Hounsfield unit (HU) distribution in the CT image than the Teflon sphere section. The mean rates of change of the image similarities with displacement in Figure 5 were found to be 0.0017 (NCC), 0.0023 (DSC), 0.0045 (TC), and 0.0190 (MI). The results were similar under both phantom conditions. Therefore, the MI showed the highest sensitivity of all the image similarity measures in the various phantom displacement studies. Figure 6 shows the image similarity for the various Teflon sphere diameters. 
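Since the equations of the four measures are not reproduced in the scanned text, the sketch below gives numpy implementations of their common forms. These are assumptions of the sketch: the DSC and TC are written in their usual grayscale (intensity-weighted) generalizations, and the MI is normalized by the mean of the marginal entropies, which is one common way of obtaining a value of 1 for identical images, not necessarily the authors' exact choice.

```python
import numpy as np

def ncc(a, b):
    """Normalized correlation coefficient in [-1, 1]; 1 for perfect linear agreement."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

def mutual_information(a, b, bins=64, normalized=True):
    """Shannon mutual information estimated from the joint grayscale histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
    h_a = -np.sum(p_a[p_a > 0] * np.log2(p_a[p_a > 0]))
    h_b = -np.sum(p_b[p_b > 0] * np.log2(p_b[p_b > 0]))
    h_ab = -np.sum(p_ab[p_ab > 0] * np.log2(p_ab[p_ab > 0]))
    mi = h_a + h_b - h_ab
    # One possible normalization giving 1 for identical images (an assumption of this sketch)
    return float(mi / (0.5 * (h_a + h_b))) if normalized else float(mi)

def dice(a, b):
    """Grayscale Dice coefficient: 2*sum(A*B) / (sum(A^2) + sum(B^2))."""
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    return float(2 * np.sum(a * b) / (np.sum(a**2) + np.sum(b**2)))

def tanimoto(a, b):
    """Tanimoto (extended Jaccard) coefficient: sum(A*B) / (sum(A^2) + sum(B^2) - sum(A*B))."""
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    inter = np.sum(a * b)
    return float(inter / (np.sum(a**2) + np.sum(b**2) - inter))

def similarity_in_roi(reference, moving, roi, measure):
    """Evaluate a measure only inside a rectangular ROI (row_start, row_stop, col_start, col_stop)."""
    r0, r1, c0, c1 = roi
    return measure(reference[r0:r1, c0:c1], moving[r0:r1, c0:c1])
```

The ROI helper mirrors the ROI-confined evaluation used later in the study; the ROI coordinates passed to it are placeholders to be chosen around the structure of interest.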
Despite the fact that the Teflon sphere diameter gradually decreased to 0 mm, the mean rates of change of the image similarities with respect to sphere diameter indicate almost no change for the NCC, DSC, and TC. Although the image similarity calculated from the MI initially shows a decrease, it indicates a constant value thereafter. Discussion For the two phantom displacement studies, the image similarity sensitivities calculated from all of the similarity measures demonstrated the same tendency. Consequently, the image similarity sensitivity of the MI is higher than that of the other similarity measures, and it does not depend on the complexity of the HUs in the CT images. The reasons for this are as follows: the NCC indicates the linearity of the image similarity between two images in a pixel-by-pixel manner, while the DSC and TC simply express the rate of change of the image similarity between two images with a change in displacement. Thus, for the NCC, DSC, and TC, when there are many pixel values that match between the CT images, the differences in image similarity may cancel out. However, the MI demonstrates good sensitivity to the image similarity for the phantom displacement case, because MI is not calculated pixel by pixel but instead uses the joint histogram of the grayscale values of the two images. The joint histogram is used to estimate a joint probability distribution of their grayscale values by dividing each entry in the histogram by the total number of entries. For the study involving various Teflon sphere diameter values, none of the similarity measures showed any significant differences. The results suggested that there are many grayscale values that match between the two images. To verify this explanation, we drew regions of interest (ROIs) around each of the various Teflon sphere diameter images as shown in Figure 7, and the image similarities in each ROI were compared with the reference image (25.4 mm sphere diameter image) by using several similarity measures as shown in Figure 8. The image similarity sensitivities became higher for all the similarity measures. In particular, the NCC showed a negative correlation as compared with the reference image when the Teflon sphere diameter was zero. Similarly, in Figure 8, the image similarity calculated by the MI initially shows a decrease, and thereafter, it has a constant value. Based on these results, it seemed that the similarity measures other than MI can be used for image similarity measurement when it is possible to define an ROI. To estimate similarity measures for clinical images, we tested patient data obtained from four-dimensional computed tomography (4DCT) images (Figure 9), because the 4DCT images have complete correspondence in the locations for each respiratory phase image. Therefore, the error caused by the difference in the location between images can be excluded. Figure 10 shows a comparison of the differences in similarity measures for the lung cancer images obtained with 4DCT for different respiratory phases. The 4DCT dataset used comprised 10 respiratory phases. The end-inhalation phase was typically defined as the 0% phase, and the end-exhalation phase was defined as the 50% phase. In this study, the similarity measures of the 4DCT images were evaluated with respect to the 50% phase image. That is, the image similarity with respect to the 50% phase image decreases as the percentage of the respiratory phase increases. 
The NCC image similarity is constant at about 1.0, and the MI has the largest rate of change from the 70% phase onward. The MI also indicated the greatest deviation in image similarity using the 4DCT images when the ROIs are defined as around the lung tumor (Figures 11 and 12). From these results, the image similarity measurement using an ROI is also found to be effective for clinical imagery. Consequently, the MI has the highest image similarity sensitivity among the tested similarity measures. Future studies will estimate the accuracy for each DIR method by using MI. Conclusions We have demonstrated the optimal similarity measures needed to obtain accurate DIR by using a phantom. In this study, although the NCC, DSC, and TC showed almost the same sensitivity tendency in measuring image similarity, MI showed the best performance among the tested similarity measures. A modest difference between two images can be obscured under the influence of the image background and many static regions when evaluating image similarity using the entire CT image. Therefore, in such a case, it may be possible to detect image differences by using similarity measures that confine the analysis to a region of interest.
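As a usage illustration of the measures sketched earlier, the loop below evaluates each respiratory phase against the 50% (end-exhalation) reference, in the spirit of the 4DCT comparison described above. It reuses the ncc and mutual_information functions from the previous sketch, and the synthetic_phase helper is a placeholder standing in for real 4DCT phase images.

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.normal(size=(128, 128))

def synthetic_phase(shift):
    """Stand-in for a 4DCT phase image: the reference shifted by a few pixels (placeholder for real data)."""
    return np.roll(base, shift, axis=1)

reference = synthetic_phase(0)   # plays the role of the 50% (end-exhalation) phase
for phase, shift in zip(range(0, 100, 10), [5, 4, 3, 2, 1, 0, 1, 2, 3, 4]):
    img = synthetic_phase(shift)
    print(f"{phase:2d}%  MI = {mutual_information(reference, img):.3f}  NCC = {ncc(reference, img):.3f}")
```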
3,334.8
2012-11-27T00:00:00.000
[ "Medicine", "Physics" ]
On the embedding of convex spaces in stratified L-convex spaces Let L be a continuous lattice. Two functors from the category of convex spaces (denoted by CS) to the category of stratified L-convex spaces (denoted by SL-CS) are defined. The first functor enables us to prove that the category CS can be embedded in the category SL-CS as a reflective subcategory. The second functor enables us to prove that the category CS can be embedded in the category SL-CS as a coreflective subcategory when L satisfies a multiplicative condition. By comparing the two functors and the well-known Lowen functor (between topological spaces and stratified L-topological spaces), we exhibit the difference between (stratified L-)topological spaces and (stratified L-)convex spaces. Here L and M are complete lattices. When L = 2, an (L, M)-convexity is precisely an M-fuzzifying convexity; when M = 2, an (L, M)-convexity is precisely an L-convexity; and when L = M = 2, an (L, M)-convexity is precisely a convexity. Similar to (lattice-valued) topology, the categorical relationships between convexity and lattice-valued convexity are an important direction of research. When L is a completely distributive complete lattice with some additional conditions, Pang and Shi (2016) proved that the category of convex spaces can be embedded in the category of stratified L-convex spaces as a coreflective subcategory. In this paper, we shall continue to study the categorical relationships between convex spaces and stratified L-convex spaces. We shall investigate two embedding functors from the category of convex spaces (denoted by CS) to the category of stratified L-convex spaces (denoted by SL-CS). The first functor enables us to prove that the category CS can be embedded in the category SL-CS as a reflective subcategory when L is a continuous lattice. The second functor enables us to prove that the category CS can be embedded in the category SL-CS as a coreflective subcategory when the continuous lattice L satisfies a multiplicative condition. The second functor is an extension of Pang and Shi's functor (2016) from the lattice context: precisely, from completely distributive complete lattices to continuous lattices. The second functor can also be regarded as an analogue of the well-known (extended) Lowen functor between the category of topological spaces and the category of stratified L-topological spaces (Höle and Kubiak 2007;Lai and Zhang 2005;Li and Jin 2011;Lowen 1976;Warner 1990;Yue and Fang 2005). By comparing the two functors with the Lowen functor, we exhibit the difference between (stratified L-)topological spaces and (stratified L-)convex spaces from a categorical point of view. The contents are arranged as follows. In the "Preliminaries" section, we recall some basic notions as preliminaries. In the "CS reflectively embedding in SL-CS" section, we present the reflective embedding of the category CS in the category SL-CS. In the "CS coreflectively embedding in SL-CS" section, we focus on the coreflective embedding of the category CS in the category SL-CS. Finally, we end this paper with a conclusion. Preliminaries Let L = (L, ≤, ∨, ∧, 0, 1) be a complete lattice, with 0 the smallest element and 1 the largest element. For a directed subset D ⊆ L, we use ∨ ↑ D to denote its union. For a, b ∈ L, we say that a is way below b, in symbols a ≪ b, if for every directed subset D ⊆ L with b ≤ ∨ ↑ D there exists d ∈ D such that a ≤ d (Gierz et al. 2003). Throughout this paper, L denotes a continuous lattice, unless otherwise stated. Continuous lattices have a strong flavor of theoretical computer science (Gierz et al. 2003). 
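Because the way-below definition and the multiplicativity condition are truncated in the scanned text, the following LaTeX fragment restates the standard formulations from domain theory (Gierz et al. 2003) that the rest of the section relies on; these are textbook definitions supplied for completeness, not wording recovered from the paper.

```latex
% Standard definitions (Gierz et al. 2003)
\begin{itemize}
  \item For $a,b\in L$, $a$ is \emph{way below} $b$ (written $a\ll b$) if for every
        directed subset $D\subseteq L$ with $b\le\bigvee^{\uparrow} D$ there exists
        $d\in D$ such that $a\le d$.
  \item $L$ is a \emph{continuous lattice} if $b=\bigvee^{\uparrow}\{a\in L\mid a\ll b\}$
        holds for every $b\in L$.
  \item The way-below relation on $L$ is \emph{multiplicative} if $a\ll b$ and $a\ll c$
        together imply $a\ll b\wedge c$.
\end{itemize}
```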
The following lemmas collect some properties of way below relation on a continuous lattice. Lemma 1 (Gierz et al. 2003 Lemma 2 (Gierz et al. 2003) Let L be a continuous lattice and let {a j,k | j ∈ J , k ∈ K (j)} be a nonempty family of element in L such that {a j,k | k ∈ K (j)} is directed for all j ∈ J. Then the following identity holds. where N is the set of all choice functions h : Let X be a nonempty set, the functions X −→ L, denoted as L X , are called the L-subsets on X. The operators on L can be translated onto L X in a pointwise way. We make no difference between a constant function and its value since no confusion will arise. For a crisp subset A ⊆ X, we also make no difference between A and its characteristic function χ A . Clearly, χ A can be regarded as an L-subset on X. Let f : X −→ Y be a function. For a nonempty set X, let 2 X denotes the powerset of X. Definition 1 (Van De Vel 1993) A subset C of 2 X is called a convex structure on X if it satisfies: The pair (X, C) is called a convex space. A mapping f : (X, C X ) −→ (Y , C Y ) is called convexity-preserving (CP, in short) provided that B ∈ C Y implies f −1 (B) ∈ C X . The category whose objects are convex spaces and whose morphisms are CP mappings will be denoted by CS. Definition 2 (Maruyama 2009; Pang and Shi 2016) A subset C of L X is called an L-convex structure on X if it satisfies: The category whose objects are stratified L-convex spaces and whose morphisms are L-CP mappings will be denoted by SL-CS. Definition 3 (Adámek et al. 1990) Suppose that A and B are concrete categories; CS reflectively embedding in SL-CS In this section, we shall present a functor from the category CS to the category SL-CS, and then by using it to prove that the category CS can be embedded in the category SL-CS as a reflective subcategory. At first, we fix some notations. For a ∈ L, x ∈ X, we denote x a as the L-subset values a at x and values 0 otherwise. For ∈ L X , let pt( ) = {x a | a ≪ (x)} and let Fin( ) denote the set of finite subset of pt( ). Obviously, = ∨pt( ) = ∨{∨F | F ∈ Fin( )}. Definition 4 Let (X, C) be a convex space and let Proof (LCS). It is obvious. (LC2). It is easily seen that B is closed for the operator ∧. . It follows by Lemma 1 (4) that a ≪ j xa (x) for some j x α ∈ J. Since { j } j∈J is directed then there exists a j ∈ J, denote as j F , such that j xa ≤ j F for all j x a . By Lemma 1 (2) we get a ≪ j F (x). This shows that F ∈ Fin( j F ). By a similar discussion on j F we have that F ∈ Fin(µ j F ,k F ) for some k F ∈ K (j F ). It follows that ∨F ≤ µ j F ,k F . We have proved that for any F ∈ Fin( ), there exists a µ j,k such that ∨F ≤ µ j,k . Because B is closed for ∧, we get that σ is definable. Lemma 3 Let Proof The sufficiency is obvious. We check the necessity. Let χ U ∈ ω 1 L (C). Then with ∀j ∈ J , a j ∈ L, U j ∈ C and {a j ∧ U j } j∈J is directed. Without loss of generality, we assume that a j � = 0 for all j ∈ J. It is easily seen that {U j } j∈J is directed. In the following we check that On one hand, it is obvious that U ⊇ U j for any j ∈ J and so U ⊇ ↑ j∈J U j . On the other hand, for any x ∈ U we have which means x ∈ U j for some j ∈ J. Thus U ⊆ ↑ j∈J U j as desired. It follows that f : (X, ω 1 Conversely, let f : It is easily seen that the correspondence (X, C) � → (X, ω 1 L (C)) defines an embedding functor Proposition 2 (Pang and Shi 2016) Let (X, C) be a stratified L-convex space. 
Then the set ρ L (C) = {U ∈ 2 X | U ∈ C} forms a convex structure on X and the correspondence (X, C) � → (X, ρ L (C)) defines a concrete functor Theorem 1 The pair (ρ L , ω 1 L ) is a Galois correspondence and ρ L is a left inverse of ω 1 L . Proof It is sufficient to show that ρ L • ω 1 L (C) = C for any (X, C) ∈ CS and ω 1 L • ρ L (C) ⊆ C for any (X, C) ∈ L-CS. with ∀j ∈ J , a j ∈ L, U j ∈ C and {a j ∧ U j } j∈J is directed. It follows by the definition of stratified L-convex space that ∈ C. CS coreflectively embedding in SL-CS In this section, we shall give a functor from the category CS to the category SL-CS, and then by using it to prove that the category CS can be embedded in the category SL-CS as a coreflective subcategory. This functor extends Pang and Shi's functor (2016) from the lattice-context. Precisely, from completely distributive complete lattice to continuous lattice. Firstly, we fix some notations used in this section. Let ∈ L X and a ∈ L. Then the set [a] := {x ∈ X| a ≤ (x)} and the set (a) := {x ∈ X| a ≪ (x)} are called the a-cut and strong a-cut of , respectively. Let a, b ∈ L, we say that a is wedge below b (in symbol, a ⊳ b) if for all subsets D ⊆ L, y ≤ ∨D always implies that x ≤ d for some d ∈ D. For each a ∈ L, denote β(a) = {b ∈ L| b ⊳ a}. The following lemma generalizes Huang and Shi's result from lattice-context. Huang and Shi (2008) defined (a) := {x ∈ X| a ⊳ (x)} and assumed that L being completely distributive complete lattice. The way below relation ≪ on L is called multiplicative (Gierz et al. 2003 Lemma 6 Assume that the way below relation ≪ on L is multiplicative. Then for any ∈ L X and any a ∈ L, the set In addition, by the multiplicative condition we have a ≪ b ∧ c. This proves that Pang and Shi (2016) proved a similar result when L being a completely distributive complete lattice with the condition β(a ∧ b) = β(a) ∩ β(b) for any a, b ∈ L. It is easily seen that this condition is equivalent to that the wedge below relation on L is multiplicative. Definition 5 Let (X, C) be a convex space and the way below relation ≪ on L be multiplicative. Then the set ω 2 L (C) defined below is a stratified L-convex structure on X, Proof The proofs of (LCS) and (LC2) are obvious. We only check (LC3) below. Let { j } j∈J ⊆ ω 2 L (C) be directed and a ∈ L. Then It follows immediately that j∈J ↑ j ∈ ω 2 L (C) by ( j ) [c] ∈ C for any j ∈ J , c ∈ L and C being closed for intersection and directed union. Lemma 4(1) Similar to Pang and Shi (2016), we can prove that the correspondence (X, C) � → (X, ω 2 L (C)) defines an embedding functor Let (X, C) be a stratified L-convex space. Then Pang and Shi (2016) defined ι L (C) as the finest convex structure on X which contains all [a] for all ∈ C, a ∈ L. They proved that the correspondence (X, C) � → (X, ι L (C)) defined a concrete functor Similar to Pang and Shi (2016), when the way below relation ≪ on L being multiplicative, we get the following results. Theorem 2 The pair (ω 2 L , ι L ) is a Galois correspondence and ι L is a left inverse of ω 2 L . Corollary 2 The category CS can be embedded in the category SL-CS as a coreflective subcategory. Remark 1 Let us replace the convex space (X, C) in Definition 4 and Definition 5 with a topological space (X, T ). Then ω 2 L defines an embedding functor from the category of topological spaces to the category of stratified L-topological spaces. 
This functor was first proposed by Lowen (1976) for L = [0, 1] and then extended by many researchers (Höle and Kubiak 2007;Lai and Zhang 2005;Liu and Luo 1997;Wang 1988;Warner 1990). If we further remove the directed condition in ω 1 L then we also get an embedding functor from the category of topological spaces to the category of stratified L-topological spaces. By the definition of stratified L-topology, it is easily seen that ω 1 L (T ) ⊆ ω 2 L (T ). Conversely, if ∈ ω 2 L (T ) then = a∈L (a ∧ [a] ) ∈ ω 1 L (T ). Thus ω 1 L = ω 2 L and it follows the following well known result. That is, the category of topological spaces can be embedded in the category of stratified L-topological spaces as a both reflective and coreflective subcategory. Remark 2 Does CS can be embedded in L-CS as a both reflective and coreflective subcategory? Now, we can not answer it. For a convex space (X, C), the inclusion ω 1 L (C) ⊆ ω 2 L (C) holds obviously. But the reverse inclusion seems do not hold. The reason is that for an L-subset ∈ L X , the set {a ∧ [a] | a ∈ L} is generally not directed. At last, we give two interesting examples to distinguish (L-)convex space from (L-) topological spaces. Example 1 An upper set U on L is called Scott open if for each directed set D ⊆ L, ∨ ↑ D ∈ U implies that d ∈ U for some d ∈ D. It is known that the Scott open sets on L form a topology L, called the Scott topology (Gierz et al. 2003). It is not difficult to check that the Scott open sets on L do not form a convex structure on L since they are not closed for intersection. Example 2 An L-filter (Höhle and Rodabaugh 1999) on a set X is a function F : L X −→ L such that for all , µ ∈ L X , (F1) The set of L-filters on X is denoted by F L (X). Since F L (X) is a subset of L (L X ) , hence, there is a natural partial order on F L (X) inherited from L (L X ) . Precisely, for F, G ∈ F L (X) , It is known that F L (X) is closed for intersection, but is not closed for union (Fang 2010;Jäger 2001). In the following, we check that F L (X) is closed for directed union. Let {F j } j∈J ⊆ F L (X) be directed. Then it is readily seen that ↑ j∈J F j satisfies the conditions (F1) and (F2). Taking , µ ∈ L X , then Thus ↑ j∈J F j satisfies the condition (F3). We have proved that F L (X) is closed for directed union. 1. Let Y = L X and C = {0, 1} ∪ F L (X). Then it is easily seen that C is an L-convex structure on Y but not an L-topology on Y. 2. If we call a function F : L X −→ L satisfying (F2) and (F3) as a nearly L-filter on X. Let F N L (X) denote the set of nearly L-filters on X. Then it is easily seen that F N L (X) is a stratified L-convex structure on Y but not a stratified L-topology on Y. Note that L X forms a continuous lattice. If replacing L X with a continuous lattice M, similar to (1)-(2), we can define (stratified) L-convex structure on M. Conclusions When L being a continuous lattice, an embedding functor from the category CS to SL-CS is introduced, then it is used to prove that the category CS can be embedded in the category SL-CS as a reflective subcategory. When L being a continuous lattice with a multiplicative condition, Pang and Shi's functor (2016) is generalized from the lattice context, then it is used to prove that the category CS can be embedded in the category SL-CS as a coreflective subcategory. It is well known that the category of topological spaces can be embedded in the category of stratified L-topological spaces as a both reflective and coreflective subcategory. 
However, we find that the category of convex spaces does not seem to embed in the category of stratified L-convex spaces as both a reflective and a coreflective subcategory. This shows the difference between (stratified L-)topological spaces and (stratified L-)convex spaces from a categorical point of view.
3,987.2
2016-09-20T00:00:00.000
[ "Mathematics" ]
Method for determining the rational parameters of dynamic dampers of low-frequency vibrations The problems of natural nonlinear vibrations of an absolutely rigid semiball and a semicylinder on a horizontal plane have been considered, assuming that there is no energy dissipation, sliding or tipping on the foundation. To adjust the damper to a frequency close to the fundamental tone of vibrations, it is necessary to assess the natural frequency of the damper, which is determined under the assumption of a small vibration amplitude. This paper presents a comparison of the natural frequencies of the linearized and nonlinear systems. The relative error of the natural frequency calculation caused by linearization has been estimated. It is shown that the ratio of the natural frequency of the linearized system to the natural frequency of the nonlinear system does not depend on the mass and radius. This conclusion made it possible to generalize the results of particular computational solutions and to obtain a formula which takes into account the influence of the amplitude on the natural vibration frequency and helps to determine the natural frequency for initial angles up to ninety degrees. Introduction The dynamic dampers of vibrations, which are characterized by low-frequency natural vibrations (less than 10 Hz, and often less than 1 Hz), are widely used to reduce the loads in different mechanisms and engineering structures [1] in mining operations and underground space development. In this case, vibration dampers with rolling bodies are used [2][3][4][5][6], which have high reliability. An overview of them is given in [2,6]. The damper parameters must be such that the natural frequency is close to the frequency of the fundamental tone of vibrations [2]. Therefore, it is important to know the natural frequency of the vibration damper, which, as a rule, is determined under the assumption of a small vibration amplitude, which makes it possible to linearize the motion equation. At high amplitudes, the nonlinear differential motion equation is solved by numerical methods, which, however, allow one to find only particular solutions for specific conditions. There is a need to generalize the particular solutions. The objective of this work is to develop a generalization method of numerical results for determining the rational parameters of dynamic dampers of vibrations, which provide the required natural frequency of nonlinear vibrations. Main part Let us consider one of the simplest dampers [6], made in the form of a semicylinder or a semiball 1, vibrating on a plane 2 (Fig. 1). When solving a number of applied problems related to the vibrational impact on a loose medium [7,8], the vibrations of a semicylinder and a semiball can serve as a model representation of the motion of solid particles. When a semicylinder swings, we assume that there is no energy dissipation, sliding or tipping on the foundation. In the equation of motion, φ is the rotation angle; g is the gravitational acceleration; and r1 is the semicylinder radius (identical symbols used in the description of the vibrations of the semicylinder and the semiball are recorded with indices 1 and 2, respectively). A dot above a letter means time differentiation. The initial conditions are φ = φ0 and dφ/dt = 0 at t = 0. Since dissipation is not considered, φ0 is the amplitude of the natural vibrations. The nonlinear equation (2) has no analytical solution. 
For its linearization, it is assumed that the angle φ is small, so that sin φ can be replaced by φ, which leads to equation (3). Dividing (3) by the factor in front of the second derivative, we obtain an equation containing the natural angular frequency of the linearized system. The frequency and period of natural vibrations then follow as (5). From (5) we determine the radius at which the semicylinder has the required natural frequency; this is formula (6). In order to determine the natural frequency of nonlinear vibrations for each specific value of r and φ0, equation (2) can be solved numerically. However, this makes it difficult to generalize the results. The task may be simplified if we consider that, based on the theorem on the change of kinetic energy, relation (7) holds. The semicylinder turns through an angle dφ during the corresponding time increment, so by integrating both sides of equation (7) we determine the vibration period. When integrating, the symmetry of the phase portrait with respect to the coordinate axes is taken into account, which makes it possible to set the integration limits 0 and φ0; to obtain the period, the result should be quadrupled. Figure 2 shows the characteristic form of the phase portrait, obtained by numerical integration of equation (2) using the Runge-Kutta-Fehlberg method of order 4-5 for amplitudes of 60 and 90 degrees. Hereinafter, the calculations are performed in the Maple mathematical package. Let us find the ratio of the natural frequencies (9); this value will be called the relative natural frequency. Note that it does not depend on the mass and radius of the semicylinder; this is the first provision of the method. The relative error caused by linearization of equation (2) is defined by (10) and expressed in percent. In Figure 3, the round markers represent the results of numerical integration of expression (9) by the Newton-Cotes method, and the triangular markers represent the errors. Since the relative natural frequency does not depend on the mass and radius of the semicylinder, taking into account (9) and (10), substituting (5) into (9) and solving for the radius, we obtain the radius at which the semicylinder has the required natural frequency, formula (13). In formula (13), in contrast to (6), there is an additional factor that accounts for the vibration amplitude. Let us analyse the vibrations of a damper [6] made in the form of a semiball. When modelling the semiball motion, we use the assumptions and numerical methods described above. The equation of semiball vibrations [10-12] is written in the form (15), where Ip is the moment of inertia of the semiball relative to the instantaneous centre, m is the semiball mass, and l is the distance of the mass centre C from the semiball base O. The moment of inertia Ip and the distance l are calculated through Ic, the moment of inertia of the semiball relative to the horizontal axis passing through the centre of inertia and perpendicular to the drawing plane. Having divided (15) by Ic, we obtain the normalized equation of motion. Proceeding in the same way as when determining the period of nonlinear vibrations of the semicylinder, we obtain the corresponding period and frequency expressions. Note that in this case as well the relative natural frequency does not depend on the mass and radius. In Figure 3, the results of the numerical integration of expression (19) and the corresponding error are also represented. With account of (19) and (20), we obtain an expression for determining the natural frequency of nonlinear vibrations.
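The comparison of linearized and nonlinear natural frequencies can be reproduced numerically in a few lines. The sketch below is only illustrative and does not use the paper's equation (2), which is not reproduced here: a plane pendulum serves as a stand-in for the rocking body, and the length L and the amplitudes are arbitrary placeholders. It integrates the nonlinear equation with an adaptive Runge-Kutta scheme (the paper uses an order 4-5 Runge-Kutta-Fehlberg method in Maple; SciPy's RK45 plays the same role here), estimates the period from the first zero of the angular velocity, and reports the ratio of linearized to nonlinear frequency, i.e. the relative natural frequency.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 9.81   # gravitational acceleration, m/s^2
L = 0.5    # length of the stand-in pendulum, m (placeholder value)

def rhs(t, y):
    # y = [phi, phi_dot]; stand-in dynamics: plane pendulum.
    # For the damper itself, substitute the right-hand side of equation (2)
    # for the semicylinder, or of (15) for the semiball.
    phi, phi_dot = y
    return [phi_dot, -(G / L) * np.sin(phi)]

def turning_point(t, y):
    # the angular velocity crosses zero at every turning point
    return y[1]
turning_point.terminal = True
turning_point.direction = 1   # catch only the crossing at t = T/2

def nonlinear_period(phi0):
    """Period of free vibration started from rest at amplitude phi0 (radians)."""
    omega_lin = np.sqrt(G / L)
    sol = solve_ivp(rhs, (0.0, 100.0 / omega_lin), [phi0, 0.0],
                    events=turning_point, rtol=1e-10, atol=1e-12)
    return 2.0 * sol.t_events[0][0]   # the first event occurs at half the period

omega_lin = np.sqrt(G / L)            # linearized natural angular frequency
for deg in (10, 30, 60, 90):
    phi0 = np.radians(deg)
    omega_nl = 2.0 * np.pi / nonlinear_period(phi0)
    ratio = omega_lin / omega_nl      # relative natural frequency
    error = 100.0 * (ratio - 1.0)     # one way to express the linearization error, %
    print(f"phi0 = {deg:3d} deg   omega_lin/omega_nl = {ratio:.4f}   error = {error:.2f} %")
```

For the actual damper, rhs would be replaced by the equation of motion of the semicylinder or semiball; the rest of the procedure, including the comparison against the linearized frequency, stays the same.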
1,510.6
2019-06-01T00:00:00.000
[ "Engineering", "Physics" ]
Correlations Between the Scaling Factor and Fitness Values in Differential Evolution Designing fitness-based adaptive scaling factor (F) is an effective method to enhance the performance of differential evolution (DE) algorithms. This paper investigates the correlations between F and fitness values of target vectors, base vectors and difference vectors. The correlations are described by the notations of monotonicity and nonlinearity. Monotonicity is used to examine whether the optimization performance of DE and the fitness values of certain vectors have positive or negative correlation. Nonlinearity denotes the operation in which nonlinear mappings are used to redistribute the values of F in [0, 1] so as to boost the optimization performance. These two aspects of correlations are empirically tested on the Numerical Optimization Competition benchmark functions in IEEE Congress on Evolutionary Computation. Simulation results reveal different qualitative and quantitative correlations between F and fitness values of different vectors. Then, a new F that combines these relations is designed. Its strength is numerically verified by testing different CEC Benchmark functions. I. INTRODUCTION The differential evolution (DE) algorithm invented by Storn and Price [1] is a powerful population-based global searching tool. DE is believed to be effective for problems involving nonlinear and non-differentiable functions [2]. The number of DE research articles indexed in Science Citation Index database (via Web of Science) during 2007 to 2015 was 8714, as indicated in Reference [3], [4]. DE has been successfully implemented in diverse areas, such as spacecraft trajectory design [5] and statistical fisheries model estimation [6]. The major applications were also summarized in Ref. [3], [4]. DE employs the difference between distinct members from the current population as a guidance to search for a better solution. Compared with other intelligent algorithms, DE has the merits of few control parameters, good optimization performance and low space complexity [3]. Nevertheless, as an evolutionary algorithm, DE needs to compute large numbers of fitness functions to obtain the global optimum. To improve the performance, different types of enhanced DE have been developed in recent years. The comprehensive surveys of DE can be found in [3], [4], [7]. Among various techniques to improve DE, choosing good values of the scaling factor (F) in each generation is commonly an efficient option. Generally, the mechanisms to design F can be categorized into four groups: fixed value, random value, history-based adaption and fitness-based adaption. Fixed value indicates that F remains constant during the whole optimization. Storn and Price [1] indicated that F is not difficult to choose for good results. In their opinion, 0.5 can be a good initial choice of F. After testing different parameter settings for DE on the Sphere, Rosenbrock's and Rastrigin's functions, Gämperle et al.
[8] found that the global searching ability and the convergence are very sensitive to the value of F. They suggested 0.6 as the initial choice. In another paper, Rönkkönen et al. [9] stated that setting F = 0.9 can balance well between the speed and probability of convergence. The benefits of fixed value lies in its simplicity. For complex problems such as the multimodal optimization problem [10] and problems with constrained experimental domain [11], the fixed value has been successfully employed. Alternatively, the value of F can be updated by random functions, namely, random value type. Das et al. [12] proposed the DE with Random Scale Factor (DERSF) and the DE with Time Varying Scale Factor (DETVSF). DERSF allows for stochastically scaling difference vectors, and thus, can help to retain population diversity. With DETVSF, individuals are encouraged to sample diverse zones of the search space during the early stages of the search. In the late stage, they tend to exploit the interior of a relatively small space in which the suspected global optimum lies. In the SaDE algorithm [13], F is varied by a normal distribution with a mean value of 0.5 and a standard deviation of 0.3. By doing so, SaDE attempts to maintain both exploration capability (with large F values) and exploitation capability (with small F values). Compared with fixed value, randomization can produce more values of F. Thus, it can enhance the performance to some extent. In some modified DE, such as TLBSaDE [14] and MDE [15], random values of F were used. The third type, namely history-based adaption technique, adaptively computes F by learning from the past generations of successes. It is widely applied in adaptive and self-adaptive DE algorithms, such as JADE [16], jDE [17] and SHADE [18]. In recent years, many SHADE-based algorithms [19] have been proposed; and some have performed well in testing different IEEE CEC benchmark functions. However, the mechanism to design the scaling factor in these improved DE algorithms is similar to that in SHADE. Based on the analysis in [20], an ensemble sinusoidal approach to automatically adapt the values of F was designed in LSHADE-EpSin [21]. It is believed that the performance of LSHADE-EpSin is better than that of SHADE. EsDEr-NR [22] is an enhanced version of LSHADE-EpSin. The last type of scaling-factor designing technique is fitness-based adaption, in which F is usually determined by fitness values from the current population. The first research concerning fitness-based adaption was by Ali and Törn [23]. It employed the minimum and maximum fitness values of current generation to calculate F. Ghosh et al. [24] developed a new fitness-based technique considering the fitness difference between the target vector and the best vector. Based on the idea that F for individuals with higher fitness values are larger, Tang et al. [25] designed the rank-based scheme and value-based scheme. In 2017, Mohamed introduced the triangular mutation scheme. In that paper, the adaptive scheme of F also takes into account both the minimum and maximum fitness values in the current generation [26]. As can be seen from the aforementioned reviews of F-designing techniques, history-based and fitness-based adaptive F are favorable in practice. However, compared to the abundance in history-based adaptive strategies, few researches have addressed fitness-based adaptive schemes. Besides, most existing fitness-based methods mainly focus on the minimum and maximum fitness values in each generation [23]- [26]. 
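Before moving to the fitness-based perspective, two of the simpler F schemes reviewed above can be written down in a few lines. This is an illustrative sketch only: the normal distribution with mean 0.5 and standard deviation 0.3 follows the description of SaDE given above, while the uniform range of the random-value generator and the redraw rule used to keep F in (0, 1] are choices of this sketch, not formulas taken from [12] or [13].

```python
import numpy as np

rng = np.random.default_rng(0)

def F_fixed(value=0.5):
    """Fixed scaling factor, e.g. the initial choices 0.5 [1] or 0.6 [8]."""
    return value

def F_random(low=0.5, high=1.0):
    """Random-value scheme: draw a fresh F per mutation
    (the uniform range is a placeholder, not the exact formula of [12])."""
    return rng.uniform(low, high)

def F_sade_like():
    """SaDE-style F: normal(0.5, 0.3), redrawn until it lies in (0, 1]
    (the redraw rule is an implementation choice of this sketch)."""
    while True:
        F = rng.normal(0.5, 0.3)
        if 0.0 < F <= 1.0:
            return F
```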
So far, the fitness values of other vectors have been largely ignored, although we believe they encode important information about the general structure of the fitness function. Thus, each individual's fitness should be exploited so as to obtain good values of F. In order to understand the scaling factor from a fitness-based perspective, this paper comprehensively studies the correlations between F and fitness values. To that end, we propose a way to define the fitness-based correlation. The correlation addresses the fitness values of the target vector, the base vector and the difference vector. Then, qualitative and quantitative relations are found by testing on the IEEE CEC 2014 problems. To show the potential of these correlations, a new F that combines the correlations is designed. The performance of the new F is verified on the IEEE CEC 2014 and 2017 problems. Several classic and recent F-designing techniques are employed as a comparison. The remainder of this paper is organized as follows: Section II reviews the classical DE and improved DE. Section III details the method to establish the correlations between F and different fitness values. Section IV discusses the correlations based on the numerical experiments. Section V concludes the whole paper. II. CLASSICAL DE AND IMPROVED DE A. CLASSICAL DE ALGORITHM In this subsection, the classical DE algorithm [1] is briefly reviewed. In the rest of the paper, it is assumed that minimization problems are to be solved. There are four basic steps in the classical DE: initialization, mutation, crossover and selection. 1) INITIALIZATION The population of DE is represented as a set of NP vectors of dimension D, where D is the dimension of the variables and NP denotes the population size; the minimum and maximum of x are denoted x_min and x_max. A common method to initialize the i-th individual x_i,0 is to draw each component uniformly between the corresponding bounds, where unif(0, 1) is a uniformly distributed random variable within the range of [0, 1]. 2) MUTATION Let x_i,G be an individual at generation G. After initialization, a donor vector v_i,G with respect to the target vector x_i,G is produced by the mutation operator v_i,G = x_ri1,G + F (x_ri2,G − x_ri3,G), where the indices ri1, ri2 and ri3 are randomly generated mutually exclusive integers within the range of [1, NP] and are all different from the index i. These indices are generated anew for each donor vector. Here, x_ri1,G is termed the base vector, and x_ri2,G − x_ri3,G is called the difference vector. F is the scaling factor and is usually constrained to the range of [0, 1]. 3) CROSSOVER The purpose of the crossover operator is to produce a trial vector u_i,G by combining x_i,G and v_i,G. Let u_i,G^j be the j-th component of u_i,G. The following rule is applied element-wise: u_i,G^j takes the value of v_i,G^j if r_i,j ≤ Cr or j = j_rand, and the value of x_i,G^j otherwise, where j_rand is a randomly chosen integer in the range of [1, D], r_i,j = unif(0, 1), and Cr is the crossover rate, defined in the range of [0, 1]. 4) SELECTION Once u_i,G is generated, the fitness values of u_i,G and x_i,G are calculated and compared, and the vector with the smaller fitness value survives to the next generation, where f(u_i,G) and f(x_i,G) represent the fitness values of u_i,G and x_i,G. The framework of the classical DE is shown in Algorithm 1. Generally, the techniques to improve DE can be categorized into four aspects, by designing, separately or in combination, the following: the mutation operator, the population size NP, the scaling factor F, and the crossover rate Cr. To classify the different variants of mutation operators, the notation ''DE/x/y/z'' is introduced.
Here, x represents the base vector to be perturbed, y denotes the number of difference vectors, and z stands for the type of crossover. Two types of crossover have been considered, namely exponential (exp) and binomial (bin). The binomial type is the one mostly used, and the ''DE/x/y/z'' notation is usually shortened to ''DE/x/y''. In the first paper on DE [1], the classical DE is denoted DE/rand/1. Later, DE/best/1, DE/rand/2, DE/best/2 [27] and DE/current-to-pbest/1 [16] were proposed and widely used in other improved DE. More complicated mutation operators were designed in [26], [28]-[30]. It should be noted that no matter which mutation operator is chosen for DE, designing the values of NP, F and Cr is always necessary. In 2006, Teo [31] first demonstrated the feasibility of self-adapting the population size parameter in DE. L-SHADE [32] showed a powerful performance improvement and ranked as the best algorithm on the IEEE CEC 2014 problems; in that paper, the linear population size reduction technique was used. Poláková et al. [33] improved the population reduction technique, allowing the population size to decrease or increase during the search. In EsDEr-NR [22], the niching-based population reduction method was employed to determine the population size in each generation. As for F and Cr, many significant developments have also been made, such as the invention of jDE [17], JADE [16], and DE-RCO [34]. Table 1 lists six improved DE algorithms. Here ''1'' indicates that one of the aforementioned features has been elaborately designed, ''0'' refers to retaining the original setting, ''−'' means that the parameter is not a main design concern, and the bold ''1'' denotes the parameter of primary concern in the paper. As can be seen, a large number of modern DE algorithms have complex strategies for jointly tuning all these factors. However, recent work also concentrates on studying only one parameter. On the one hand, focusing on one parameter can help to provide insights into understanding DE algorithms. On the other hand, new advances in tuning a single parameter can be plugged into existing DE algorithms to further improve performance, as is evident in [29] and [34]. Note that F is the unique feature of DE algorithms, as compared with NP and Cr. In fact, F is strongly bound to mutation operators. For example, in (4), F is used to scale the difference vector. In JADE [16], F is responsible for two difference vectors. As a first step to understanding fitness-adaptive DE, we choose the mutation operator as in (4). Based on (4), the correlations between F and the fitness values of different vectors will be discussed. III. CORRELATIONS BETWEEN F AND FITNESS VALUES A. ARCHITECTURE OF THE CORRELATIONS Note from Section II-A that in a DE algorithm, the trial vector is the vector that is evaluated and compared. As given by the diagram in (7), the contribution of F to the trial vector is mainly through the donor vector. At first glance, the donor vector and the target vector may appear to be independent of each other. However, the donor vector is weakly coupled with the target vector, since the base vector and the difference vector are deliberately generated to be different from the target vector. Thus, in this work it is reasonable to assume that F is related to the fitness values of target vectors, base vectors and difference vectors. In what follows, these vectors will be referred to as tested vectors. Let f(·) denote the fitness value of ''·''.
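For concreteness, here is a minimal sketch of the classical DE/rand/1/bin loop reviewed in Section II-A, which is also the mutation operator assumed in the rest of this analysis. It is illustrative only: the fitness function, bounds and parameter values are placeholders, and F is kept fixed, which is precisely the baseline that the fitness-based schemes discussed below aim to improve.

```python
import numpy as np

def de_rand_1_bin(fitness, bounds, NP=50, F=0.5, Cr=0.9, max_gen=1000, seed=0):
    """Classical DE/rand/1/bin with a fixed scaling factor F (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds                               # arrays of shape (D,)
    D = lo.size

    # 1) initialization: uniform sampling inside the box [lo, hi]
    pop = lo + rng.uniform(size=(NP, D)) * (hi - lo)
    fit = np.array([fitness(x) for x in pop])

    for _ in range(max_gen):
        for i in range(NP):
            # 2) mutation: v = x_r1 + F * (x_r2 - x_r3), indices distinct from i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])

            # 3) binomial crossover: inherit at least one component from v
            j_rand = rng.integers(D)
            mask = rng.uniform(size=D) <= Cr
            mask[j_rand] = True
            u = np.clip(np.where(mask, v, pop[i]), lo, hi)

            # 4) selection: greedy replacement, smaller fitness survives
            fu = fitness(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = np.argmin(fit)
    return pop[best], fit[best]

# usage: minimize the sphere function in 10 dimensions
if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x * x))
    lo, hi = -5.0 * np.ones(10), 5.0 * np.ones(10)
    x_best, f_best = de_rand_1_bin(sphere, (lo, hi), max_gen=300)
    print(f_best)
```

Any of the F schemes discussed in this paper can be dropped in by replacing the fixed F inside the mutation step with a per-individual value computed from the current fitness values.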
To evaluate the correlations between F and the fitness values of these vectors, the functional form of F is written as a combination of three contributions, given in (8), one for the target vector, one for the base vector and one for the difference vector, where r_i, r_i1, r_i2 and r_i3 refer to the indices in classical DE, and g_T, g_B, g_D represent the contributions of the target vector, base vector and difference vector, respectively. In the following, the functions g_T, g_B, g_D are to be resolved. To that end, several functional forms of the g(·) functions are considered to compute F. The computed F are then plugged into existing DE algorithms to run a large number of numerical experiments. The performance of each g(·) function is recorded and then compared. Good functional forms of F can thus be identified, and correlations between F and the fitness values of the tested vectors are obtained. In this paper, the following two features of the g(·) functions are mainly considered: monotonicity and nonlinearity. Here ''monotonicity'' refers to the comparison between optimization performance improvement/deterioration with F computed from two reversely designed formulae. For example, it is termed a ''positive correlation'' if F with g(f(v)) performs better than that with 1 − g(f(v)), where g(·) is a monotonically increasing function and v is a tested vector. On the other hand, ''nonlinearity'' denotes the operation that redistributes the values of F in [0, 1]. In this work we use modified power functions (see later in (15)) to account for the nonlinearity. B. MONOTONICITY In the following, g_T, g_B, g_D are designed elaborately. For g_T, F can be designed from the perspective of how deficient the target vector's fitness is relative to the current best. This is denoted the ''proportion type'' and is written as (9), where f_min,G and f_max,G are the minimum and maximum of the fitness values of generation G. On the contrary, F can be understood as the improvement of the target vector's fitness compared with the current worst. This is termed the ''reverse proportion'' type and is written as (10). Note that F_p,t + F_rp,t = 1. Similarly, for g_B, the proportion type of F is (11) and the reverse-proportion type of F is (12). For g_D, the proportion type of F is (13) and the reverse-proportion type is (14). Here F_p,d and F_rp,d can be understood as the local relative roughness/smoothness of the fitness function, thus providing an intuitive way to characterize its local structure. C. NONLINEARITY The better F in Section III-B, denoted by F_0, is used as a benchmark of performance. Then, nonlinear mappings of F_0, given by the modified power functions in (15) with powers 1/3, 1/2, 1, 2 and 3, are searched to improve the performance. Fig. 2 presents the new distribution of F after applying (15). As can be seen from Fig. 2, these maps have distinct behaviors. For power 1/2 and power 2, the new F spans the whole of [0, 1]. However, the power-2 map tends to concentrate the values of F toward smaller values, whereas the power-1/2 map prefers larger values. Similarly, the power-1/3 map and the power-3 map show two opposite behaviors. The power-1/3 map produces two peaks near the left end and the right end, whereas a concentration of F around 0.5 can be seen for the power-3 map. Thus, (15) can represent most of the change processes from F_0 to F. IV. NUMERICAL EXPERIMENTS AND RESULTS In this section, numerical experiments are designed to test the performance of different F. Inspired by machine learning practice, a ''training set'' is employed to determine the exact (proportion/reverse-proportion) types and the best powers of F for the tested vectors; a sketch of one consistent reading of these candidate formulas is given below.
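The explicit expressions of (9)-(14) are not reproduced in this text, so the sketch below encodes one consistent reading of them and should be taken as an assumption, not as the paper's exact formulas: the proportion type is read as (f − f_min,G)/(f_max,G − f_min,G), the reverse-proportion type as its complement (which matches F_p,t + F_rp,t = 1 and the roughness/smoothness interpretation of (13)/(14)), and the fitness attached to the difference vector as the absolute fitness difference of the two vectors forming it. The modified power functions of (15) are not reconstructed here.

```python
import numpy as np

EPS = 1e-14   # guard against division by zero when all fitness values coincide

def proportion(f, f_min, f_max):
    """Proportion type: relative deficiency of a fitness value w.r.t. the current best."""
    return (f - f_min) / (f_max - f_min + EPS)

def reverse_proportion(f, f_min, f_max):
    """Reverse-proportion type; note that proportion + reverse_proportion == 1."""
    return 1.0 - proportion(f, f_min, f_max)

def candidate_terms(fit, i_target, i_base, i_d1, i_d2):
    """Monotonic candidate terms for one mutation, one per tested vector.

    fit        : array of fitness values of the current generation
    i_target   : index i of the target vector
    i_base     : index r_i1 of the base vector
    i_d1, i_d2 : indices r_i2, r_i3 of the vectors forming the difference vector
    The 'fitness of the difference vector' is read here as |f(x_r2) - f(x_r3)|,
    which is an assumption of this sketch, not a formula taken from the paper."""
    f_min, f_max = fit.min(), fit.max()
    F_p_t = proportion(fit[i_target], f_min, f_max)            # reading of (9)
    F_rp_t = reverse_proportion(fit[i_target], f_min, f_max)   # reading of (10)
    F_p_b = proportion(fit[i_base], f_min, f_max)              # reading of (11)
    F_rp_b = reverse_proportion(fit[i_base], f_min, f_max)     # reading of (12)
    f_diff = abs(fit[i_d1] - fit[i_d2])
    F_p_d = f_diff / (f_max - f_min + EPS)                     # reading of (13): local roughness
    F_rp_d = 1.0 - F_p_d                                       # reading of (14): local smoothness
    return F_p_t, F_rp_t, F_p_b, F_rp_b, F_p_d, F_rp_d
```

With these terms in hand, the monotonicity question of Section III-B amounts to asking, for each tested vector, whether the proportion or the reverse-proportion variant yields better optimization results.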
Then a ''test set'' is used to validate the performance of these correlations. For the training set, ''Real-Parameter Single Objective Optimization'' of IEEE CEC 2014 (hereafter IEEE CEC 2014 problems) [35] are adopted. IEEE CEC 2014 problems have 30 benchmark functions: 3 unimodal functions, 13 simple multimodal functions, 6 hybrid functions and 8 composition functions. Both unimodal/multimodal and separable/non-separable problems are included. Many of the functions have large numbers of local optimums. In some cases, such as function 11 and 12, the second better local optimum is far from the global optimum. We reversely design two relations to find out the better monotonicity formulation for each tested vector. For example, (9) and (10) on target vector. These two relations, denoted by proportional and reverse-proportional formula, are used to solve the IEEE CEC 2014 problems. The one that performs better is chosen as the representative monotonic relation. For the test set, the benchmark functions (without function 2) in ''Real-Parameter Single Objective Optimization'' of IEEE CEC 2017 (hereafter IEEE CEC 2017 problems) [36] are considered. In IEEE CEC 2017 problems, there are also 30 benchmark functions: 3 unimodal functions, 7 simple multimodal functions, 10 hybrid functions and 10 composition functions. A. MONOTONICITY AND NONLINEARITY OF A SINGLE VECTOR Before solving an optimization problem, Cr and NP need to be designed. According to the paper of [13], Cr is usually sensitive to problems with different characteristics. Thus, the mechanism of determine the value Cr should be designed carefully. In this section, the method to design Cr is designed to be the same as that in EsDE r -NR [22]. As for NP, two cases are concerned: (1) adaptive NP, the same as that in EsDE r -NR [22]; (2) Fixed NP, NP = 5D [8], D is the dimension of the problem. 1) ADAPTIVE NP First, the adaptive NP is used. Table 2 compares (9) and (10) in testing the 10-D version of IEEE CEC 2014 problems. Here the benchmark is set as (10). In order to analyze the solution quality from a statistical point of view, the results are compared using the Wilcoxon's ranksum test with a significance level of 0.05 [37]. For each F and each benchmark function, the tests are run 51 independently. The mean and standard deviation of the errors for these runs are recorded in the table. The best solutions with the smallest error mean values for each function are marked in boldface font. After comparison, one of three signs (+, −, =) is assigned. ''+'' means that (9) performs significantly better than (10); ''−'' means that (9) performs worse than (10); When the two F have no obvious performance difference, their relation is represented as ''=''. Learning from the table, (10) performs better in 6 functions and worse in 4 functions. Thus, (10) is slightly better than (9). a: TARGET VECTOR Furthermore, the performance of mapping F in (10) via (15) is tested. Table 7 (See Appendix) records the detailed optimization results and the comparison results for each function. Figure 3 depicts the total numbers of ''+'', ''−'' and ''='' for different powers. Here the benchmark is set as power 3 of (10). It can be found that the 3 power of (10) outperforms 1/2, 2 and 1/3 to a large extent and is slightly better than 1. Thus, 3 is regarded as the best power of (10). Table 3 shows the comparison results of (11) and (12). Here the benchmark is set as (11). Compared to (12), (11) is better in 13 functions but only worse in 6 functions. 
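The +/−/= bookkeeping used throughout these comparisons is straightforward to reproduce. The sketch below applies SciPy's Wilcoxon rank-sum test at the 0.05 level to two sets of 51 final errors per benchmark function; the dictionaries of error arrays are placeholders to be filled with the actual runs.

```python
import numpy as np
from scipy.stats import ranksums

def compare_schemes(errors_a, errors_b, alpha=0.05):
    """Assign '+', '-' or '=' per benchmark function.

    errors_a, errors_b: dicts mapping a function name to the array of 51 final
    errors obtained with scheme A and with the benchmark scheme B."""
    verdicts = {}
    for name in errors_a:
        a, b = np.asarray(errors_a[name]), np.asarray(errors_b[name])
        _, p = ranksums(a, b)
        if p >= alpha:
            verdicts[name] = "="                                  # no significant difference
        else:
            verdicts[name] = "+" if a.mean() < b.mean() else "-"  # smaller error is better
    return verdicts

# summary counts as reported in the tables and figures:
# counts = {s: sum(v == s for v in verdicts.values()) for s in "+-="}
```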
Intuitively, if the fitness value of the base vector is low, a smaller F is better because the donor vector can inherit more from the base vector; in other words, it prefers the exploitation operator. Then, the performances of different powers of (11) are compared. The optimization results and the comparison results for each function are given in Table 8 (see Appendix). Fig. 4 shows the total numbers of ''+'', ''−'' and ''='' for the different powers. Here the benchmark is set as power 3. Among the 5 types of powers, 1 and 3 perform the best. Considering that 3 finds more of the best solutions, 3 is chosen as the best power of (11). Table 4 shows the comparison results for (13) and (14). Here the benchmark is set as (14). Equation (14) is better than (13) in 19 functions and worse in only 4 functions. Intuitively, when the fitness of the difference vector is small, F should be large enough to produce a substantial perturbation of the base vector; in other words, it encourages exploration during the search. Then, based on (14), 5 different powers are tested, and the results are recorded in Table 9 (see Appendix). The total numbers of ''+'', ''−'' and ''='' for the different powers are shown in Fig. 5. Here the benchmark is set as power 3. Obviously, the power of 3 outperforms the others. d: DISCUSSIONS OF MONOTONICITY AND NONLINEARITY Learning from the experiments above, it is found that the correlations between F and the fitness values of different vectors are different. Firstly, the numbers of equivalent results (ties) of (9)/(10), (11)/(12) and (13)/(14) with power 1 are 20, 11 and 7. It can be understood that the larger the number of ties, the less sensitive F is to the fitness of that vector; namely, F is most sensitive to the fitness values of difference vectors. Secondly, the numbers of good and bad results of (10)/(9), (11)/(12) and (14)/(13) with power 1 are 6/4, 13/6 and 19/4. Thus, F is in proportion to the fitness values of base vectors whereas it has the opposite dependence on those of target vectors and difference vectors. It is interesting to note that the sensitivities are different. For example, the ratio 6/4 again indicates that a good F is weakly dependent on the target vector, since (9) and (10) show roughly the same trend of improvement. Thus, the analysis above provides quantitative insights into how DE works, which can guide the design of F. Thirdly, from the analysis of nonlinearity, it can be found that the best power for all these vectors is 3. Figure 2 reveals that the power-3 map tends to produce a concentration of F around 0.5. In the previous studies [1] and [8], 0.5 and 0.6 are suggested as initial values of F. Thus, power 3 is consistent with the conclusions in [1] and [8]. 2) FIXED NP In order to study the effect of NP on the correlations, in this subsection the population size is fixed, namely NP = 5D. Another round of comparison reveals that the correlations remain the same. The results are recorded in Table 5. B. COMBINATION OF THESE CORRELATIONS In fact, the previously discovered correlations can be combined to obtain a novel way of designing F. As an illustrative example, in this section we choose a simple combination, namely the average of the power-3 formulae of (10), (11) and (14), denoted F̄ and given in (16). Similar to the treatment in Section III-C, we first evaluate the distribution of F̄. Without loss of generality, we assume f(x) ∈ [0, 1]. We uniformly take 1000 samples of f(x); Fig. 6 presents the distribution of F̄.
Different from Section III-C, here F̄ is computed via the power-3 forms of (10), (11) and (14), which are all computed directly from fitness values. After being mapped by (15), the values of (10), (11) and (14) are mostly concentrated around 0.5, and the resulting shape resembles a normal distribution. Next, we consider applying a composite nonlinear function to (16). Although it was empirically shown above that the power-3 map in (15) is a good option, it is inappropriate to use it directly here, since the distribution of F̄ is far from uniform. Instead, we adopt a new strategy via a simple translation, given in (17), where u = F̄ + ΔF and ΔF = 0, ±0.1, ±0.2, ±0.3. The performance of (17) under each ΔF is compared in solving the 10-D version of the IEEE CEC 2014 problems. Here the benchmark is set as ΔF = 0. Figure 7 shows that negative ΔF can achieve better results, and ΔF = −0.2 is the best. Inspired by this translation behavior, the final F takes the form given in (19). This form comes from the observation that squaring a random variable in [0, 1] decreases its expectation, which is similar to the translation behavior with ΔF < 0. In addition, (19) increases the nonlinearity of (16). Though (19) appears to be complicated, it only involves algebraic calculation of known fitness values. It is worth mentioning that these fitness values of the tested vectors have already been calculated in the selection stage, namely the fourth step of the classical DE, or Line 14 in Algorithm 1. Thus, (19) does not add an extra burden on computing resources. In fact, numerical tests show that it takes only about 3 milliseconds to produce all the F values (1800 in total) in each generation (CPU: 3.60 GHz, RAM: 8 GB). 1) TESTING F IN (19) ON IEEE CEC 2014 PROBLEMS The performance of (19) is tested on the IEEE CEC 2014 problems. As mentioned above, F is strongly associated with mutation operators. Since the current work is built on the classic mutation operator, numerical comparisons are mainly limited to this type. Comparisons with other DE variants and other metaheuristic algorithms are beyond the current scope. The adaptive Cr and NP are used. In order to comprehensively evaluate the strength of the proposed scheme, 7 types of F from the literature are used: (1) 0.1 and 0.9 [9] are fixed values; (2) rand [12] is the random-value scheme; (3) SinDE and SHADE are the history-based adaptions, where SinDE is designed to be the same as that used in [22] and is applied during the whole run, and SHADE represents the method used to design F in the SHADE algorithm; (4) FiADE [24] and Rbs [25] are the fitness-based adaptions, where FiADE refers to the scheme in [24] with f_i = |f(x_i) − f(x_best)| and λ = f_i/10 + 10^(−14), and Rbs is short for the rank-based scheme. Figure 8 and Figure 9 show the total numbers of ''+'', ''−'' and ''='' for the different schemes of F. Here, the benchmark is set as the proposed F in (19). Compared with 0.1, 0.9, rand, SinDE, FiADE and Rbs, the performance of the proposed F is much better. Compared with SHADE, the proposed F seems to be slightly better. For 30-dimensional problems, the proposed method has an 8-8 tie with SHADE. When the dimension is set to 50, the proposed method beats SHADE to a larger extent, namely performing better in 13 functions while getting worse in 3 functions. In the convergence plots, the early-stage error values obtained with (17) are the worst in this simulation. However, its performance catches up with its peers very quickly; it is among the best after only the first 100 generations, as can be seen from Fig. 10(f).
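Equations (16) and (19) are not written out explicitly in this text, so the following sketch is a hedged reading rather than the exact expressions: (16) is taken as the arithmetic mean of power-3 forms of (10), (11) and (14), and (19) as the square of that mean, consistent with the later remark that the final expression ''involves a second order power function of the arithmetic mean''. Plain cubes stand in for the power-3 map of (15), and the three input terms are assumed to be the proportion/reverse-proportion values sketched earlier, each in [0, 1].

```python
import numpy as np

def combined_F(F_rp_t, F_p_b, F_rp_d):
    """Hedged reading of (16) and (19).

    F_rp_t, F_p_b, F_rp_d : the reverse-proportion target term (10), the
    proportion base term (11) and the reverse-proportion difference term (14),
    each in [0, 1]. Plain cubes are used as a stand-in for the power-3 map."""
    F_bar = (F_rp_t ** 3 + F_p_b ** 3 + F_rp_d ** 3) / 3.0   # assumed form of (16)
    return F_bar ** 2                                         # assumed form of (19)

# Rough analogue of the uniform-sampling check behind Fig. 6: with the three
# terms drawn uniformly in [0, 1], squaring the mean pulls the expectation of F
# down, mimicking the effect of the negative translation that worked best above.
rng = np.random.default_rng(1)
terms = rng.uniform(size=(3, 1000))
print(np.mean(combined_F(*terms)))
```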
The strength of (17) can be better revealed by checking other functions. For f1 and f3, the exploration capability of (17) is demonstrated: the fitness value decreases rapidly during the early generations. The exploitation capability of (17) can be seen by examining f11, f12 and f22. Though the error fitness values with (17) in the early stage are not the best, its strength is evident after the generation count exceeds 1000. The comparisons between (19) and the F schemes from the literature on the CEC 2017 problems for D = 10, 30, 50 and 100 are summarized in Table 6, where the benchmark is set as (19): R+ represents the sum of ranks for the test problems in which the previous F performs better than (19), and R− represents the sum of ranks for the test problems in which the previous F performs worse than (19). In this subsection, the technique for designing F used in the original EsDEr-NR algorithm is also employed as a comparison. Setting the significance level to 0.05, it can be found that the performance of most previous F is no better than that of (19). Specifically, in most of the dimensions, (19) performs better than 0.1, 0.9, rand, FiADE and Rbs. SinDE is competitive with (19) if the dimension is low. However, when the dimension is high, such as 50 and 100, SinDE can no longer catch up with (19). For SHADE and EsDEr-NR, their performances are similar to that of (19); if the dimension is 100, SHADE becomes better. 2) TESTING F IN (19) ON IEEE CEC 2017 PROBLEMS It should be noted that the combination method of the correlations in (16) is preliminary. More advanced combination strategies may lead to better optimization performance. Studying different combination strategies of F with the fitness values of different vectors is beyond the scope of this work; the primary purpose of this subsection is to demonstrate the potential of these relations. Learning from the comparison results above, it can be concluded that the performance of the proposed F in (19) built on these relations is competitive. Limited by the classical mutation operator in (4) used in this work, the performance of the new F in (19) is occasionally less powerful than the original EsDEr-NR [22]. Nevertheless, this paper provides a new perspective for designing fitness-based F. For complicated mutation operators such as that in [22], F usually controls the scale of more than one difference vector. Predictably, the correlations between F and the vectors are then more complicated. For example, F is likely to be related not only to the fitness values but also to the angles of these difference vectors. The detailed discussion of these correlations is beyond the scope of this paper. However, the simulation results in this section already indicate the existence of correlations between F and the fitness values of many vectors. Thus, it is beneficial to exploit this phenomenon. Specific results for complex mutation operators will be the focus of future work. V. CONCLUSION This paper presents a novel method to investigate the correlations between F and the fitness values of target vectors, base vectors and difference vectors in classical DE. By testing on the single-objective optimization problems of the IEEE CEC 2014 Competition, the qualitative and quantitative dependency is obtained. It is found that F is in proportion to the fitness values of base vectors whereas it has the opposite dependence on those of target vectors and difference vectors.
Compared with target vectors, F is more sensitive to the fitness values of the base vector and the difference vector. To verify the potential of these correlations, a new F is designed that comprehensively combines these relations. The expression involves a second-order power function of the arithmetic mean. Simulation results show that the proposed F outperforms most current schemes of F. This work provides a new way to tune fitness-based adaptive parameters. It can be extended to design F for newly developed mutation operators, or to design Cr for general metaheuristic algorithms. More advanced combinations of fitness-based F schemes, or correlations between the scaling factor and recent mutation operators, may also be future research directions. The detailed results are listed in Tables 7-15.
7,511
2020-01-01T00:00:00.000
[ "Computer Science", "Mathematics" ]
Economic Policy Uncertainty, Environmental Regulation, and Green Innovation—An Empirical Study Based on Chinese High-Tech Enterprises Continuous changes in environmental regulations have a non-negligible impact on the innovation activities of micro-level agents, and economic policy uncertainty has become one of the important factors that enterprises must consider in their development. Therefore, based on panel data of Chinese high-tech enterprises from 2012–2017, this paper explores the impact of heterogeneous environmental regulations on firms' green innovation, with economic policy uncertainty as a moderating variable. The empirical results show that, first, market-incentive environmental regulation instruments have an inverted U-shaped relationship with innovation output, while voluntary environmental regulation produces a significant positive impact. Second, the inverted U-shaped relationship between market-based environmental regulation and innovation output becomes more pronounced when economic policy uncertainty is high; however, economic policy uncertainty plays a negative moderating role in the relationship between voluntary environmental regulation and innovation output. This paper not only illustrates the process of technological innovation by revealing the intrinsic mechanism through which environmental regulation affects firm innovation, but also provides insights for government environmental governance from the perspective of economic policy uncertainty. Introduction In recent years, as China's economic construction has entered a stage of high-quality development and the construction of ecological civilization has continued to advance, public awareness of green development has continuously increased. At the same time, Chinese enterprises are facing the challenge of green transformation and upgrading in order to contribute to harmonious development. Promoting the upgrading of the industrial structure in order to transform the driving force of economic growth has become an important path of economic development in the new era. In China's environmental governance, the main role is still played by local governments, and the effectiveness of environmental regulations in each region directly determines the effectiveness of environmental governance at the macro level in China (Wang and Liu 2020) [1]. With the continuous improvement of the environmental regulation policy system, the design of environmental regulation tools in China has become increasingly diversified. Because different types of environmental regulation tools follow different design principles, their effects on environmental governance and corporate business strategies also differ; environmental regulation tools are therefore heterogeneous. From the perspective of enterprises, in the face of an increasingly stringent environmental protection system, technological innovation has gradually become an important determinant of balancing environmental regulation compliance costs and business performance, and technological innovation is the core of green innovation (Li et al. 2019) [2]. Due to the continuous changes in the world economic environment and in China's own economic environment, China has adopted different economic policies to promote the healthy development of its economy (Yao and Morgan 2008) [3]. Since 2008, governments have begun to adjust their economic policies to alleviate the economic downturn caused by the financial crisis.
"The Belt and Road", "Mass entrepreneurship and innovation" and other macroeconomic policies have been introduced by the Chinese government to promote economic recovery and stable development. Enterprises are the main body of innovation and the new force to promote innovation and creation. Improving the technological innovation ability of enterprises is an important guarantee for the sustainable development of enterprises, so the construction of enterprise innovation ability is of great significance (Omri 2020) [4]. The development of high-tech enterprises is more vulnerable to the influence of economic policy, and the change of economic policy is the wind vane of enterprise development strategy. Because of the particularity of the uncertainty of economic policy, it will affect the decision-making behavior of micro enterprises through the uncertain channels of the external environment, and also has a guiding and leading role for the enterprise behavior. At present, most of the existing studies have analyzed the relationship between environmental regulation and firm innovation from a static perspective, without realizing that the intensity of environmental regulation is not static. Governments change the intensity of environmental regulations according to different economic situations, while firms adjust their coping strategies and their innovation resource allocation in the face of changing environmental regulations. In addition, the different types of environmental regulation tools also determine whether firms adopt short-term production reduction and stress strategies or make long-term innovation investments. Therefore, studying the dynamics of environmental regulation intensity and the impact of environmental regulation heterogeneity on firms' technological innovation can help us gain a deeper understanding of the mechanism of environmental regulation's impact on firms' green innovation. Second, there is much literature on the influence of uncertainty on corporate behavior and the factors affecting corporate technological innovation from all walks of life, but there are few studies on the relationship between economic policy uncertainty, environmental regulation, and corporate technological innovation activities. Compared with the sudden and exogenous nature of other uncertainties, the economic policy uncertainty is more controllable. In this paper, it is significant to study the impact path of economic policy uncertainty in environmental regulation on corporate behavior. The structure of this paper is as follows: the second part introduces the literature review and hypotheses; the third part introduces the data sources and model establishment; the fourth part describes the empirical results and the last part is the conclusion. The contributions of this paper are as follows: (1) based on the heterogeneity of environmental regulation tools, the research on the incentive effect of external governance mechanisms on corporate innovation. Existing research on environment and innovation mostly focuses on the types of environmental regulations and the impact of environmental regulations on the regional level or industry. In addition, the existing research results are limited to linear results. While considering the differences in the impact of heterogeneous environmental regulation tools, this paper also examines both linear and nonlinear results. 
Since appropriate environmental regulations can stimulate "innovation compensation" effects, they can not only compensate for firms' "compliance costs", but also increase their productivity and competitiveness. However, the greater flexibility of market-based and voluntary environmental regulations can bring about uncertainty. Second, since Porter's hypothesis originates from developed countries and is not fully applicable to Chinese firms, the linear results will be discussed here along with the nonlinear results; (2) Explore the moderating effects of economic policy uncertainty on different types of environmental regulatory tools on corporate innovation investment. This paper not only studies the independent effect of economic policy fluctuations on innovation output, but also multiplies the independent variables with the moderating variables to determine whether economic policy uncertainty has a moderating effect in the process of environmental regulation and innovation. It is helpful for the government and enterprises to realize greater incentive effect of environmental regulation tools on enterprise innovation in the uncertain environment. Literature Review This paper is related to two branches of literature, firstly a literature review of the Environmental Regulation and Green Innovation and the secondly review of Uncertainty of Economic Policy and Enterprise Innovation Activities. Environmental Regulation and Green Innovation Gray (1987) [5], Conrad and Wastl (1995) [6] believe that environmental regulatory policies will increase costs and squeeze innovation resources, thereby hindering enterprise technological innovation. Slater and Ange (2000) [7] consider that when the intensity of environmental regulations is high, the overall R&D level of the enterprise drops, and the benefits of innovation are lower than the costs paid. Many scholars have demonstrated the view that environmental regulation policy restricts enterprise technological innovation from the perspective of empirical test. Nakano (2003) [8] calculated the Malmquist index of the Japanese paper industry and found that environmental regulations did not significantly promote technological innovation. Wagner (2007) [9] and others studied environmental regulation, environmental innovation, and patent application with German manufacturing industry as samples, and found that environmental regulation has a significant negative impact on the number of patent applications, and environmental regulation hindered the green innovation activities of enterprises to a certain extent. Ramanathan (2010) [10] and others used structural equation modeling to analyze the data of the US industrial sector from 2002 to 2006. The study found that in the short term, environmental regulatory policies hindered the technological innovation behavior of enterprises. Kneller and Manderson (2012) [11] found that in the UK manufacturing industry, environmental regulation encourages micro enterprises to carry out environmental innovation, but does not increase the total R & D expenditure. Brunnermeier and Cohen (2003) [12], Hamamoto (2006) [13] believe that appropriate environmental regulations can promote technological progress and diffusion, produce "innovation compensation" effect, and make up for the cost of enterprises complying with environmental regulations. 
Taking the panel data of industrial enterprises in Taiwan from 1997 to 2003 as samples, it is found that strict environmental regulations can promote the increase of R&D investment of industrial enterprises (Yang et al. 2012) [14]. Greenstone et al. (2012) [15] studied the patent output data of the US manufacturing industry and found that environmental regulations can stimulate technological innovation. Sen (2015) [16] took the automotive industry in a transnational environment as the research object and studied the relationship between environmental regulation and technological innovation, and discovered that environmental regulation can not only improve the level of technological innovation, but also reduce environmental pollution. Yang et al. (2021) [17] found that the strength of environmental regulation was positively related to firms' green innovation. Moreover, Calel (2011) [18], Bréchet and Meunier (2014) [19] believe that there is a non-linear relationship between environmental regulations and the degree of green technology innovation. Shi et al. (2018) [20] indicated that financial support in government environmental regulations can significantly increase the innovation and scientific research investment of enterprises, and the impact of scientific research investment on enterprise innovation is in an inverted "U" shape. Schmutzler (2001) [21] implies that the mechanism of environmental regulation for innovation compensation is very complicated, and the benefits of innovation may not exceed the cost of complying with environmental regulations. Frondel (2007) [22] represents that the marketincentive environmental regulation policy has no significant impact on the development of pollution end control technology and the innovation of cleaner production technology. Cesaroni (2001) [23] reveals that in a competitive market, the incentive effect of the administrative order-based environmental regulation method on enterprise technological innovation is not as good as the market-incentive environmental regulation method. However, in an imperfectly competitive market, industrial characteristics, economic structure, etc., will affect the effects of different environmental regulatory measures on enterprise technological innovation. Uncertainty of Economic Policy and Enterprise Innovation Activities Economic policy uncertainty refers to the inability of economic agents to predict with certainty if, when, and how the government will change current economic policies (Gulen and Ion, 2016) [24]. According to Bloom (2007) [25], the uncertainty of economic policy itself may be an important cause of economic recession. Throughout the existing literature, many scholars agree that the uncertainty of economic policy has a negative impact on the macro-economy. These effects are not only reflected in the rise of economic policy uncertainty, which aggravates the fluctuation of key macroeconomic variables and financial asset variables, but also in the negative impact of economic policy uncertainty on macroeconomic variables such as output and employment, hindering economic recovery (Baker et al. 2016) [26]. These studies suggest that the economic policy uncertainty may inhibit the investment activities of enterprises by changing the cost of business activities, which is related to the characteristics of enterprises and industries such as the proportion of irreversible investment, financial constraints, and the degree of competition. 
Although many studies at home and abroad focus on the impact of economic policy uncertainty on macroeconomic variables and micro enterprise activities, innovation activities, an important part of economic activities, are ignored by most studies. Bloom (2007) [25] pointed out that although uncertainty will bring temporary negative impact on investment, employment, productivity, and other aspects, due to the difference of adjustment cost characteristics, its impact on R&D may be different from other economic activities. He also indicated that the relationship between uncertainty and R&D activities is a very important topic, which needs more theoretical and empirical research. In addition, Marcus (1981) [27] emphasized that government policies have an important impact on scientific and technological innovation activities. In the face of policy uncertainty, enterprises need to weigh the risks and benefits of innovation activities. This aspect requires more indepth research. Atanassov et al. (2015) [28] regarded US state elections as an exogenous change of government policy uncertainty. They empirically studied the impact of policy uncertainty on corporate R&D activities and found that rising policy uncertainty led to a rise in corporate R&D levels. At the same time, the positive effect of uncertainty is stronger in competitive election years, politically sensitive industries, enterprises with great difficulty in innovation, enterprises with high growth value and enterprises facing more fierce product market competition. These studies reveal that the impact of policy uncertainty varies with the types of business activities, and its impact on R&D activities is different from that on other types of investment activities. The impact of macroeconomic policy uncertainty on emerging economies is more obvious. As the largest emerging economy, China's high-tech industrial innovation activities are inevitably affected by the uncertainty of economic policy. In fact, technological innovation activities of enterprises will also be affected by policy uncertainty, and higher economic policy uncertainty has an obvious "incentive effect" on R&D investment of enterprises. Research Hypothesis With the continuous development of environmental regulations, environmental regulatory tools are upgraded constantly, environmental regulatory designs are becoming more and more diversified, and the types of regulatory tools continue to grow and develop. Generally speaking, environmental regulations can be divided into three types: commandand-control environmental regulation, market-incentive environmental regulation, and voluntary environmental regulation. Among them, command-and-control environmental regulations use mandatory measures issued by the government to encourage enterprises to fulfill their environmental governance responsibilities. Usually, enterprises have no choice but to abide by rules and regulations passively. In the face of command-and-control environmental regulations, enterprises often take stressful behaviors out of luck, such as temporarily reducing production and other measures to reduce corporate emissions and ensure temporary environmental compliance. 
In addition, the environmental regulation system in China is under the joint leadership of the government at the same level and higher level organizations, so the enterprises will be affected by the improper performance view and excessive intervention of local governments, which leads to the low status of environmental regulation departments, lack of independence, and the effective implementation of environmental regulation policies (Tang et al. 2010) [29]. Based on this, this paper argues that the command-and-control environmental regulation does not significantly promote the technological innovation activities of enterprises, and even has a "crowding out effect" on innovation investment. The main reasons are as follows: first, the command-and-control environmental regulation often has the characteristics of high cost, and the regulated enterprises often need to meet the pollution control standards through high-cost pollution control means. When the enterprises are short of funds, they may use the funds originally used for innovation to pay the pollution fee, or even spend more resources to deal with the environmental regulation policies for the temporary treatment of pollutants, so there may be a crowding out effect on the technological innovation investment of enterprises. Second, the high cost of command-and-control environmental regulation increases the production cost of enterprises, which may have an adverse impact on the profits of enterprises, further reduces the limited resources of enterprises, and then reduces the willingness of enterprises to carry out technological innovation activities. Hypothesis 1a (H1a). Compared with command-and-control environmental regulatory tools, market-incentive environmental regulatory tools and voluntary environmental regulatory tools have a more significant positive impact on the enterprise's green innovation output. Second, since the Porter hypothesis suggests that appropriate environmental regulations can stimulate "innovation compensation" effects, which not only compensate for the "compliance costs" of firms, but also increase their productivity and competitiveness. However, when the Porter hypothesis was developed in developed countries such as the United States and tested directly in developing countries such as China, the premise of the theory underwent a fundamental structural change and, therefore, may not fully reflect the laws of developing countries such as China. Moreover, there is uncertainty about the effect of environmental regulation on technological innovation, so a U-shaped relationship between market-incentive environmental regulation and technological innovation is considered. Therefore, the hypothesis is put forward: Hypothesis 1b (H1b). Market-incentive environmental regulation tools have an inverted U relationship with green innovation output. In recent years, more and more scholars have proposed that economic policy uncertainty will promote enterprise innovation output. When faced with the increase of economic uncertainty, enterprises tend to choose the growth option of innovation investment. Because the higher the uncertainty of economic policy, the greater the possibility of disruptive changes in the market, the greater the opportunity for enterprises to obtain future competitive advantage, and the greater the possibility for enterprises to obtain future growth opportunities through early innovation investment. 
Vo and Le (2017) found that due to the significant positive correlation between R&D investment and the improvement of corporate competitiveness, in order to maintain sustained competitiveness, enterprises will increase R&D investment to cope with the negative impact of increased economic policy uncertainty on enterprises. The research of Ross et al. (2018) [30] also shows that the increase in economic policy uncertainty stimulates R&D investment. Therefore, this article proposes: Hypothesis 2 (H2). Economic policy uncertainty has a positive impact on green innovation output. The uncertainty of macroeconomic policy brings more uncertainty to the development of enterprises, and also affects the implementation of micro environmental regulation policy. Innovation is an important driving force of economic growth. Enterprises with strong innovation ability can obtain strong market power and higher excess profits. When faced with market competition and risk, enterprises tend to accelerate innovation to increase market power to a certain extent (Aghion et al. 2015) [31]. Meanwhile, the increase of economic policy uncertainty may aggravate the market risk, which will make enterprises further increase innovation investment to maintain or regain market power. Therefore, under different environmental regulation policies, the regulation effect of economic policy uncertainty is also different. First of all, market-incentive environmental regulation is a more flexible policy, and is more affected by market factors. When enterprises face external economic policy uncertainty, it will make the U-shaped relationship between market-oriented environmental regulation and innovation output more obvious. In other words, when enterprises are faced with voluntary environmental regulation, the increase of economic policy uncertainty may aggravate the market risk, which will make enterprises further reduce innovation investment to keep the enterprise itself. Therefore, this paper puts forward the following suggestions Hypothesis 3a (H3a). When enterprises are subject to Market Incentive Environmental Regulation, economic policy uncertainty positively moderates the inverted U-shaped relationship between environmental regulation and enterprise innovation output. Hypothesis 3b (H3b). When enterprises are subject to Voluntary Environmental Regulation, economic policy uncertainty negatively regulates the relationship between environmental regulation and enterprise innovation output. The technical roadmap of this paper is as follows (Figure 1): Sample Selection and Data Sources Regarding the sample selection of high-tech enterprises, the paper considers that listed enterprises only began to regulate the disclosure of R&D investment in 2008, and the certification standards for high-tech enterprises were officially implemented in 2008. Taking into account the immaturity and irregularity of the certification measures and R&D intensity from 2008 to 2011, and the lack of data is more serious. In this section, we use panel data from 2012 to 2017 for empirical research. Through the analysis of the advantages and disadvantages of previous scholars' sample selection methods, based on the considerations of accuracy and cost, we propose a more reasonable sample selection method for high-tech enterprises. 
First, the 2008-2017 stock code data of all Chinese A-share listed enterprises are derived from the CSMAR database and matched with the CSMAR qualification database to determine the parent enterprises and subsidiaries that form the sample of high-tech enterprises. Second, according to the "Administrative measures for the determination of high and new technology enterprises", an enterprise recognized as a high-tech enterprise enjoys a preferential income tax rate of 15%, valid for three years. We therefore used CSMAR's corporate tax rate data to cross-check the sample data over the three years after high-tech enterprise certification, to determine whether the enterprise still enjoyed the 15% tax preference and thus to further confirm the completeness and accuracy of the sample selection. Where there was a mismatch, we manually collected the annual report and official website data for comparison and finally determined the appropriate high-tech enterprise sample. Some listed enterprises did not disclose their certification as high-tech enterprises in their annual reports, or disclosed that they were recognized as high-tech enterprises but did not disclose in subsequent annual reports whether they passed the review, or did not disclose that they did not apply for review, failed the review, or had their high-tech status revoked. These matters affect whether a listed enterprise holds high-tech enterprise qualifications, which is very important for the research samples and research conclusions of this paper. Therefore, in addition to sorting out the samples listed in the qualification accreditation database, the paper also further verifies the high-tech enterprise qualifications of the samples against the local publicity documents on the "High-tech Enterprise Accreditation Management Network" to ensure that the data are true and reliable. Data Source The number of environmental protection laws and regulations in force in each region in each year, the amount of sewage charges remitted and put into storage in each region, the number of fee-paying households in each region, and the number of environmental letters and visits in each region are taken from the "China Environment Yearbook". However, the environmental petition data for each region are only disclosed up to 2015. In order to ensure the comparability of the sample interval, this paper fills in the missing environmental petition data for 2016 and 2017 using the 2015 environmental petition data and the growth rate of environmental letters and visits disclosed in the "China Environmental Yearbook". The main financial data come from the CSMAR database and the CCER database. Moreover, the sample of high-tech enterprises comes from the Cathay certified qualification database and tax rate database. This paper mainly uses EXCEL 2019 and Stata 15.0 software for data processing and statistical analysis. After screening, this paper obtained 3438 panel observations from 573 sample enterprises. Variable Definition This section contains the definitions of the variables, mainly the choice of environmental regulation variables and the choice of green innovation variables for firms, as well as the choice of moderating and control variables. Selection of Environmental Regulation Tool Variables Based on the research of Shen et al. (2019) [32], this paper measures three kinds of environmental regulation, dividing environmental regulation into command-and-control, market-incentive, and voluntary types.
On this basis, Shen et al. construct an index system to evaluate regional industrial competitiveness, empirically analyze the impact of the heterogeneity of environmental regulation tools on regional industrial competitiveness, and test the spatial effects of different types of environmental regulation. First, command-and-control environmental regulation refers to the administrative departments' direct management and mandatory supervision of environment-related production activities according to relevant laws, regulations, rules, and standards. Governments, industry organizations, and environmental protection departments have formulated a variety of environmental protection systems and standards to control sources of environmental pollution by setting a lower limit on environmental protection and addressing environmental protection matters up front. Since the stringency of the regulatory system can vary across levels, a quantitative measure should also take the stringency of policy into account. Therefore, this paper uses the number of currently effective environmental regulations and rules to measure the command-and-control environmental regulation tool (ER-1). Second, market-incentive environmental regulation mainly takes the form of tax preferences, but the market-incentive environmental regulation system in China is not yet well developed. Comparatively speaking, the pollution discharge fee system was implemented earlier and its implementation has been relatively stable, so it can effectively measure the cost of corporate pollution control. Therefore, this paper selects the ratio of the amount of sewage charges collected by each province, autonomous region, and municipality to industrial added value (unit: 10,000 yuan/household) as the indicator of market-incentive environmental regulation (ER-2). Lastly, this paper selects regional-level data on voluntary supervision as an alternative indicator to measure the intensity of voluntary environmental regulation (ER-3). Specifically, the logarithm of the total number of environmental letters received in each region is used as the proxy for voluntary environmental regulation tools. Variable Selection of Enterprise Green Innovation In order to screen the green patents of listed enterprises, this paper compares the green patent data from the State Intellectual Property Office with the international patent classification green list launched by WIPO. This list is generated according to the classification standard for green patents in the United Nations Framework Convention on Climate Change and includes seven categories: Transportation, Waste Management, Energy Conservation, Alternative Energy Production, Administrative Regulatory or Design Aspects, Agriculture or Forestry, and Nuclear Power Generation. At the same time, in order to further reflect the innovativeness and value of green patents, this paper uses green invention patents (Gpatent) to represent green innovation output. Moderator Variable In order to measure economic policy uncertainty, Baker et al. (2016) [26] constructed a measurement index of economic policy uncertainty.
This indicator was developed in 2016 based on keyword searches of the English-language articles of the South China Morning Post in Hong Kong, China. The specific construction process was as follows: first, articles containing the keywords "China", "economic" and "uncertain" were extracted from each month's articles; second, these extracted articles were further screened so that each retained article also contained at least one of the keywords "Spending", "Policies", "Tax", "Central bank", "Budget", "Deficit", and so on; finally, the ratio between the number of articles remaining after the two screening steps and the total number of articles in the South China Morning Post for that month was calculated to obtain monthly data measuring the degree of uncertainty in China's economic policies. In this paper, the natural logarithm of the monthly average, ln(epu), is taken as the proxy variable for economic policy uncertainty. Control Variable The paper first controls for the degree of marketization. In addition, enterprise innovation investment is affected by enterprise profitability, risk-taking, and other factors. Therefore, this paper controls for firm-level variables such as return on assets, solvency, profitability, management incentive, and tax rate. See Table 1 for the definitions of the specific variables. Model Construction In order to test the impact of different environmental regulation tools on green innovation output, the paper constructs model 1, which verifies the impact of different environmental regulatory tools on the green innovation output of enterprises. Meanwhile, considering the time lag of innovation, the explained variable in model 1 includes both the current period and the lagged period. In order to further test whether there is an inverted U-shaped relationship between market-incentive environmental regulation and green innovation output, this paper adds the square term of ER-2 to capture the inverted U-shaped effect. In order to test the impact of fluctuations in economic policy uncertainty on enterprise innovation output, this paper constructs model 3. In order to test the moderating role of economic policy uncertainty in the relationship between environmental regulation and innovation output, this paper constructs models 4 and 5. Model 4 examines the moderating effect of economic policy uncertainty on the inverted U-shaped relationship between market-incentive environmental regulations and innovation output. Model 5 examines the moderating effect of economic policy uncertainty on the relationship between command-and-control and voluntary environmental regulations and innovation output. In models 1 to 5, β0 is the intercept, β1 to βn are the coefficients (n = 1, 2, . . .), and ε is the residual. Table 2 shows the descriptive statistics of the enterprise green innovation output, the regional environmental regulation tools, the moderator variable, and the control variables. It can be seen from Table 2 that the mean of green technology innovation output over the full sample is 0.579, with maximum and minimum values of 65 and 0, respectively. From this it can be seen that there is a large gap in the innovation input and output of different enterprises, and most enterprises are in a state of low input and low output. Furthermore, the mean value of ER-1 is 33.778, which is less than the median of 35; the maximum value is 105 and the minimum value is 3. The average ER-2 is 6.272, and the standard deviation of the whole sample is 0.045.
The average of ER-3 is 8.588, the minimum is 4.7, and the maximum is 10.077. Therefore, the different types of environmental regulation show large regional differences. Furthermore, the maximum value of economic policy uncertainty is 5.902 and the minimum value is 4.744, which indicates that the fluctuation of economic policy uncertainty was small during 2012-2017. Empirical Results This paper selects balanced panel data for analysis and uses the Hausman test to select the model. The p value of the Hausman test rejects the null hypothesis at the 1% significance level, that is, the fixed-effects model is the appropriate specification. Therefore, the empirical models of this paper are estimated with fixed effects. This article uses Stata 15.0 to estimate fixed-effects regressions with firm-level clustered standard errors on the sample data. Analysis of the Impact of the Heterogeneity of Environmental Regulation Tools on Enterprise Green Innovation When studying how different environmental regulation tools affect the green innovation output of enterprises, this paper first analyzes the influence of different types of environmental regulation tools on enterprise innovation output (GPatent, model 1). Columns (1)-(3) of Table 3 list the panel regression results. The regression results of model 1 in Table 3 show that market-incentive environmental regulation (ER-2) and voluntary environmental regulation (ER-3) have a significant positive impact on enterprise innovation output. Jiang et al. (2020) [33] drew the same conclusion when studying voluntary environmental regulation. At the same time, the incentive effect of voluntary environmental regulation tools on enterprise innovation output is more significant. However, ER-1 does not significantly increase innovation output. This may be because, when facing command-and-control environmental regulation, enterprises may spend more money on pollution control merely to meet the government's control standard, rather than fundamentally solving the pollution problem through technological innovation. From the regression results of model 2, it can be seen that the coefficients of the first- and second-order terms are significantly positive and negative, respectively, at the 1% level. This indicates a significant inverted U-shaped relationship between market-incentive environmental regulation and innovation output, which means that there is an inflection point between market-incentive environmental regulation and innovation output, and the inflection point is 15 [35]. Specifically, when the intensity of environmental regulation in a region is below this threshold, strengthening environmental regulation promotes enterprise innovation output; at this stage, the "innovation compensation" effect outweighs the "compliance cost" effect, reflecting the Porter hypothesis. Conversely, when the intensity of regulation is above the threshold, the inhibitory effect of environmental regulation on enterprise innovation output dominates, and the "innovation compensation" effect can no longer compensate for the "compliance cost" effect, reflecting the neo-classical "restraint theory" of environmental regulation.
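As a rough illustration of the estimation approach just described (firm fixed effects, firm-level clustered standard errors, an ER-2 square term for the inverted U, and interactions with lnepu for the moderation models), the following is a minimal Python sketch. The paper itself uses Stata 15.0; the data frame, file name, and column names (gpatent, er2, lnepu, roa, leverage, firm), as well as the abbreviated set of controls, are hypothetical placeholders rather than the study's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-year panel assembled as described in the data section.
df = pd.read_csv("hightech_panel.csv")
df["er2_sq"] = df["er2"] ** 2  # square term capturing the inverted-U effect

# Model-4-style specification: ER-2, its square, lnepu, the interaction terms,
# a couple of firm-level controls, and firm fixed effects via dummies.
formula = ("gpatent ~ er2 + er2_sq + lnepu + er2:lnepu + er2_sq:lnepu "
           "+ roa + leverage + C(firm)")

# Standard errors clustered at the firm level, mirroring the paper's estimation.
fit = smf.ols(formula, data=df).fit(cov_type="cluster",
                                    cov_kwds={"groups": df["firm"]})
print(fit.summary())
```

A negative estimate on er2_sq together with a positive estimate on er2 would be the code-level counterpart of the inverted-U pattern reported for model 2, and the signs of the interaction terms correspond to the moderation effects tested in models 4 and 5.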
Analysis of the Impact of Economic Policy Uncertainty on Enterprises' Innovation Output When studying the impact of economic policy uncertainty on green innovation output, this section first analyzes the impact of economic policy uncertainty (lnepu) on green innovation output (Gpatent) and then studies the moderating role of economic policy uncertainty. Table 4 shows the panel regression results of models 3 to 5. Robust t-statistics in parentheses; *** p < 0.01, ** p < 0.05, * p < 0.1. Analysis of the Transmission Mechanism of Economic Policy Uncertainty to Green Innovation From the panel data regression results of model 3 in Table 4, it can be concluded that economic policy uncertainty has a significant impact on enterprise innovation output, consistent with Shen et al. (2020) [32]. Specifically, the results show that when economic policy uncertainty increases, high-tech enterprises increase investment in innovation to seize opportunities and improve their competitiveness. This also shows that Chinese high-tech enterprises actively cope with fluctuations in economic policy. Moreover, when economic policy uncertainty is high, enterprises tend to use innovation to resolve market risks and seize market advantages, and innovation has produced better economic and environmental benefits. Thus, adjustments of economic policy can promote enterprise innovation output. The Moderating Effect of Economic Policy Uncertainty on the Main Effect The moderating effect of economic policy uncertainty on the relationship between environmental regulation and innovation output is shown in columns (6) and (7) of Table 4. Because command-and-control environmental regulations have no significant impact on the innovation output of enterprises, we do not test the corresponding moderating effect. From column (6) of Table 4, we find that economic policy uncertainty positively moderates the relationship between market-incentive environmental regulations (ER-2) and innovation output, which shows that when economic policy uncertainty is higher, the inverted U-shaped relationship between the two is more pronounced. Furthermore, it shows that market-incentive environmental regulations are affected by external factors. In column (7), we find that economic policy uncertainty negatively moderates the relationship between voluntary environmental regulation and innovation output. This shows that when enterprises are subject to voluntary environmental regulation (ER-3), economic policy uncertainty weakens the path to innovation output. This may be because voluntary environmental regulation depends on the enterprise's own voluntary behavior: when facing a higher-risk external environment, the enterprise will reduce its own risk to ensure its long-term development. Therefore, enterprises may develop more businesses that carry less risk instead of investing heavily in technological innovation, which further shows that flexible environmental regulation tools are more conducive to realizing incentives for corporate innovation. Overall, within the limits of environmental pollution discharge, flexible environmental policies encourage enterprises to adopt environmentally friendly technologies, coordinate their green production behaviors through reward and punishment mechanisms, promote enterprises' green management, help enterprises establish a good corporate image, and encourage enterprises to carry out technological innovation.
The Lag Effect of Environmental Regulation on Innovation Output Due to the time lag of innovation, we consider the lagged effect of environmental regulation on innovation output. Since some of the data are only available from 2012 to 2017, the independent variables are selected from 2012 to 2016 to explore the lagged effects. The main results are presented in Table 5 and are basically consistent with the results for the non-lagged data. Robust t-statistics in parentheses; *** p < 0.01, ** p < 0.05, * p < 0.1. Conclusions The paper uses high-tech enterprises from 2012-2017 as a research sample to empirically test the impact of different environmental regulatory tools on corporate green innovation. At the same time, it studies the impact of economic policy uncertainty on corporate green innovation and further explores the moderating effect of economic policy uncertainty on the relationship between environmental regulation and enterprise innovation activities. First of all, in this paper's research, command-and-control environmental regulation tools do not have a significant impact on green innovation, because they only set a lower limit for environmental protection, which does not provide enough incentive for green innovation and may even have a "bad money drives out good" effect on the level of environmental protection of the whole society. Moreover, a sudden increase in environmental technology standards may force companies to stop their existing investment projects and have a crowding-out effect on innovation resources. Second, both market-incentive environmental regulations and voluntary environmental regulations have a significant positive impact on the green innovation output of enterprises. Furthermore, there is a significant inverted U-shaped relationship between market-incentive environmental regulations and green innovation output: when the intensity of market-based environmental regulation is low, the "innovation compensation effect" dominates and environmental regulation is conducive to green innovation; when the intensity is high, the "compliance cost effect" dominates and environmental regulation is no longer conducive to green innovation. This also shows that market-incentive environmental regulation is more flexible than voluntary environmental regulation, gives enterprises more freedom of choice, and promotes the green innovation output of enterprises. Additionally, economic policy uncertainty positively promotes enterprises' green innovation output. Finally, when economic policy uncertainty is used as a moderating variable, it positively moderates the inverted U-shaped relationship between market-incentive environmental regulations and corporate green innovation, whereas it weakens the relationship between voluntary environmental regulations and enterprises' green innovation. Therefore, the interactive effect of economic policy uncertainty and different types of environmental regulatory tools shows that when enterprises face more flexible regulatory tools, they are more sensitive to economic policy uncertainty, which also indicates that flexible tools are more conducive to the development of enterprise innovation activities and allow enterprises to adjust their innovation strategies according to market volatility. Suggestions First, we should continue to attach importance to environmental protection and adhere to its sustainability.
In the "high-quality" development stage of China's economy, environmental protection is still an important task in the government work. According to research results, environmental regulations can stimulate enterprises' innovative activities or behaviors. In the face of the government's mandatory regulation policies, high-tech enterprises tend to respond to the government's environmental regulations by making use of their independent research. On the other hand, the technological innovation caused by environmental regulations can significantly improve the business performance of enterprises in the short term. In the long run, technological innovation is the decisive factor for the improvement of competitiveness enterprises in the future. Moreover, environmental problems are rooted in social progress, and there is always a dilemma between environmental protection and development. Furthermore, short-term environmental protection effects achieved through strict regulations are not desirable. Accordingly, our government should continue to attach importance to environmental protection, and fully mobilize the enthusiasm of enterprise innovation through effective regulation policies to realize the sustainability of environmental protection and development. Second, improve the performance assessment system for local governments and implement a consistent, stable and transparent environmental regulation policy. At present, the central government is strengthening environmental regulations, but local governments are constantly adjusting the intensity of environmental regulations due to the interests of regional economic growth, so there are large fluctuations in the intensity of environmental regulations in each region, and the uncertainty of environmental regulations is high. To avoid the impact of uncertainty on enterprises, the government should establish a more scientific and reasonable performance appraisal system, incorporate environmental protection and governance indicators into local government performance appraisals, and establish methods for investigating major environmental accidents. This will prevent local governments from constantly adjusting the intensity of environmental regulations in the game of interests with the central government, which will undermine the incentive of enterprises to innovate in technology. At the same time, the government should advocate the implementation of consistent, stable, and transparent environmental regulation policies, improve the transparency of environmental regulation policies, collect opinions from society before introducing or changing policies, allow sufficient time for enterprises to receive information, and pay attention to the impact of the external environment of economic policy uncertainty on enterprises to avoid the risks of enterprise innovation brought about by environmental regulation uncertainty. Limitation This paper investigates the impact of environmental regulation tools on firms' green innovation activities and reveals the realization path of environmental regulation to enhance innovation capability in a cross-level context. However, the research in this paper still suffers from the following shortcomings. Based on the availability of data, this paper only analyzes explicit environmental regulations, but not the impact of implicit environmental regulations on firms' technological innovation activities. 
At the same time, this paper uses statistics on the implementation of specific environmental regulation policies to measure the different types of environmental regulation tools. However, the effectiveness of an environmental regulation tool is difficult to capture with a single policy-implementation indicator, so how to better study the impact of different types of environmental regulation tools on firms' technological innovation activities remains to be further discussed.
9,398.2
2021-09-01T00:00:00.000
[ "Economics", "Business" ]
Assessment of cardiovascular & pulmonary pathobiology in vivo during acute COVID-19 Importance: Acute COVID-19-related myocardial, pulmonary and vascular pathology, and how these relate to each other, remains unclear. No studies have used complementary imaging techniques, including molecular imaging, to elucidate this. Objective: We used multimodality imaging and biochemical sampling in vivo to identify the pathobiology of acute COVID-19. Design, Setting and Participants: Consecutive patients presenting with acute COVID-19 were recruited during hospital admission in a prospective cross-sectional study. Imaging involved computed tomography coronary angiography (CTCA; identified coronary disease), cardiac 2-deoxy-2-[fluorine-18]fluoro-D-glucose positron emission tomography/computed tomography (18F-FDG PET/CT; identified vascular, cardiac and pulmonary inflammatory cell infiltration) and cardiac magnetic resonance (CMR; identified myocardial disease), alongside biomarker sampling. Results: Of 33 patients (median age 51 years, 94% male), 24 (73%) had respiratory symptoms, with the remainder having non-specific viral symptoms. Nine patients (35%, n=9/25) had CMR-defined myocarditis. 53% (n=5/8) of these patients had myocardial inflammatory cell infiltration. Two patients (5%) had elevated troponin levels. Cardiac troponin concentrations were not significantly higher in patients with myocarditis (8.4 ng/L [IQR 4.0, 55.3] vs 3.5 ng/L [2.5, 5.5], p=0.07) or myocardial cell infiltration (4.4 ng/L [3.4, 8.3] vs 3.5 ng/L [2.8, 7.2], p=0.89). No patients had obstructive coronary artery disease or vasculitis. Pulmonary inflammation and consolidation (percentage of total lung volume) were 17% (IQR 5, 31%) and 11% (7, 18%), respectively. Neither was associated with the presence of myocarditis. Conclusions and relevance: Myocarditis was present in a third of patients with acute COVID-19, and the majority had inflammatory cell infiltration. Pneumonitis was ubiquitous, but this inflammation was not associated with myocarditis. The mechanism of cardiac pathology is non-ischaemic and not due to a vasculitic process. Although imaging studies have been conducted (8), these have been limited to the recovery phase and restricted to a single modality. As such, these studies were unable to differentiate ischaemic from non-ischaemic cardiac pathology. A multisystem inflammatory syndrome in children (MIS-C) with myocarditis and cardiac impairment as hallmarks of the presentation has been described (9). Whether a similar mechanism of cardiac and vascular injury occurs in adults with acute COVID-19 remains unknown. Using CMR, computed tomography coronary angiography (CTCA) (10) and 18F-FDG-PET/CT (5)(6)(7) imaging during acute COVID-19 infection, we investigated the in vivo pathobiology of the myocardium, arterial vasculature and pulmonary parenchyma. We hypothesised that myocardial or pulmonary inflammation and injury could be characterised by CMR and 18F-FDG-PET/CT, the presence of vascular inflammation identified by 18F-FDG-PET/CT, and the contribution of coronary artery disease shown by CTCA. We investigated the relationship between imaging findings and biomarkers, as well as any association between pulmonary and cardiac pathology.
Study design and population Participants hospitalised with COVID-19 at the Aga Khan University Hospital in Nairobi, Kenya were recruited in this single-centre exploratory observational study. The full study methodology, with the inclusion and exclusion criteria and imaging techniques and protocols, has been published (11). The study complies with the Declaration of Helsinki, with study approval from the Aga Khan University Nairobi Institutional Ethics Review Committee (Reference: 2020/IERC-74 (v2)). Exclusion criteria were contraindication to CMR, known previous myocardial pathology, and severe symptoms requiring non-invasive or invasive ventilation. Patients underwent multimodality imaging and serological testing for high-sensitivity cardiac troponin-I (hsTrop, Siemens Healthineers), N-terminal pro B-type natriuretic peptide (NT-proBNP, Siemens Atellica Solution), C-reactive protein (CRP, Siemens Atellica Solution) and viral load (12) (using cycle threshold [CTVL], RealStar® SARS-CoV-2 RT-PCR Kit, Altona Diagnostics). We additionally identified a small prospective control population of individuals in whom COVID-19 was excluded, and an age- and sex-matched historical control population who had previously undergone 18F-FDG-PET/CT. Image Acquisition & Assessment Participants underwent simultaneous CTCA and thoracic 18F-FDG-PET/CT following admission, followed by CMR as described previously (Supplement text) (11). Atherosclerotic disease by CTCA: Presence of coronary artery disease in each major coronary artery and the main side branches was classified as potentially obstructive (>50% stenosis) or non-obstructive. Myocardial disease by CMR: Ejection fraction (EF), regional wall motion abnormalities, myocardial fibrosis, oedema and presence of infarction in the left (LV) and right ventricles (RV) by late gadolinium enhancement (LGE) were determined as previously described (11). The anatomical 17-segment model was used to derive T1, T2 and extracellular volume (ECV) values for each segment excluding the apex (13). Acute myocardial inflammation was defined using the 2018 Lake Louise Criteria II, which require evidence of both myocardial oedema (high T2) and non-ischaemic injury (high T1, high ECV or non-ischaemic LGE) (14). A sensitivity analysis was presented using the more sensitive criteria requiring the presence of abnormally high T1 or T2 values in conjunction with evidence of pericarditis or systolic dysfunction. Myocardial, vascular and pulmonary pathology by 18F-FDG-PET/CT: As previously described (5), CT and 18F-FDG-PET scan images were co-registered and analysis performed using the 17-segment anatomical framework (13). Myocardial uptake was scored on a visual scale. Patients with focal or diffuse uptake were identified as having acute myocardial inflammation (5). Semi-quantitative vascular inflammation on 18F-FDG-PET/CT for the aorta was assessed by the American Society of Nuclear Cardiology visual grading criteria (15). Quantitative assessment of large-vessel inflammation was also undertaken (6). In brief, a maximum arterial standardised uptake value (SUV) was derived in serial axial measurements across the ascending, arch and descending aorta.
The target-to-background ratio (TBR) for each aortic region was calculated by averaging the ratio of maximum arterial SUV to mean venous SUV for each segment. Twenty-one age- and sex-matched historical controls, who had previously undergone clinical 18F-FDG-PET/CT scans for other indications (e.g., investigation of pulmonary nodules) and whose scans were reported as normal, and five healthy active controls were also scanned. For pulmonary analysis, chest CT and 18F-FDG-PET/CT images were analysed separately for lung consolidation and inflammation, respectively. Three-dimensional lung contours were generated and linked to the co-registered PET and CT images. Thresholds for pathology were set at three pooled standard deviations above the population means. Control patients were used to define thresholds to delineate consolidation on CT (by lung density in Hounsfield units) and inflammation on 18F-FDG-PET (by SUV). Consolidated and inflamed lung were presented as percentages of total lung volume. Statistical analysis Baseline clinical and imaging data were expressed as the median (interquartile range) for continuous data, and categorical data as proportions. Clinical and imaging data were presented by tertile of cardiac troponin (a priori analysis), presence of myocarditis on CMR, myocardial cell infiltration on PET, and degree of pulmonary inflammation/consolidation. A priori hypothesis testing was carried out across categorical and continuous covariates by tertile of cardiac troponin (11). Exploratory hypothesis testing was further conducted when comparing clinical and imaging parameters by myocarditis and myocardial cell infiltration status. A p-value of <0.05 was considered statistically significant. No correction for multiple testing was done. Analysis was done in R (Version 4.0.3, http://www.R-project.org/). The remainder had non-specific viral symptoms (fever, myalgia, arthralgia, fatigue, diarrhoea). Cardiac magnetic resonance imaging Twenty-six patients underwent CMR scanning. All scans were of adequate quality for volume and wall motion analysis. One scan was of insufficient quality for T1 mapping analysis, one of insufficient quality for T2 analysis, and one inadequate for LGE analysis. Myocarditis status was therefore available in 25 patients using the specific 2018 Lake Louise Criteria (14). Seven (78%) also had evidence of active myocardial oedema by T2 value. Six (57%, n=5/9) of these patients had evidence of LGE, with four in a myocarditis pattern (mid-wall), one subendocardial and one both. Thirteen patients (50%, n=13/25) had evidence of myocarditis by the sensitive criteria (Supplemental Table 4; Table 2). Computed tomography coronary angiography Twenty-five patients underwent CTCA and all had sufficient image quality. No patients had significant obstructive coronary artery disease (lumen stenosis >50%, Figure 2).
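Before turning to the PET/CT results, the following is a minimal numpy sketch of the quantitative definitions described in the methods above (regional TBR and the pathology thresholds for the lung maps). It is an illustration only, not the study's analysis code: the array inputs stand in for values that would come from the co-registered PET/CT analysis, and a simple sample standard deviation is used where the text describes a pooled standard deviation.

```python
import numpy as np

def target_to_background_ratio(arterial_suv_max, venous_suv_mean):
    """TBR for one aortic region: slice-wise ratios of maximum arterial SUV
    to mean venous SUV, averaged across the axial measurements."""
    ratios = np.asarray(arterial_suv_max, dtype=float) / float(venous_suv_mean)
    return float(ratios.mean())

def pathology_threshold(control_values, n_sd=3.0):
    """Threshold for 'abnormal' voxels: n_sd standard deviations above the
    control-population mean (HU for consolidation, SUV for inflammation)."""
    v = np.asarray(control_values, dtype=float)
    return float(v.mean() + n_sd * v.std(ddof=1))

def percent_lung_above(voxel_values, threshold):
    """Percentage of lung voxels exceeding the threshold, i.e. the affected
    fraction of total lung volume (assuming equal-sized voxels)."""
    v = np.asarray(voxel_values, dtype=float)
    return float(100.0 * np.mean(v > threshold))
```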
Positron emission tomography/computed tomography Vascular inflammation: Arterial inflammation in the ascending aorta by TBR was 1.97±0.35 (Supplemental Figure 2) and similar to historical or active controls (1.92±0.32 and 2.03±0.05, p=0.74). There was no significant regional variation in TBR values across the different aortic segments (Supplemental Table 7). No patients fulfilled the visual criteria for inflammation in the aorta or carotids (Supplemental Figure 3). There was no correlation between aortic FDG uptake (TBR) and CRP, hsTrop or viral load (Supplemental Table 8). When categorising patients who underwent both CMR and pulmonary 18F-FDG-PET/CT into tertiles, 7/25 patients had 0-5%, 9/25 had 5-25% and 9/25 had >25% inflammation of the total lung volume (Supplemental Table 7). Similarly, 5/25 patients had 0-7%, 10/25 had 7-15% and 10/25 had >15% consolidation of total lung volume (Supplemental Table 9). DISCUSSION To our knowledge this is the first study to systematically use molecular imaging, alongside anatomical and functional modalities, to explore cardiovascular and pulmonary pathobiology in acute COVID-19 infection. We make some important observations. First, rates of myocarditis (by CMR criteria) and myocardial inflammatory cell infiltration (by 18F-FDG-PET/CT imaging) were substantial at 35% and 30% respectively. Second, the median burden of lung inflammation and consolidation was quantified at 17% and 11% of total lung volume respectively. Lung involvement, both inflammation and consolidation, did not correlate with the presence of myocarditis or myocardial inflammatory cell infiltration. Third, vasculitis was not present in acute COVID-19. Finally, biochemical evidence of myocardial injury was uncommon, with only two acute COVID-19 patients showing elevated troponin levels. Our rates of myocarditis, despite recruiting patients with acute COVID-19, were lower than previously reported (1) but similar to other recent studies (2,3). This in part reflects our choice of the more specific 2018 Lake Louise criteria to define CMR-based myocarditis. Indeed, the prevalence of myocarditis rose from 1 in 3 to 1 in 2 when applying the most sensitive criteria, as in previous studies (1). Using 18F-FDG-PET/CT imaging, myocardial inflammatory cell infiltration was present in 1 in 3 cases. Surprisingly, neither the presence of myocarditis nor myocardial cell infiltration was associated with biochemical evidence of cardiac injury. Myocarditis may not always result in cell necrosis and troponin release (16,17). Further, troponin release may be dynamic (18,19) and may not be appreciated on single-point blood sampling at hospital admission. Finally, studies on myocarditis have generally been restricted to patients with troponin elevations in whom significant coronary disease has been excluded (20). In contrast, our study involved cardiac imaging of an unselected population with an acute viral infection, regardless of troponin concentration. Whilst CMR-based tissue characterisation can indicate myocardial oedema, molecular imaging with 18F-FDG-PET/CT reflects myocardial cellular infiltration - a better indicator of an acute inflammatory process (21,22).
Of those patients who had CMR-defined myocarditis, only 53% had inflammatory cell presence. This suggests that acute myocardial inflammation may either have occurred prior to presentation, or that oedema is not always due to direct cellular infiltration. SARS-CoV-2 infection is present in the myocardium in the majority of individuals dying from COVID-19 (22). Furthermore, in vitro studies have shown SARS-CoV-2 cytopathic infection of cardiac myocytes with macrophage and lymphocytic infiltration (1,22,23). However, a cytokine storm has also been implicated in COVID-19 infection (24,25). This process occurs some time after viral inoculation and may also result in cardiac pathology without the presence of SARS-CoV-2 in the myocardium (26). In this case, systemic cytokines may also cause systemic capillary leak (with resultant oedema) without cellular infiltration of all tissues (27). Therefore, cardiac injury may result from a dual injury process: initially from viral infection, followed by a subsequent cardiac insult from a systemic inflammatory response. In keeping with this, we demonstrated that some patients had evidence of prior myocardial fibrosis without associated oedema, but also had active oedema without fibrosis in other regions (Figure 1). Whilst the pathogenesis of hypercoagulability in COVID-19 remains unclear, vascular thrombosis has been described in hospitalised patients (28). Endothelial injury and vascular inflammation have been postulated to play a central role (9,29,30). In contrast, our study did not find any supporting evidence of arterial inflammation in acute COVID-19. We further found no evidence of coronary thrombosis to explain the myocardial pathology observed on imaging (Figure 2). Previous studies have demonstrated coronary artery obstruction and ischaemic injury patterns on CMR; however, those study populations were restricted to patients with troponin elevations (3). As such, we can conclude that the mechanism of cardiac pathology in acute COVID-19 infection is unlikely to have occurred secondary to coronary atherosclerosis, and the reported high prevalence of vascular thrombosis is not due to an arterial vasculitic process (28). Macrophages and monocytes are known to be involved in the pathogenesis of acute respiratory distress syndrome, and there is growing evidence of their involvement in COVID-19-related pulmonary injury (31). We showed that the degree of pneumonitis, by 18F-FDG-PET/CT, was variable and correlated with the degree of lung consolidation, but was not associated with the presence of myocarditis. This suggests that myocarditis can occur in patients with minimal lung involvement. Our study has some limitations. Firstly, although achieving comprehensive phenotyping, this was an observational study in a small COVID-19 population. Almost half of the patients received either dexamethasone or remdesivir, which may have suppressed the inflammatory response and led to an underestimate of myocardial inflammation. Scanning, however, was performed early in the clinical course. Secondly, our assessment of vasculitis was based on 18F-FDG-PET/CT uptake in the large vessels. Vascular inflammation in the smaller vessels may be undetected due to limited spatial resolution.
However, if vascular inflammation were secondary to a systemic cytokine storm or immune response, it would be expected to be reflected in the aorta and the medium-sized carotids. Thirdly, we excluded patients with severe COVID-19 infection who were unable to tolerate imaging, limiting the generalizability of our findings in this population. Finally, we did not perform cardiac biopsy. Although this is the gold standard for the diagnosis of myocarditis, we performed deep phenotyping using three different imaging modalities. The combination of myocardial inflammatory cell identification by 18F-FDG-PET and myocarditis detection by CMR (using the strictest criteria to identify oedema) makes our findings robust. In conclusion, for the first time in acute COVID-19 infection and with the use of multimodality imaging, we make the following observations. Myocarditis was present in one in three patients, and the majority of these patients had evidence of inflammatory cell infiltration by cardiac 18F-FDG-PET/CT. Pneumonitis was ubiquitous in acute COVID-19, but this inflammation was not associated with CMR myocarditis. The mechanism of cardiac pathology in acute COVID-19 is non-ischaemic, and vascular thrombosis in acute COVID-19 is not due to a vasculitic process involving large or medium-sized vessels. Acknowledgements: We would like to thank Rehema Sunday and Norah Matheka for coordinating the study. Disclosures: Nil. Trial registration: This study has been registered at the ISRCTN registry (ID ISRCTN12154994) on 14th August 2020. Accessible at www.isrctn.com/ISRCTN12154994. Figure 1 legend: There is mid-wall injury at the basal myocardium in the septum (white arrows) shown by CMR (a) native T1, (b) post-contrast T1 and LGE (h, blue arrow). There was no increase in T2 values in this basal region (c), but there is a gross increase in mid-ventricular septal T2 (d, red arrows), indicating oedema remote from prior myocardial fibrosis. There was minimal lung consolidation (e, red contours) or inflammation (f, blue contours). There is diffuse bi-ventricular 18F-FDG uptake, significantly higher than in the liver (g). The patient had severe left and right ventricular impairment. Figure legend: Top panel (blue outline) represents a patient with significant myocardial inflammatory cell infiltration with some pulmonary involvement - 17% lung consolidation and 29% inflammation; cardiac inflammatory cell infiltration is seen as focal on diffuse bright spots in the lateral, anterior and septal walls. Bottom panel (red outline) represents another patient with no myocardial involvement but with significant lung consolidation (35%) and inflammation (54%).
4,354
2022-03-22T00:00:00.000
[ "Medicine", "Biology" ]
All Complete Intersection Calabi-Yau Four-Folds We present an exhaustive, constructive classification of the Calabi-Yau four-folds which can be described as complete intersections in products of projective spaces. A comprehensive list of 921,497 configuration matrices which represent all topologically distinct types of complete intersection Calabi-Yau four-folds is provided and can be downloaded at http://www-thphys.physics.ox.ac.uk/projects/CalabiYau/Cicy4folds/index.html . The manifolds have non-negative Euler characteristics in the range 0 - 2610. This data set will be of use in a wide range of physical and mathematical applications. Nearly all of these four-folds are elliptically fibered and are thus of interest for F-theory model building. Introduction Calabi-Yau manifolds play an important role in several branches of mathematics and physics. Often one obstruction to progress in a given area is the lack of large data sets of example manifolds. In this paper, we take a step towards rectifying this situation by explicitly constructing and classifying a specific class of Calabi-Yau four-folds. This set consists of Calabi-Yau four-folds which can be realized as complete intersections in products of complex projective spaces (the CICYs), arguably the simplest construction of Calabi-Yau manifolds available. The data set we find consists of some 921,497 configuration matrices describing these Calabi-Yau four-folds and thus provides a large, explicit and easy to manipulate class of such manifolds. For Calabi-Yau three-folds, all possible distinct CICYs were classified in 1988 by Candelas et al. [1]. By means of a computer algorithm, a list of 7890 configuration matrices was obtained. This data set has been immensely useful, particularly in the context of string theory, and is still used to this day. For example, more recently, freely-acting symmetries for CICY three-folds have been classified [2] and a large class of heterotic string standard models has been constructed based on these manifolds [3,4]. The main purpose of the present paper is to carry out an analogous classification of CICY four-folds. Calabi-Yau four-folds are of particular importance for the construction of four-dimensional N = 1 string vacua based on F-theory [5][6][7][8]. If the success of heterotic model building, where the systematic analysis of large classes of vacua has led to the discovery of many standard-like models [3], is to be emulated in F-theory, large, accessible classes of Calabi-Yau four-folds will be required [9]. Moreover, for the application to F-theory, Calabi-Yau four-folds need to allow for an elliptic fibration structure, where the six-dimensional base manifold corresponds to the "physical" space required in the compactification from ten to four dimensions and the torus fiber describes the variation of the axio-dilaton over this base space. As we will see, practically all of the CICY four-folds which arise from our classification allow for an elliptic fibration and are, therefore, of potential use for F-theory. In order to introduce some basic ideas and discuss elementary properties of CICY four-folds we would like to start with a prototypical example, given by the configuration matrix
\[
\left[\begin{array}{c|cc}
\mathbb{P}^1 & 1 & 1 \\
\mathbb{P}^2 & 1 & 2 \\
\mathbb{P}^3 & 0 & 4
\end{array}\right] \qquad (1.1)
\]
The notation is to be understood as follows. The first column of the matrix denotes the dimensions of the projective spaces whose product forms the ambient space into which the CICY is embedded. Here, this ambient space is P^1 × P^2 × P^3.
Each of the remaining columns denotes the multi-degree of a polynomial in the ambient projective coordinates. For the present example, we have two polynomials with multi-degrees (1, 1, 0) and (1, 2, 4), where the three entries refer to the degrees in the coordinates of P^1, P^2 and P^3, respectively. The CICY defined is the common zero locus of these polynomials. If we denote the P^1 coordinates by x_i, where i = 0, 1, the P^2 coordinates by y_a, where a = 0, 1, 2, and the P^3 coordinates by z_\alpha, where \alpha = 0, . . . , 3, then these polynomials can be written as
\[
p_1 = \sum_{i,a} c_{ia}\, x_i y_a \; , \qquad
p_2 = \sum_{i,a,b,\alpha,\beta,\gamma,\delta} d_{iab\alpha\beta\gamma\delta}\, x_i y_a y_b z_\alpha z_\beta z_\gamma z_\delta \; ,
\]
where c_{ia} and d_{iab\alpha\beta\gamma\delta} are complex coefficients. Hence, the configuration matrix (1.1) describes a family of CICYs parametrized by the space of coefficients in these polynomials. Fortunately, many of the basic properties, such as the Euler characteristic, do not depend on the specific choice of these coefficients but only on the configuration matrix. This feature is of course one of the strengths of the configuration notation and one of the main motivations for its introduction. For the purpose of applications to F-theory, how do we identify the existence of an elliptic fibration structure for such a CICY four-fold? In fact, the configuration matrix (1.1) represents an example of a CICY with an "obvious" elliptic fibration, that is, a fibration which is consistent with the projective ambient space embedding. To see this we note that the first two rows of the configuration matrix (1.1) are given by
\[
\left[\begin{array}{c|cc}
\mathbb{P}^1 & 1 & 1 \\
\mathbb{P}^2 & 1 & 2
\end{array}\right]
\]
and represent a Calabi-Yau one-fold, that is, a torus T^2. The full configuration (1.1) describes a CICY where this torus is fibered over the base space P^3. It turns out that this fibration has a section. As we will show, all but 477 of our 921,497 CICY configuration matrices have an elliptic fibration of this kind, consistent with the projective embedding. Indeed, many of these have a large number of different such fibrations, many of them with sections. This means the number of physical F-theory compactifications which can be obtained from this data set is, in fact, much larger than 921,497. Our approach for classifying CICY four-folds will broadly follow the algorithm for the classification of CICY three-folds set out in ref. [1]. However, the large scope of the project, reflected in the total number of configuration matrices and their maximal size, means that numerous efficiency improvements had to be made in order to complete the task in a reasonable amount of computing time. Moreover, some of the methods do not generalize from three- to four-folds and had to be modified appropriately. As an example, we mention the operation on configuration matrices referred to as "splitting". It involves increasing the size of the configuration by breaking up a column of the original matrix into several summands and adding a P^n factor to the ambient space. A crucial step in the classification algorithm is to decide whether or not a splitting is effective, that is, whether it leads to a topologically different manifold. Unfortunately, the effectiveness criterion for CICY three-folds developed in ref. [1] does not generalize to four-folds and a new criterion had to be found. The details of the classification algorithm, including an effectiveness criterion for four-fold splittings, and the main results of the classification will be described in the remainder of this paper. In a longer, companion paper to this article [10], we will provide additional properties of the manifolds in this data set.
This will include information on Hodge numbers, Chern classes, and the structure of elliptic fibrations and sections. The paper is organized as follows. In the next section, we define the data set we will be studying in more detail and explain why a finite number of configuration matrices suffices to represent all CICY four-folds. Essentially, different configuration matrices can describe the same Calabi-Yau manifold, and all CICY four-folds are accounted for by a finite subset of the infinite number of possible configuration matrices. We obtain upper bounds on the size of the matrices that need be considered and provide a table of all possible ambient spaces that can occur in this finite list. To classify the different manifolds it is useful to compute the Euler characteristic χ, which only depends on the configuration matrix. The formula for χ together with expressions for the Chern classes are introduced in section 3. In section 4 different types of possible equivalences, which have been taken into account in the compilation of our list, are discussed. It is explained how they generalize known results for three-folds to four-folds and how they can be dealt with efficiently. In section 5, we describe in detail the algorithm that was used to compile our list. The results of running this algorithm are presented in section 6. We provide a histogram of the different values for the Euler characteristic that occur in the list, discuss the question of how many topologically distinct manifolds are present and how many manifolds have an obvious fibration structure. We conclude in section 7. Definitions and finiteness of the class We begin with a general description of the CICY four-folds classified in this paper. Our notation and conventions largely follow the original papers on CICY three-folds [1,[11][12][13] and ref. [14]. We consider the complete intersection of K polynomials p_\alpha in a product of m projective spaces P^{n_1} × · · · × P^{n_m} of total dimension K + 4 = \sum_{r=1}^{m} n_r. In the following, we use indices r, s, . . . = 1, . . . , m to label the projective ambient space factors P^{n_r} and indices \alpha, \beta, · · · = 1, . . . , K to label the polynomials p_\alpha. Such manifolds are described by a configuration matrix
\[
[n|q] = \left[\begin{array}{c|ccc}
\mathbb{P}^{n_1} & q^1_1 & \cdots & q^1_K \\
\vdots & \vdots & & \vdots \\
\mathbb{P}^{n_m} & q^m_1 & \cdots & q^m_K
\end{array}\right] \qquad (2.1)
\]
with non-negative integer entries q^r_\alpha. The columns q_\alpha = (q^r_\alpha)_{r=1,...,m} of this matrix denote the multi-degrees of the defining polynomials p_\alpha. More precisely, the polynomial p_\alpha is of degree q^r_\alpha in x_{r,i}, the homogeneous coordinates of P^{n_r}. In order to ensure that this prescription defines a four-dimensional manifold, we demand that the K-form
\[
dp_1 \wedge \cdots \wedge dp_K \qquad (2.2)
\]
is nowhere vanishing. The configuration [n|q] describes a family of CICYs redundantly parametrized by the space of coefficients in the polynomials p_\alpha. The strength of this notation rests on the fact that key properties of the manifolds defined in this way only depend on the configuration matrix and not on the specific choice of polynomial coefficients. Moreover, it was shown in ref. [12] that for every configuration a generic choice of coefficients defines a complete intersection manifold. In the following, we will not distinguish between the family [n|q] and a specific member thereof. In order for a configuration matrix (2.1) to define Calabi-Yau manifolds we must ensure the vanishing of the first Chern class, which is equivalent to the conditions
\[
\sum_{\alpha=1}^{K} q^r_\alpha = n_r + 1 \qquad (2.3)
\]
on each row of the configuration matrix.
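As a small concrete check of the conditions just described, the following is a minimal Python sketch (not taken from the paper's own classification code) that verifies the dimension condition and the Calabi-Yau row-sum condition (2.3) for a candidate configuration matrix, and applies it to the example (1.1) from the introduction.

```python
def is_cy_fourfold_configuration(n, q_columns):
    """Check that [n|q] can describe a CICY four-fold:
    the ambient dimension minus the number of polynomials equals 4, and every
    row of the degree matrix sums to n_r + 1 (vanishing first Chern class)."""
    K = len(q_columns)
    if sum(n) - K != 4:
        return False
    return all(sum(col[r] for col in q_columns) == n[r] + 1
               for r in range(len(n)))

# Example (1.1): ambient space P^1 x P^2 x P^3 with polynomials of
# multi-degree (1, 1, 0) and (1, 2, 4).
print(is_cy_fourfold_configuration([1, 2, 3], [(1, 1, 0), (1, 2, 4)]))  # True
```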
The conditions on CICY configuration matrices stated so far are not particularly stringent and it is clear that the set of such matrices is infinite. However, different configuration matrices can describe the same Calabi-Yau four-fold. In order to arrive at a finite list classifying all topological types of CICY four-folds, we need to identify suitable equivalence relations between configurations and only keep one representative per class. The simplest example of such an equivalence relation stems from the following observation. The ordering of ambient space factors and polynomials in the configuration matrix is completely arbitrary. Therefore, two configuration matrices that differ only by permutations of rows or columns describe the same family of CICY four-folds. To reduce the occurrence of such permutations we will, in our algorithm, impose a lexicographic order (with the entries $q^r_\alpha = 0, 1, 2, \dots$ ordered by value) on the rows and columns [1]. It then suffices to consider only permutations of rows where the corresponding ambient space factors are the same. Another relevant observation is that a polynomial linear in the coordinates of a single P n defines a sub-manifold P n−1 ⊂ P n . This means that a multi-degree q α with a single non-zero entry $q^r_\alpha = 1$ can be removed from a configuration matrix while simultaneously reducing the dimension n r to n r − 1. To exclude such cases, we will require the degree of a polynomial to be at least two if it depends on one projective space only. This is equivalent to the condition $\sum_{r=1}^{m} q^r_\alpha \geq 2$ for all $\alpha = 1, \dots, K$, (2.4) which we impose on all configuration matrices. Further, we note that we are not interested in block-diagonal configuration matrices of the form $\left[\begin{smallmatrix} 1 & 2 & \mathbf{0} \\ \mathbf{n} & \mathbf{0} & \mathbf{q} \end{smallmatrix}\right]$. (2.5) The sub-configuration [1|2] describes two points in P 1 and the above configuration is, therefore, equivalent to two copies of [n|q]. Now focus on configuration matrices with a fixed size, (m, K). All such matrices can be generated by a two-step procedure that is well-suited for machine computation [14]. First, one lists all m-dimensional integer vectors n with n r > 0, ordered such that n r ≥ n s if r > s, which satisfy the dimensional constraint $\sum_{r=1}^{m} n_r = K + 4$. Second, for each n, one lists all matrices q which satisfy eqs. (2.3) and (2.4), excluding matrices of the form (2.5). This is most easily done by starting from an initial configuration and repeatedly shifting entries within each row, while preserving the lexicographic order of rows and columns. For a given dimension vector n, this procedure clearly terminates. However, it is not clear that the complete algorithm will also terminate and lead to a finite list, since the list of vectors n is, a priori, unbounded. However, it has been observed [12] that beyond a certain upper limit in n, every configuration matrix is equivalent, by the above relations, to a smaller matrix and, hence, does not need to be included. In this sense, only the minimal configuration of a given manifold is kept in the list. More precisely, generalizing the arguments in ref. [12], it can be shown that minimal CICY d-folds satisfy upper bounds (2.8) on the number s of ambient P 1 factors, the number p of ambient P n factors with n > 1, and the quantity $\alpha := \sum_{\{r\,|\,n_r > 1\}} (n_r - 1)$, where the sum runs over all ambient P n factors with n > 1. Since this bounds the total number, m, of ambient projective spaces as well as the total ambient space dimension from above, the set of minimal configurations is finite.
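To make the two-step procedure concrete, the following sketch enumerates, for a fixed dimension vector n, all q-matrices that satisfy conditions (2.3) and (2.4), treating matrices that differ only by column order as identical. It is an illustration of the naive enumeration only (the function names and the small test cases are ours); row-permutation redundancies and the exclusion of the form (2.5) are not handled, and the combinatorial growth for larger ambient spaces is exactly what motivates the refined algorithm of section 5.

```python
from itertools import product, combinations_with_replacement

def candidate_columns(n):
    """Columns q_alpha with non-negative entries, entry r at most n_r + 1,
    and total degree at least two (condition (2.4))."""
    ranges = [range(nr + 2) for nr in n]
    return [c for c in product(*ranges) if sum(c) >= 2]

def naive_configurations(n, d=4):
    """All q-matrices (as sorted column tuples) for ambient dimensions n,
    defining complete intersections of dimension d."""
    K = sum(n) - d
    if K <= 0:
        return []
    target = [nr + 1 for nr in n]                    # required row sums, eq. (2.3)
    configs = set()
    for cols in combinations_with_replacement(candidate_columns(n), K):
        if all(sum(col[r] for col in cols) == target[r] for r in range(len(n))):
            configs.add(tuple(sorted(cols)))         # column order is irrelevant
    return sorted(configs)

print(naive_configurations([5]))        # [((6,),)]  -> the sextic four-fold [5|6]
print(naive_configurations([1, 4]))     # [((2, 5),)]
print(naive_configurations([4], d=3))   # [((5,),)]  -> the quintic three-fold
```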
For CICY four-folds, we must set d = 4, and the bounds specialize accordingly. There are 660 different possible ambient spaces that satisfy these bounds and they are presented in table 1. As will be explained in section 4, it is possible to employ further techniques, beyond those discussed here, to remove redundant descriptions of CICYs. This will lead to the refined, more efficient algorithm described in section 5. However, as we will see, the simple method outlined in this section still serves a useful purpose as the first, initiating step of the full algorithm. Table 1. All possible ambient spaces for CICY four-folds are shown in this table. These 660 ambient manifolds fall into classes according to the number of P 1 - and P 2 -factors. The third column gives the excess number $N_{\rm ex} = \sum_{r=1}^{m}(n_r + 1) - 2K$. It vanishes when all the columns sum to two, which, from eq. (2.4), is the minimal non-trivial value. A large value of N ex generally means that there are many ways to construct inequivalent configuration matrices for a given ambient space. The minimum number of P 1 factors is zero except for (P 1 ) f where f min = 5, (P 1 ) f P 2 where f min = 3, (P 1 ) f P 3 where f min = 2, (P 1 ) f P 4 where f min = 1 and (P 1 ) f (P 2 ) 2 where f min = 1. This table follows the format used in ref. [1]. Chern classes and Euler characteristic To implement more advanced methods for redundancy removal, we require explicit expressions for some of the topological properties of complete intersection manifolds. For this reason, we review the explicit formulae for the Euler characteristic, which is of particular importance, and the Chern classes. These formulae will be presented for general complete intersection manifolds with configuration matrix [n|q] which do not necessarily have to satisfy the Calabi-Yau condition (2.3). We begin with the total Chern class which is given by the expression [12] $c([\mathbf{n}|\mathbf{q}]) = \frac{\prod_{r=1}^{m}(1 + J_r)^{n_r+1}}{\prod_{\alpha=1}^{K}\big(1 + \sum_{r} q^r_\alpha J_r\big)}$, (3.1) where J r denotes the Kähler form of the r-th ambient projective space P nr , normalized in the standard way such that $\int_{\mathbb{P}^{n_r}} J_r^{n_r} = 1$. (3.2) Expanding eq. (3.1) yields explicit formulae for the first four Chern classes; the first of these reads $c_1([\mathbf{n}|\mathbf{q}]) = \sum_{r}\big[\,n_r + 1 - \sum_{\alpha} q^r_\alpha\,\big] J_r$, and the higher ones follow from the higher-order terms of the expansion. Here, the multi-index Kronecker delta is defined to be δ r 1 ...rn = 1 if r 1 = r 2 = . . . = r n and zero otherwise. For a configuration to describe a family of Calabi-Yau manifolds we need c 1 ([n|q]) = 0 which leads to the Calabi-Yau constraint (2.3) presented earlier. In this case, the above equations for the higher Chern classes simplify substantially since all terms proportional to the first Chern class can be dropped. The fourth Chern class is related to the Euler characteristic χ by a variant of the Gauss-Bonnet formula, $\chi([\mathbf{n}|\mathbf{q}]) = \int_{[\mathbf{n}|\mathbf{q}]} c_4([\mathbf{n}|\mathbf{q}])$. An integration of a top-form ω over [n|q] is evaluated by pulling it back to an integration over the ambient space $\mathcal{A} = \mathbb{P}^{n_1} \times \cdots \times \mathbb{P}^{n_m}$ using eq. (3.9), $\int_{[\mathbf{n}|\mathbf{q}]} \omega = \big[\,\omega \wedge \mu\,\big]_{\rm top}$, where $\mu = \prod_{\alpha=1}^{K}\big(\sum_r q^r_\alpha J_r\big)$ (3.8) is the form Poincaré-dual to [n|q] in $\mathcal{A}$ and where the subscript "top" means that the coefficient of the volume form $J_1^{n_1} \wedge \cdots \wedge J_m^{n_m}$ of $\mathcal{A}$ should be extracted from the enclosed expression. Equivalent configurations and redundancy removal After this preparation, we can now discuss more refined equivalence relations between configuration matrices. It will then be a simple matter, in the next section, to construct an improvement on the "naive algorithm" given in section 2. There are several different ways in which two configuration matrices can be equivalent: I. Permutations of rows and columns. As we have already discussed, two configuration matrices are equivalent if they differ only by a permutation of rows or columns.
The resulting redundancy is partially resolved by imposing the aforementioned lexicographic order on the rows and columns [1]. However, a residual redundancy remains. A "brute force" procedure to remove this redundancy is to generate all row and column permutations of a matrix and compare with the candidate equivalent configuration. For the larger CICY configuration matrices which appear in our classification, this eventually gets out of hand, due to the exponential growth of the number of permutations with matrix size. An alternative method which is more efficient, particularly for large matrix size, works as follows. Consider two configurations, $[\mathbf{n}|\mathbf{q}]$ and $[\tilde{\mathbf{n}}|\tilde{\mathbf{q}}]$, of the same size. First we impose a sequence of necessary conditions for equivalence in order to identify inequivalent configurations efficiently. The algorithm is stopped as soon as non-equivalence is established. The first necessary condition is that the tallies of numbers in each row and column should coincide for two matrices related by row or column permutations. Hence, if the tally disagrees the matrices are inequivalent. In the second step, we compare the trace and eigenvalues of the m × m square matrices $M = qq^T$ and $\tilde{M} = \tilde{q}\tilde{q}^T$. If either disagrees the matrices are inequivalent. For configurations which pass these tests we have to find a necessary and sufficient criterion for equivalence. To this end consider $O(m)$ matrices $R$ and $\tilde{R}$ diagonalizing $M$ and $\tilde{M}$, that is, $R^T M R = \tilde{R}^T \tilde{M} \tilde{R} = {\rm diag}(a_1, \dots, a_m)$. In addition, we assume that the eigenvalue spectrum $\{a_r\}$ is non-degenerate. (If the spectrum happens to be degenerate we can either modify the configuration matrices $q$ and $\tilde{q}$ in a way that does not affect equivalence but may change the spectrum, for example by adding the same constant to each entry, or use the brute force method described earlier.) The crucial observation is then that, given a fixed order of the eigenvalues, the matrices $R$ and $\tilde{R}$ are essentially unique, apart from a choice of sign for each of their columns. Given these sign conventions we then compute the matrix $P = \tilde{R} R^T$ and check if it is a permutation matrix. If it is not, the configurations are inequivalent. If it is, we compute $q' = P^T \tilde{q}$ and check if it has the same set of column vectors as $q$. If it does, the two configurations are equivalent, otherwise they are not. All of the above can be efficiently implemented in Mathematica. The full proof that this procedure is indeed necessary and sufficient for deciding the equivalence of two configurations will be given in the forthcoming longer publication [10]. II. Ineffective splittings. The splitting principle [1] provides an efficient method of generating new configurations from old ones. It plays a key role in the algorithm to generate the full list of CICY configurations, as will be explained in section 5. As we shall see in what follows, deciding whether or not a four-fold splitting is effective, that is, whether it leads to a new manifold, cannot be accomplished by a simple generalization of the three-fold criterion and requires some new ideas. A general P n splitting is defined as a relation of the form $\left[\begin{array}{c|cc} \mathbf{n} & \sum_{a=1}^{n+1} \mathbf{u}_a & \mathbf{q} \end{array}\right] \longleftrightarrow \left[\begin{array}{c|ccccc} n & 1 & 1 & \cdots & 1 & \mathbf{0} \\ \mathbf{n} & \mathbf{u}_1 & \mathbf{u}_2 & \cdots & \mathbf{u}_{n+1} & \mathbf{q} \end{array}\right]$. (4.2) Read from left to right this correspondence is termed splitting while its inverse is called contraction. When the two configurations describe the same underlying manifold, the splitting is called ineffective, otherwise it is referred to as an effective splitting.
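For item I above, a compact illustration of the two cheap necessary checks, together with a brute-force fallback that is adequate for small matrices, is given below. The eigenvector-based construction of the permutation P described in the text is not reproduced; the function names and the toy example are our own.

```python
import numpy as np
from itertools import permutations

def _column_multiset(q):
    return sorted(map(tuple, q.T.tolist()))

def possibly_equivalent(q1, q2):
    """Necessary conditions for q1 and q2 to be related by row/column permutations."""
    if q1.shape != q2.shape:
        return False
    if sorted(map(sorted, q1.tolist())) != sorted(map(sorted, q2.tolist())):
        return False                       # row tallies differ
    if sorted(map(sorted, q1.T.tolist())) != sorted(map(sorted, q2.T.tolist())):
        return False                       # column tallies differ
    ev1 = np.sort(np.linalg.eigvalsh(q1 @ q1.T))   # spectrum of M = q q^T
    ev2 = np.sort(np.linalg.eigvalsh(q2 @ q2.T))
    return np.allclose(ev1, ev2)

def equivalent_bruteforce(q1, q2, n1, n2):
    """Exact check by trying all row permutations that respect the ambient
    dimensions n and comparing column multisets (feasible for small m only)."""
    if not possibly_equivalent(q1, q2) or sorted(n1) != sorted(n2):
        return False
    m = q1.shape[0]
    for perm in permutations(range(m)):
        if [n2[p] for p in perm] != list(n1):
            continue                       # only permute rows with equal n_r
        if _column_multiset(q2[list(perm), :]) == _column_multiset(q1):
            return True
    return False

q_a = np.array([[1, 1], [1, 2], [0, 4]])
q_b = np.array([[1, 2], [1, 1], [0, 4]])          # rows of P^1 and P^2 swapped
print(equivalent_bruteforce(q_a, q_b, [1, 2, 3], [2, 1, 3]))   # True
```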
To decide whether or not the two configurations in (4.2) describe the same underlying manifold, we first note that these two manifolds share common loci in their complex structure moduli space, the so-called determinantal variety. To see this, introduce homogeneous coordinates $x = (x_i)_{i=0,\dots,n}$ for the additional P n which arises in the splitting and a matrix $F = (f_{ai})$ of polynomials $f_{ai}$ with multi-degrees $\mathbf{u}_a$. Then, the zero locus of the first n + 1 polynomials in the split configuration in (4.2) can be written as $Fx = 0$. Evidently, this equation has a solution in P n if and only if $p \equiv \det(F) = 0$. The polynomial p has multi-degree $\mathbf{u} = \sum_{a=1}^{n+1} \mathbf{u}_a$ and is a specific instance of the first defining polynomial of the contracted configuration in (4.2). Together with the polynomials specified by q it defines the determinantal variety. The question then becomes whether or not this determinantal variety is smooth. If it is, the two configurations can be smoothly deformed into each other and, hence, represent the same topological type of Calabi-Yau manifolds. In this case, the splitting is ineffective. Otherwise, that is, when the determinantal variety has a non-trivial singular locus, they describe different manifolds and the splitting is effective. For CICY three-fold splittings, the singular locus of the determinantal variety is a zero-dimensional space. That is, it can either be the empty set or a collection of points. It turns out that the number of singular points is counted, up to a non-zero numerical factor, by the difference of Euler characteristics between the original and the split configuration. This leads to the simple rule that two three-fold configurations, related by splitting as in (4.2), are equivalent if and only if they have the same Euler characteristic [1]. For a CICY four-fold, the singular locus of the determinantal variety has a more complicated structure. As was first noted in ref. [16], four-fold splittings have a different local degeneration structure than three-fold splittings. The determinantal variety of a CICY four-fold splitting becomes singular on a complex curve. The Euler characteristic of this curve is still proportional, with a non-zero factor, to the difference of Euler characteristics between the two configurations involved. This means that a four-fold splitting which changes the Euler characteristic is definitely effective. If the splitting preserves the Euler characteristic, however, then we only know that the singular locus must have vanishing Euler characteristic. This means that the singular locus could either be the empty set or a collection of tori. In the case of CICY four-folds, therefore, it is possible to have effective splittings at constant Euler characteristic. Clearly, to detect such effective splittings which preserve the Euler characteristic we need additional criteria. For P 1 splittings between CICY four-folds, a necessary and sufficient criterion can be obtained as follows. In this case, the one-dimensional singular locus of the determinantal variety can be described as a complete intersection, associated to the configuration matrix $S \equiv \left[\begin{array}{c|ccccc} \mathbf{n} & \mathbf{u}_1 & \mathbf{u}_1 & \mathbf{u}_2 & \mathbf{u}_2 & \mathbf{q} \end{array}\right]$. We denote by $\mu_S$ the form Poincaré-dual to this singular locus in the ambient space $\mathcal{A}$, defined analogously to eq. (3.8), and by J a Kähler form on $\mathcal{A}$. A convenient choice for this Kähler form is $J = \sum_{r=1}^{m} J_r$. Then, the volume of the singular locus can be calculated by ${\rm vol}(S) = \big[\, J \wedge \mu_S \,\big]_{\rm top}$, where the subscript "top" refers to the coefficient of the top form $J_1^{n_1} \wedge \cdots \wedge J_m^{n_m}$ of $\mathcal{A}$, as before.
With the expressions for J and $\mu_S$ readily available, this allows for an explicit calculation of the volume, using the normalizations (3.2). Clearly, the singular set S is empty and, hence, the splitting ineffective, if and only if this volume vanishes. There is a trivial but helpful re-formulation of this criterion in terms of the associated zero-dimensional configuration $S' \equiv \left[\begin{array}{c|cccccc} \mathbf{n} & \mathbf{u}_1 & \mathbf{u}_1 & \mathbf{u}_2 & \mathbf{u}_2 & \mathbf{q} & \mathbf{1} \end{array}\right]$, where $\mathbf{1}$ denotes a column with all entries 1. Then, for the choice of Kähler form $J = \sum_{r=1}^{m} J_r$, the volume of the singular locus is proportional to the Euler characteristic $\chi(S')$, that is, to the number of points of $S'$. Hence, the splitting is effective if and only if $\chi(S') \neq 0$. Unfortunately, for higher P n splittings, n > 1, the singular locus cannot be described as a complete intersection. Hence, the above method cannot be applied and we have to rely on a different approach. As before, the first step is to compute the change of the Euler characteristic using eq. (3.9). If the Euler characteristic changes, we have an effective splitting. Otherwise, we consider the following splittings between non-Calabi-Yau three-folds, $\left[\begin{array}{c|ccc} \mathbf{n} & \sum_{a=1}^{n+1} \mathbf{u}_a & \mathbf{q} & \mathbf{e}_i \end{array}\right] \longleftrightarrow \left[\begin{array}{c|cccccc} n & 1 & 1 & \cdots & 1 & \mathbf{0} & 0 \\ \mathbf{n} & \mathbf{u}_1 & \mathbf{u}_2 & \cdots & \mathbf{u}_{n+1} & \mathbf{q} & \mathbf{e}_i \end{array}\right]$. (4.5) They are related to the original four-fold splitting (4.2) by adding one additional column, given by a standard m-dimensional unit vector $\mathbf{e}_i$, to both configuration matrices. The singular locus of these three-fold splittings consists of points whose number is proportional to the change in Euler characteristic. With the equations provided in section 3, the change of Euler characteristic for each $\mathbf{e}_i$ can be written explicitly in terms of the forms $\hat{u}_a := \sum_{r=1}^{m} u^r_a J_r$. Of course, the singular points associated to the three-fold splittings (4.5) are precisely the intersections of the four-fold singular locus (a complex curve) with the hyperplanes defined by the additional $\mathbf{e}_i$ column. Hence, if the Euler characteristic changes for at least one $\mathbf{e}_i$ the four-fold singular locus must be non-empty and the splitting is effective. Conversely, if the difference of Euler characteristics vanishes for all $\mathbf{e}_i$, that is, none of the hyperplanes intersects the four-fold singular locus, then this locus must be empty and the splitting is ineffective. In general, if two configurations are found to be related by an ineffective splitting, they describe the same underlying manifold and only the contracted matrix (that is, the matrix on the left hand side of (4.2)) will be kept in our list. III. Identities. Numerous identities between sub-configurations of CICYs have been uncovered and discussed in ref. [1]. For a few of them, only heuristic arguments exist. In the compilation of our list, we have only used those identities that have been proved rigorously and that commute with splitting, namely the identities labeled (II)(i) and (II)(ii) in ref. [1]. The first is the basic identity $[2|2] = \mathbb{P}^1$, which is applied to a full configuration matrix as $\left[\begin{array}{c|cc} 2 & 2 & \mathbf{a} \\ \mathbf{n} & \mathbf{0} & \mathbf{q} \end{array}\right] = \left[\begin{array}{c|c} 1 & 2\mathbf{a} \\ \mathbf{n} & \mathbf{q} \end{array}\right]$; the second is a further identity of the same kind. The identities are used from left to right, that is, whenever a matrix matches the pattern on the left hand side, it is replaced by the matrix on the right hand side. The proof of the basic identities is facilitated by the fact that these are either identities between one-folds or between two-folds of positive first Chern class. Both sets of manifolds are classified by their Euler characteristics, which can be computed straightforwardly by using the formulae of section 3. This concludes the list of equivalence relations we will be using in our classification algorithm.
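The Euler-characteristic part of these criteria is straightforward to automate. The sketch below (our own Python/SymPy illustration, not the authors' Mathematica code) first computes χ directly from a configuration matrix via the total Chern class (3.1), the normalization (3.2) and the pull-back rule (3.9), reproducing the standard value χ = −200 for the quintic three-fold as a check; it then builds the P 1 -split configuration of eq. (4.2) and flags a splitting as definitely effective when χ changes, leaving the finer checks (χ(S′) for P 1 splittings, the auxiliary splittings (4.5) otherwise) aside. Function names and the example split are ours.

```python
import sympy as sp

def euler_characteristic(n, q, d):
    """chi of the complete intersection [n|q] of complex dimension d,
    via [c_d(TX) ^ mu]_top with mu = prod_alpha (sum_r q^r_alpha J_r)."""
    m, K = len(n), len(q[0])
    J = sp.symbols(f"J0:{m}")
    xi = [sum(q[r][a] * J[r] for r in range(m)) for a in range(K)]
    cA = sp.prod((1 + J[r]) ** (n[r] + 1) for r in range(m))                  # numerator of (3.1)
    cN_inv = sp.prod(sum((-x) ** k for k in range(sum(n) + 1)) for x in xi)   # 1/c(N) as a finite series
    cTX = sp.expand(cA * cN_inv)
    cd = sum(t for t in sp.Add.make_args(cTX)
             if sum(sp.degree(t, Jr) for Jr in J) == d)                       # degree-d part, i.e. c_d
    top = sp.expand(cd * sp.prod(xi))                                         # c_d ^ mu
    for r in range(m):                                                        # coefficient of J_1^{n_1}...J_m^{n_m}
        top = top.coeff(J[r], n[r])
    return sp.simplify(top)

def p1_split(n, q, col, u1, u2):
    """The right-hand side of eq. (4.2) for a P^1 splitting of column `col` as u1 + u2."""
    assert all(u1[r] + u2[r] == q[r][col] for r in range(len(n)))
    rest = [[row[a] for a in range(len(q[0])) if a != col] for row in q]
    new_n = [1] + list(n)
    new_q = [[1, 1] + [0] * (len(q[0]) - 1)]                      # the new P^1 row
    new_q += [[u1[r], u2[r]] + rest[r] for r in range(len(n))]    # the old rows
    return new_n, new_q

print(euler_characteristic([4], [[5]], d=3))    # -200: the quintic three-fold
n0, q0 = [1, 2, 3], [[1, 1], [1, 2], [0, 4]]    # the example configuration (1.1)
n1, q1 = p1_split(n0, q0, col=1, u1=[0, 1, 2], u2=[1, 1, 2])
print(euler_characteristic(n0, q0, d=4), euler_characteristic(n1, q1, d=4))
# equal values do NOT yet imply ineffectiveness for four-folds (see text)
```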
Their application greatly reduces the number of repetitions in our final list of CICY four-folds. However, they do not represent an exhaustive list of identities. It is to be expected that our list of CICY four-folds still contains some repetitions. This is indeed the case for the list of 7890 CICY three-folds and has been explicitly checked in ref. [15], using Wall's theorem [14]. For our CICY four-fold list the obvious course of action is to compute topological quantities in order to discriminate between inequivalent configurations and to determine a lower bound for the number of inequivalent four-fold CICYs. Useful topological quantities in question include the Euler characteristic, Chern classes, Hodge numbers and intersection numbers. In the present paper, we will only explicitly use the Euler characteristic for this purpose. A more complete discussion which includes the other quantities will be presented in the companion paper [10]. However, the experience with CICY three-folds suggests that the number of inequivalent configurations is of the same order of magnitude as the total number of configurations in the list. The algorithm In section 2, we have described a simple and finite algorithm to directly generate all possible configuration matrices. It turns out that this naive algorithm is prohibitively slow and requires a computation time which is unfeasibly long. In this section, we use an adapted version of an algorithm first devised by Candelas et. al. [1] for CICY three-folds. The basic idea is to employ the splitting principle in order to generate new CICY configuration matrices starting from a relatively small initial set. In the first step of the algorithm, we compile a list of all configuration matrices in ambient spaces that do not contain any P 1 factors. This is done using the naive algorithm of section 2. There are 62 such ambient spaces out of the 660 listed in table 1. A new matrix is only added to the list if it is not related by row or column permutations to a matrix already contained in the list. After about 987 CPU hours, 2 a list L 0 consisting of 9522 configuration matrices in ambient spaces without P 1 factors is produced. This list is then subjected to a routine we will refer to as the second filter. This filter takes a list of matrices and removes the three different types of redundancies described in sections 4.I-III as well as matrices of the form (2.5). The second filter routine thus produces a minimal version ("minimal" in the sense of both the number of matrices and the size of each individual matrix) of the input list. When applied to L 0 , it yields a reduced list L 0 containing 4898 matrices. Since the identities listed in section 4.III have been applied, the list L 0 does contain some matrices with P 1 factors in their ambient spaces. In particular some matrices with rows of the form 1 2 0 · · · 0 are present. The only type of matrices missing from this list are those that contain one or more rows of the form 1 1 1 0 · · · 0 . According to the splitting relation (4.2), these matrices must be related to the matrices in L 0 by contraction. Conversely, the full list can be produced by repeatedly performing P 1 splittings in all possible ways on the matrices in L 0 . The first complete P 1 splitting of L 0 yields a list L 1 consisting of 28823 matrices. The union of L 0 and L 1 is then subjected to the second filter routine. The output is a list L 1 . It contains L 0 plus 25222 new matrices making a total of 30120. 
Afterwards, the set difference ∆ 1 = L 1 \ L 0 is split in all possible ways to obtain a list L 2 and the union L 1 ∪ L 2 is subjected to the second filter routine to yield a list L 2 . This is repeated until no more new matrices are produced. The inequality (2.8) guarantees that the algorithm terminates after L 12 at the latest. In the actual execution of the algorithm, it turns out that already after L 11 , all splittings become ineffective. Hence, L 11 represents the final result. A logic flowchart depicting the steps of the algorithm is shown in figure 1. Results Before we describe the results of our CICY four-fold classification, we first check that our implementation of the algorithm described in section 5 successfully reproduces the known list of CICY three-folds. The original CICY three-fold list compiled in ref. [1] can be obtained from [17]. It consists of 7890 CICY three-fold configuration matrices which include 22 direct product manifolds and 7868 spaces that cannot be written as direct products. A comparison with the list produced by our code shows a perfect match. The total CPU time to compile this list was just 72 minutes. We now present our main result, a complete classification of CICY four-folds. The list contains 921,497 configuration matrices ranging up to a matrix size of 16 × 20. Figure 1. Logic flowchart of the algorithm described in section 5. The boxes label the routines executed at each step and the arguments in parentheses are the input for the routines. The "naive algorithm" is presented in section 2. The second filter routine is denoted "2ndfilter" for brevity. By "splitting", we refer to a routine which carries out all possible P 1 splittings on the matrices of the input list. The output lists are displayed above the arrows. The sets ∆ i are defined as ∆ i := L i \ L i−1 . The algorithm terminates after 11 consecutive splittings with the routine 2ndfilter(L 10 ∪ L 11 ), which produces the final output L 11 . The total required CPU time was 7487 hours, that is, about 312 days on a single CPU. A subset of 15813 matrices corresponds to product manifolds. These fall into four types, listed with their respective numbers of matrices and Euler characteristics χ; the first of these types, T 8 , comprises 5 matrices with χ = 0. The Euler characteristic of these direct product manifolds follows from χ(M × N ) = χ(M ) · χ(N ) together with χ(T n ) = 0 and χ(K3) = 24. The numbers of these different types of direct product matrices can be explained by noting that the algorithm produces two different configuration matrices for T 2 . The Euler characteristic for each of the 921,497 matrices was computed and found to be in the range 0 ≤ χ ≤ 2610. As mentioned above, all configurations with Euler characteristic 0 correspond to direct product manifolds and the non-zero values for the Euler characteristic are found to be in the range 288 ≤ χ ≤ 2610. A logarithmic plot of the distribution of Euler characteristics is shown in figure 2. About 25% of all matrices have Euler characteristic equal to 288, the smallest non-zero value in the list. This huge peak at a single value might indicate non-trivial residual redundancies in the list. The full list of configuration matrices with Euler characteristics can be downloaded from [18]. In total, the list contains 206 different values of χ and, hence, this provides a weak lower bound on the number of inequivalent CICY four-folds.
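In outline, the iterative procedure of section 5 and figure 1 has the following shape. The sketch below is schematic pseudo-Python: the three routines it takes as arguments stand for the naive generation step of section 2, the second filter, and the exhaustive P 1 splitting step, none of which are spelled out here.

```python
# Schematic driver loop for the classification (routine names are placeholders).
def classify_cicy_fourfolds(naive_p1_free_configurations, second_filter, all_p1_splittings):
    L = second_filter(naive_p1_free_configurations())    # start set, redundancies removed
    new = list(L)                                         # configurations not yet split
    while new:
        candidates = []
        for config in new:
            candidates.extend(all_p1_splittings(config))  # every possible P^1 splitting
        filtered = second_filter(L + candidates)          # drop permutations, ineffective
                                                          # splittings and known identities
        new = [c for c in filtered if c not in L]         # Delta_i: genuinely new matrices
        L = filtered
    return L                                              # terminates by the bound (2.8)
```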
As already mentioned, this bound can be significantly strengthened by computing additional topological data, such as Hodge numbers, Chern classes and intersection numbers. A detailed analysis will be presented in ref. [10], but a preliminary calculation shows that the data set contains at least 3737 different sets of Hodge numbers. Computing even finer topological invariants will strengthen this bound further. Finally, we should address the question of how many CICY four-folds in our list have an elliptic fibration structure. We will not attempt to answer this question in full generality since a necessary and sufficient criterion for the existence of such an elliptic fibration which is suitable for practical computations is currently not known. Fortunately, for CICYs there is a particularly simple type of elliptic fibration which is consistent with the embedding in the projective ambient space. Suppose a configuration matrix [n|q] for a CICY four-fold can be brought, by a combination of row and column permutations, into the equivalent form $\left[\begin{array}{c|cc} \mathbf{n}_F & F & \mathbf{0} \\ \mathbf{n}_B & C & B \end{array}\right]$, (6.2) such that the sub-configuration $[\mathbf{n}_F|F]$ is a one-fold. Then, the CICY four-fold is elliptically fibered with $[\mathbf{n}_F|F]$ representing the T 2 fiber and $[\mathbf{n}_B|B]$ the three-fold base while the entries C describe the structure of the fibration, that is, the way in which the fiber is twisted over the base. We have checked how many CICY configuration matrices from our list can be brought into the form (6.2). It turns out that this is possible for all but 477 of the 921,497 matrices. Moreover, in many cases a given configuration matrix can be brought into the form (6.2) in many different, inequivalent ways, indicating the existence of inequivalent fibrations. Unfortunately, an elliptic fibration structure of this kind does not automatically imply the existence of a section. However, a preliminary analysis shows that the vast majority of manifolds indeed admit fibrations which do have sections. Details of this analysis will be presented in ref. [10]. Summary and outlook In this paper, we have classified all complete intersection Calabi-Yau four-folds (CICYs) in ambient spaces which consist of products of projective spaces. We have found a list of 921,497 configuration matrices which represent all topologically distinct CICYs. This is to be compared with 7890 configuration matrices which were found in the analogous classification for CICY three-folds carried out in ref. [1]. A total of 15813 configuration matrices from our four-fold list describe direct product manifolds of various types but all other matrices represent non-decomposable CICY four-folds. Discarding the cases with Euler characteristic 0, which all correspond to direct product manifolds, the Euler characteristic is in the range 288 ≤ χ ≤ 2610. The list contains 206 different values for the Euler characteristic, a weak lower bound for the number of inequivalent CICY four-folds. This bound can be strengthened by considering additional topological invariants. For example, a preliminary analysis shows that the list contains at least 3737 different sets of Hodge numbers. We have also studied the existence of a particular class of elliptic fibrations, consistent with the projective embedding of the manifolds, and have found that almost all manifolds in our list are elliptically fibered in this way. Often, a given CICY four-fold allows for many fibrations of this kind. A preliminary analysis shows that most of these manifolds admit such fibrations which have sections.
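As an illustration of the fibration criterion (6.2) discussed above, the search for "obvious" elliptic fibrations can be phrased as a search over subsets of ambient-space rows whose touched columns form a one-fold sub-configuration. The sketch below implements this simplified reading of the criterion; it is our own illustration, not the code used for the classification, and it recovers the fibration of the introductory example (1.1).

```python
from itertools import combinations

def obvious_elliptic_fibrations(n, q):
    """Return the row subsets R for which [n_F|F] is a one-fold (a T^2 fiber)."""
    m, K = len(n), len(q[0])
    fibrations = []
    for size in range(1, m):                               # proper, non-empty row subsets
        for R in combinations(range(m), size):
            # columns with a non-zero entry in some fiber row
            S = [a for a in range(K) if any(q[r][a] != 0 for r in R)]
            fiber_dim = sum(n[r] for r in R) - len(S)
            if fiber_dim == 1:                             # [n_F|F] is a Calabi-Yau one-fold
                fibrations.append((R, tuple(S)))
    return fibrations

# The example (1.1): the first two rows (P^1 and P^2) carry a T^2 fiber over P^3.
print(obvious_elliptic_fibrations([1, 2, 3], [[1, 1], [1, 2], [0, 4]]))
```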
We hope that the data set compiled in this paper will be of use in various branches of mathematics and physics. Due to their embedding in projective ambient spaces, CICYs are particularly simple and many of their properties are accessible through direct calculation. In the context of string theory, Calabi-Yau four-folds can be used for string compactifications, for example of type II or heterotic theories to two dimensions or, perhaps most importantly, of F-theory to four dimensions. F-theory compactifications require elliptically fibered Calabi-Yau four-folds, preferably with a section, and we have seen that our manifolds support these properties. We have left a number of more advanced issues for a longer companion paper [10] which is currently in preparation. These include the calculation of Hodge numbers, Chern classes and intersection numbers as well as a more detailed analysis of elliptic fibrations. This additional data will allow us to place a more realistic lower bound on the number of inequivalent CICY four-folds. It will also facilitate applications, particularly in the context of F-theory.
9,501.8
2013-03-07T00:00:00.000
[ "Mathematics", "Physics" ]
Empirical analysis of spatial heterogeneity in the development of China’s National Fitness Plan Purpose The National Fitness Plan (NFP) is a vital initiative aimed at realizing Healthy China 2030. This study assessed spatial heterogeneity in the NFP development and the socioeconomic factors contributing to this inequality. Methods Data from 31 administrative regions in 2021 were analyzed using four NFP development metrics. Spatial autocorrelation was evaluated using global Moran’s I, followed by global and local regression models for non-random spatial patterns. Results National physical fitness exhibited significant clustering (z = 5.403), notably a high-high cluster in East China. The global regression model identified three socioeconomic factors in the geographically weighted regression model: per capita disposable income and the number of public buses positively affected national physical fitness, while general public budget expenditure had a negative impact. Conclusions Persistent unequal NFP development is projected due to income disparities in economically backward regions. To promote the NFP effectively, a cost-efficient strategy includes creating 15-minute fitness circles, especially by establishing public sports facilities in Western China communities. These findings inform policy priorities for advancing the NFP towards Healthy China 2030. Introduction In 2021, the National Development and Reform Commission issued the "the Outline of the 14th Five-Year Plan (2021-2025) for National Economic and Social Development and Vision 2035 of the People's Republic of China" (thereafter, 14th Five-Year Plan) [1].This plan underscores the prioritization of improving public health and advancing the sports industry to achieve Healthy China 2030.Needless to say, the promotion of high-quality fitness-for-all campaigns is a crucial objective of the 14th Five-Year Plan, integral to the national modernization [2].Since the implementation of the National Fitness Program Outline in 1995, China has made initial progress in enhancing public health through sports.According to the National Physical Fitness Monitoring Bulletin, 37.2% of individuals aged seven and above engaged in regular physical activity in 2020 [3].Meanwhile, there remain several shortcomings in establishing all-around public services for national fitness.The latest National Fitness Plan (NFP) acknowledges "problems such as the unbalanced development of the national fitness area and the insufficient supply of public services still exist" [4], highlighting key objectives for the NFP going forward.Therefore, investigating regional imbalances and underlying mechanisms holds immense theoretical and practical importance in attaining a superior level of Healthy China 2030. 
Many studies have provided valuable recommendations to facilitate the high-quality development of the NFP.For example, Han's analysis of the NFP (2016-2020) across 31 administrative regions revealed that current NFP-related public policies often lack sport-specific guidelines due to reliance on conventional public administration methods [5].Based on a review of the NFP's achievements and shortcomings during the 13th Five-Year Plan, Lu and Wang have proposed a set of policy suggestions for the NFP [6], including developing an advanced public sports service system.The primary gap in policy recommendations for the NFP development, compared to existing research, is the predominance of qualitative reviews focusing on macro environments and policy decisions.This leads to a lack of quantitative research that can objectively identify the NFP's strengths and weaknesses, especially regarding regional imbalances [4].For instance, the latest NFP sets a goal of having 2.16 social sports instructors per 1,000 people by 2025, but it's unclear if there's a regional imbalance in meeting this target.Addressing this requires more quantitative studies, particularly employing geographical analysis methods. The multifaceted nature of the NFP poses challenges in evaluating its effectiveness and understanding the factors influencing its development.Geographical research has predominantly focused on two aspects of the NFP: spatial disparities in public sports venues and facilities, and spatial heterogeneity in national physical fitness.First, there appear to be spatial disparities in public sports venues and facilities.Wei et al. conducted a spatial-temporal analysis spanning 1995 to 2013, revealing significant spatial autocorrelation in the numbers of public sports venues and facilities, with regions of high density aligning with stronger provincial economies [7].Similarly, Song et al., using Moran's I and cluster analysis in 2014, found fewer public sports venues and facilities in rural areas compared to urban regions, indicating economic influences on this geographical disparity [8].Second, research on national physical fitness across various years (2005, 2010, 2014, and 2015) has consistently demonstrated significant heterogeneity among the 31 administrative regions [9][10][11][12]. There are two critical shortcomings in the existing geographic research on the NFP.First, most studies provide data only up to 2015, making policy evaluation for the NFP's regional development in the 2020s uncertain and potentially less useful.Second, whereas traditional geographical analyses such as Moran's I can detect spatial heterogeneity, they do not explain the underlying mechanisms.In this regard, only one group of researchers has delved into the factors contributing to geographic heterogeneity.In their three separate studies on the provincial-level reports of national physical fitness in 2015, Wang and Hu identified environmental variables and per capita regional GDP as primary determinants [10][11][12].However, these studies have not examined various socioeconomic factors, such as the correlation between accessibility of public sports venues and facilities and individuals' physical activity time [13], which directly influences physical fitness.Hence, it is necessary to conduct a broad analysis of various socioeconomic factors related to the NFP. 
Therefore, the purpose of this study was to explore spatial heterogeneity and socioeconomic factors contributing to the NFP's unequal development.By identifying key socioeconomic determinants, this study aims to establish a theoretical foundation for the coordinated development of the NFP. Evaluation variables Based on the premise of using the most recent data available, this study analyzed the 2021 data from the 31 administrative regions of China Mainland.According to the eight developmental goals outlined in the NFP [4], we chose the following four variables to serve as the development metrics for the NFP: (i) the number of people who regularly participate in physical activity, (ii) the passing rate of national physical fitness standards, (iii) the number of public sports venues and facilities, and (iv) the number of social sports instructors.These development metrics serve as dependent variables in the analysis. This study investigates the socioeconomic factors influencing the development of the NFP.First of all, the sports industry has developed in tandem with public expenditure on culture, tourism, sports, and media (x1).The industry development, in turn, enhances the physical and mental wellbeing of Chinese society [14].Regardless of sociopolitical structures, the healthcare sector and public services predominantly depend on funding from general public budget expenditure (x2).Likewise, the growth in per capita disposable income (x3) drives the expansion of the sports industry [15].Generally, when individuals have higher discretionary income, their engagement in sports consumption tends to rise [16], potentially boosting sports participation and physical fitness.Meanwhile, economic progress, such as the GDP expansion (x4), and the sports industry can be viewed as a coupling system [17].Additionally, the sports industry falls within the tertiary sector, and the value added by this sector (x5) can reflect the development of sports workers and infrastructure [18].Collectively, these five factors represent the economic-related independent variables. 
In terms of social factors influencing the NFP's development, we analyzed six factors.Population density directly influences participation in leisure-time physical activity.Spatially, East China, representing about one-third of the national population and GDP output, shows a higher prevalence of regular physical activity compared to other regions [19].Similarly, individuals in urban areas tend to participate in physical activity more frequently than those in rural areas [19].Hence, the number of residents (x6) can act as a geosocial predictor influencing both the number of people who regularly participate in physical activity (y1) and the passing rate of national physical fitness standards (y2).In market economics, wages are determined by the interplay of supply and demand.The number of social sports instructors (y4) can be influenced by the average wage of sports professionals (x7), especially considering the significant shortage of skilled practitioners in many sectors of the sports industry [20].Sports-related consumption falls under discretionary expenses and can be influenced by the consumer price index.An increase in the consumer price index may lead to reduced spending on non-essential items.In this context, we investigate the potential impact of the regional consumer price index (x8).An abnormal index could potentially alter consumer behaviors in sports consumption, thereby affecting the development of the NFP.In Chinese contexts, the availability of public transportation can significantly influence the frequency of regular physical activity [21].Hence, we consider the number of public buses in operation (x9) as a common indicator in our study.Urban areas (x10) and per capita public recreational green space (x11) are suggested factors that influence the physical activity patterns of urban residents.Growing urbanization is often accompanied by increased health consciousness [22], expanded coverage of 15-minute fitness circles and greenway trails [23], and a more advanced regional sports industry [17], all contributing to the advancement of the NFP. Table 1 outlines the variables utilized in this study.Data pertaining to the development metrics of the NFP can be sourced from provincial and municipal physical fitness monitoring bulletins, while socioeconomic statistics are available in the China Statistical Yearbook.The raw data that support the conclusions of this study are available on figshare (DOI: 10.6084/m9.figshare.25152299.v1). Spatial analyses The spatial analyses were conducted using ArcGIS Pro 3.2.1 (Esri Inc., Redlands, CA, USA).Initially, we assessed whether the dependent variables displayed random spatial patterns by calculating the global Moran's I.According to Tobler's First Law of Geography, "everything is [geographically] related to everything else, but near things are more related than distant things" [24].Hence, for the computation process, we opted for the common inverse distance method using Euclidean distance metrics.Upon identifying significant spatial autocorrelation, we utilized the Anselin local Moran's I to examine clusters and outliers. 
Then, the identified spatial heterogeneity was quantified by fitting regression equations. We fitted a global regression model using ordinary least squares (OLS) after transforming all variables by taking their common logarithmic values to account for measurement dimension differences. The purpose of the OLS analysis was to exclude non-significant variables for the subsequent local regression model, following a criterion-based approach for objective selection [25]. Our initial steps involved conducting a correlation analysis between the dependent and independent variables. Non-significant variables resulting from the correlation analysis were excluded from subsequent analysis. Next, all remaining independent variables were included in the OLS model. We then iteratively removed the independent variable (typically the one with the smallest absolute t-statistic) that yielded the most substantial improvement in the Akaike Information Criterion (AIC) value when eliminated from the model. This process continued until no further removal of independent variables led to a favorable change in the AIC value compared to the previous iteration. At this point, all remaining independent variables were statistically significant, and no issues of multicollinearity (i.e., variance inflation factor < 10) were detected. Effectively, this model selection process argues for a theory that fits the facts (i.e., empirical data), not the contrary. Subsequently, we fitted a local geographically weighted regression (GWR) model, which can be represented mathematically as Eq (1): $y_i = \beta_0(u_i, v_i) + \sum_{j} \beta_j(u_i, v_i)\, x_{ij} + \varepsilon_i$, (1) where $y_i$ is the dependent variable of the ith observation; $(u_i, v_i)$ are the geographical coordinates of the ith observation; $\beta_0(u_i, v_i)$ is the location-specific intercept; $\beta_j(u_i, v_i)$ is the regression parameter of the jth independent variable for the ith observation, which is a function of geographical location; $x_{ij}$ is the jth independent variable; and $\varepsilon_i$ is the random error of the ith observation. Finally, the residuals of the GWR model were examined using the global Moran's I. Spatial autocorrelation The results of the spatial autocorrelation analysis are summarized in Table 2. Based on the z-scores, only the passing rate of national physical fitness standards (y2) showed significant positive spatial autocorrelation (p < 0.001), while the other dependent variables exhibited complete spatial randomness (p > 0.05). This finding suggests that, in 2021, the passing rate of national physical fitness standards was autocorrelated with the passing rates in nearby regions. We proceeded with a local cluster and outlier analysis on y2, as illustrated in Fig 1. The analysis revealed statistically significant z-scores for Xinjiang (z = 4.44), Jiangsu (z = 2.36), Anhui (z = 2.33), Zhejiang (z = 2.23), Shanghai (z = 1.88), Fujian (z = 1.65), Henan (z = 1.62), Shanxi (z = -1.75), and Shandong (z = -2.18) at the p < 0.05 level. These results indicated three distinct spatial patterns. First, Xinjiang formed a low-low cluster of the NFP, surrounded by provinces with low passing rates of national physical fitness standards. Second, spatial heterogeneity was observed in Shanxi and Shandong, suggesting that neighboring provinces' NFP development was influenced by specific factors. Third, the high-high cluster was predominantly situated in East China, including Jiangsu, Anhui, Zhejiang, Shanghai, and Fujian, indicating a prominent zone for NFP development.
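The same pipeline can be reproduced outside ArcGIS. The sketch below is an illustrative Python version (the study itself used ArcGIS Pro 3.2.1): inverse-distance spatial weights and global Moran's I via libpysal/esda, and AIC-guided backward elimination of the log-transformed predictors via statsmodels. All variable names and data objects (passing_rate, predictors, coords) are placeholders, not the study's actual files.

```python
import numpy as np
import statsmodels.api as sm
from libpysal.weights import DistanceBand
from esda.moran import Moran

def global_morans_i(values, coords):
    # inverse-distance weights: w_ij = d_ij ** alpha with alpha = -1
    w = DistanceBand(coords, threshold=np.inf, binary=False, alpha=-1.0)
    mi = Moran(values, w)
    return mi.I, mi.z_norm, mi.p_norm

def backward_elimination_by_aic(y, X):
    """Drop one predictor at a time while the AIC keeps improving."""
    kept = list(X.columns)
    best_aic = sm.OLS(y, sm.add_constant(X[kept])).fit().aic
    improved = True
    while improved and len(kept) > 1:
        improved = False
        # try candidates in order of smallest absolute t-statistic first
        for col in sorted(kept, key=lambda c: abs(
                sm.OLS(y, sm.add_constant(X[kept])).fit().tvalues.get(c, np.inf))):
            trial = [c for c in kept if c != col]
            aic = sm.OLS(y, sm.add_constant(X[trial])).fit().aic
            if aic < best_aic:
                best_aic, kept, improved = aic, trial, True
                break
    return sm.OLS(y, sm.add_constant(X[kept])).fit(), kept

# Usage sketch (log10-transformed variables, as in the text):
# y = np.log10(df["passing_rate"]); X = np.log10(df[["x2", "x3", "x4", "x5", "x9"]])
# I, z, p = global_morans_i(y.values, coords)      # coords: (31, 2) array of centroids
# model, selected = backward_elimination_by_aic(y, X)
```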
OLS model Based on the results of the correlation analysis, variables x2, x3, x4, x5, and x9 merit further investigation. Table 3 summarizes the model selection process. We proceeded by removing one variable at a time over four iterations until all three remaining independent variables showed statistical significance without multicollinearity. Surprisingly, x2's coefficient was negative, indicating that higher general public budget expenditure correlated with a decrease in the passing rate of national physical fitness standards. GWR model Based on the OLS model, three independent variables (x2, x3, and x9) were included in the GWR model. The z-score of the residuals from the GWR model was 0.026 (p = 0.979), indicating a random distribution of residuals. Per capita disposable income had a positive effect, albeit diminishing gradually from southeast to northwest regions. Certain regions exhibited lower effects compared to the national average, including Sichuan, Ningxia, Inner Mongolia, Liaoning, Jilin, Heilongjiang, Qinghai, Gansu, Tibet, and Xinjiang. Notably, Tibet and Xinjiang demonstrated minimal influence of per capita disposable income on the NFP. Regarding the influence of public buses, the mean coefficient for this factor, as shown in Table 5, was 0.048. However, individual coefficients varied significantly across the 31 administrative regions, ranging from -0.0172 to 0.144. This variability indicates a substantial regional difference in the impact of public buses on the NFP. Notably, regions like Xinjiang, Tibet, Gansu, Qinghai, Sichuan, and Yunnan relied heavily on public buses, highlighting the importance of the public bus system in Western China's NFP development. Conversely, coastal regions showed a decreasing influence of public buses. Discussion This study makes two original contributions to the policy evaluation of the NFP. First, we assessed four key metrics of NFP development as of 2021, revealing unequal development in the passing rate of national physical fitness standards. Accordingly, future NFP objectives should prioritize enhancing physical fitness levels in underdeveloped regions of Western China. Second, the GWR model introduces a novel perspective, emphasizing that addressing unequal development requires more than efforts from sports regulators and the sports industry alone. To our knowledge, this is the first analysis of its kind. Both central and local governments must increase fiscal expenditures to build public infrastructure and ensure sustained economic growth for advancing the NFP.
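For the local model, a corresponding illustrative sketch using the mgwr package is given below: bandwidth selection, GWR fitting with the three retained predictors, and a Moran's I test on the GWR residuals. The study's own computations were done in ArcGIS Pro, so this is an assumed re-implementation; the inputs are again placeholders for the 31-region data.

```python
import numpy as np
from mgwr.sel_bw import Sel_BW
from mgwr.gwr import GWR
from libpysal.weights import DistanceBand
from esda.moran import Moran

def fit_gwr(coords, y, X):
    """coords: (n, 2) array; y: (n, 1) array; X: (n, k) array of predictors."""
    bw = Sel_BW(coords, y, X).search()        # optimal bandwidth (AICc-based by default)
    return GWR(coords, y, X, bw).fit()

def residual_autocorrelation(coords, results):
    w = DistanceBand(coords, threshold=np.inf, binary=False, alpha=-1.0)
    mi = Moran(results.resid_response.flatten(), w)
    return mi.z_norm, mi.p_norm               # the study reports z = 0.026, p = 0.979

# Usage sketch:
# results = fit_gwr(coords, y.values.reshape(-1, 1), X[["x2", "x3", "x9"]].values)
# print(results.localR2.mean(), residual_autocorrelation(coords, results))
```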
Based on the global Moran's I, there was no regional imbalance observed in the number of people who regularly participate in physical activity, public sports venues and facilities, or social sports instructors.To our knowledge, no previous study has examined the geographic features of the numbers of people who regularly participate in physical activity.A study did, however, identify significant spatial heterogeneity in physical activity participation levels across the 31 administrative regions, although the overall trend indicated a consistent increase from 2010 to 2018 [26].Taken together, the present result indicates that the nationwide fitness-for-all campaigns in recent years, including the annual National Fitness Day on August 8th, have effectively heightened interest in physical fitness [2].Similarly, no spatial heterogeneity was found in the number of public sports venues and social sports instructors.The implementation of China's New Urbanization Plan and Document No. 46 in 2014 notably spurred the growth of the sports industry [17].Between 2015 and 2021, the annual construction rate of sports facilities surged impressively by 17.48% [17].These national policies have facilitated the construction of sports facilities not only in affluent areas of East China but also in less developed regions of Western China.However, while the provincial-level data didn't reveal spatial heterogeneity, disparities exist within cities [27][28][29] and between urban and rural areas [8].Addressing these localized disparities requires targeted urban planning strategies. The passing rate of national physical fitness standards in 2021 exhibited a distinct regional clustering pattern, persisting from the early 2010s [9,10].As a matter of fact, the z-score of Moran's I reveals a noticeable increase in clustering from 2014 (z = 2.794) to 2021 (z = 5.403) [9].This escalation in clustering signifies a deepening divide in the development of national physical fitness standards, highlighting a concerning trend of worsening regional disparities.These findings underscore the critical policy implications addressed by this study. 
In 2021, regions in East China exhibited the highest passing rates for national physical fitness standards.Conversely, Northwest China exhibited a noticeable clustering pattern with lower passing rates, while Southwest China as a whole displayed passing rates below the national average.The GWR model sheds light on these regional inequalities through three distinct socioeconomic pathways.First, per capita disposable income was found to potentially promote higher passing rates for national physical fitness standards.GWR coefficients indicated that in 2021, per capita disposable income had minimal impact on national physical fitness in Gansu (β = 0.030), Xinjiang (β = -0.005),and Tibet (β = 0.001).However, this economic factor significantly influenced Central China, particularly in East and Southern China.Despite ongoing economic inequities, underdeveloped regions in Western China have shown consistent improvement in economic conditions.Government data reveals that Tibet's per capita disposable income increased from 10,730 CNY in 2014 to 24,950 CNY in 2021.Yet, despite these economic gains, national physical fitness levels in Western China remain stagnant.This suggests a potential threshold for economic development: once surpassed, economic growth may significantly impact national physical fitness.This notion finds support in Niu et al.'s work [30], which explored the association between per capita GDP and public health in 30 administrative regions from 2000 to 2017.Their study identified heterogeneous threshold effects of economic growth, with Central and Western China reaching thresholds at 14,691 and 12,683 USD per capita GDP, respectively.Above these thresholds, economic growth notably influences public health outcomes.In 2021, Tibet's per capita GDP stood at 8,809 USD, suggesting that despite economic progress, it may not have crossed the critical threshold for substantial impacts on public health.Thus, economic developments, such as per capita disposable income, may not have significantly affected Tibet's national physical fitness as examined in this study. Considering the current domestic economic landscape, it is crucial for policymakers to acknowledge the ongoing unequal development of national physical fitness.Projections indicate that this inequality will persist in the foreseeable future.For instance, assuming a 5% annual growth rate in regional GDP, it is estimated that Tibetans will have to wait until 2029 for their per capita GDP to surpass the threshold associated with positive impacts on public health.To align with the physical fitness objectives outlined in the 14th Five-Year Plan, we recommend that the central government takes proactive measures.This includes allocating additional fiscal resources and implementing supportive policies in economically backward regions such as Northeast, North, Southwest, and Northwest China.These initiatives are vital for enhancing GDP output and income growth, ultimately contributing to improved national physical fitness outcomes. 
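A back-of-the-envelope check of the projection quoted above can be made from the figures given in the text: Tibet's per capita GDP of 8,809 USD in 2021, the estimated Western China threshold of 12,683 USD, and an assumed 5% annual growth rate. The short calculation below is our own arithmetic; it simply reproduces the "until 2029" estimate.

```python
import math

gdp_2021 = 8809          # USD, Tibet per capita GDP in 2021
threshold = 12683        # USD, estimated threshold for Western China (Niu et al.)
growth = 0.05            # assumed annual growth rate

years_needed = math.ceil(math.log(threshold / gdp_2021) / math.log(1 + growth))
print(2021 + years_needed)   # -> 2029
```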
Second, the number of public buses played a critical role in advancing the NFP with a particularly pronounced impact in the expansive territories of Northwest and Southwest China.The correlation between accessible public transportation and increased leisure-time physical activity, enhanced physical fitness, and improved health is evident across various sociodemographic groups [31][32][33].The transportation-fitness dynamics were notably accentuated in the Northwest and Southwest regions due to two primary reasons.Geographically, Western China's intricate topography and diverse climate present challenges to effective urban planning and comprehensive coverage of bus transit networks.Additionally, the absence of subway networks and light rail transit in Western China due to geographical constraints further underscores the reliance on public buses among residents, which in turn affects their engagement in physical activity [34].This regional disparity in transportation infrastructure can also be attributed to economic development strategies and arguably, a result of human selection.The closer one is to East China, the greater the socioeconomic opportunities, which cannot be denied. Conversely, it is easy to mistake a coefficient close to zero as indicating a lack of relevance for public transportation overall.According to the GWR coefficients, one might infer that public bus transit in East China did not significantly impact national physical fitness.However, the availability of various transportation options, such as private cars and light rail systems, especially in urban East China, provides more choices.These additional options can offset the reliance on public bus transportation.As studies in developed cities in East China have demonstrated, access to public transportation or convenient and timely access to sports venues and facilities continues to play a significant role in promoting physical activity [35,36].Therefore, it will remain a pivotal factor in the development of the NFP. In our view, there are two policy solutions to enhance access to physical activity.Central and local governments can either increase investment in public transportation or prioritize the establishment of 15-minute fitness circles to address residents' physical activity needs.We advocate for the latter, especially in Western China.The rationale behind this preference is that while convenient access to sports venues via public transit is important for promoting leisure-time physical activity, the underlying mechanism is the proximity of one's residence to gyms and the time required to reach them.Wang and colleagues' study revealed that residents who perceived access to public transport stations within a 10-15 minute walk from their homes were 3.18 times more likely to meet the physical activity guidelines recommended by the WHO [35].An efficient and cost-effective approach could involve constructing public or privately run gyms within a 15-minute radius of neighborhoods.Initial empirical data support the efficacy of this approach in fostering active lifestyles [13]. 
Third, it was observed that general public budget expenditure had a negative impact on the NFP.This finding, although unexpected, can be attributed to the general nature of the indicator.According to Chinese legislation, general public budget expenditure encompasses a wide range of expenses such as general public service, foreign affairs, national defense, agriculture, environmental protection, education, science and technology, culture, health, and sports.Therefore, it does not directly measure expenses specifically related to sports.Similarly, public expenditure on culture, tourism, sports, and media (x1) was included in the modeling despite not directly reflecting sports-related expenses.These variables were incorporated based on theoretical considerations of their contribution to public sports services.The negative correlation suggests that the expenditures did not lead to the expected outcomes in public sports services, an issue noted in previous studies [37,38].This aspect warrants further investigation. Due to data availability, the socioeconomic factors examined in this study do not encompass all potential contributors to the NFP's development.For example, the rising popularity of marathons has led to a notable increase in the participation in running activities [39].Particularly, the expanding middle class, projected to become the largest social stratum in the future, views marathon running as an ideal activity aligned with their socioeconomic status [40].This trend is characterized by a heightened awareness of health and wellness.Hence, the number of marathon events could serve as a distinctive metric of the NFP's impact and may be associated with the passing rate of national physical fitness standards.However, the absence of data precludes the inclusion of this factor in our analysis.Consequently, there is clear room for enhancing the current model in future studies. The results from China's national fitness campaign may also hold value for other developing nations seeking to advance public health goals through sports.In our statistical analysis, we employed log transformations on the data to accommodate the varying dimensions of the included variables.This log-log model offers a clear perspective on the magnitude (i.e., coefficient in the regression model) of each contributing variable.In essence, our findings reveal that the impact of public transit in China is just as crucial as economic growth.This implies that making sports venues easily accessible could promote physical fitness.For other developing nations, the key takeaway is that investing in fiscal expenditure or encouraging private investment in close-to-community sports facilities could prove to be an effective strategy for enhancing public health within a constrained fiscal budget. 
In conclusion, this study offers an up-to-date assessment of the NFP's development across 31 administrative regions. Among the NFP's three key development metrics, only the passing rate of national physical fitness standards in 2021 exhibited spatial heterogeneity. Through the GWR analysis, we identified two significant positive socioeconomic dynamics. From an economic standpoint, the persistent income gap between Western and Eastern China indicates a continued regional imbalance in the NFP's progress in the foreseeable future. Addressing the socioeconomic disparities affecting NFP development requires comprehensive measures beyond the sports sector. It necessitates proactive involvement from the central government, offering both fiscal support and policy direction. In our view, constructing 15-minute fitness circles, especially in economically less developed regions, can be a viable strategy to enhance NFP participation. Ultimately, these policy suggestions offer fresh perspectives toward achieving the long-range objectives outlined in the 14th Five-Year Plan.

Attempting to remove x2 in model 3 resulted in a decline in model performance, reflected in a 12-point difference in AIC values. Following our model selection criteria, we chose model 3 as the final OLS model. Table 4 presents the regression estimates. Overall, the model was statistically significant (Joint F-Statistic = 15.461) and could explain 63.2% of the variance in the passing rate of national physical fitness standards in 2021. The error term (i.e., the p-value of the Jarque-Bera statistic > 0.05) suggests that the residuals of the OLS model were normally distributed, indicating that the model met the statistical assumption.

Table 2. Global spatial autocorrelation. https://doi.org/10.1371/journal.pone.0305397.t002

Table 5 summarizes the effect of each independent variable on the passing rate of national physical fitness standards in 2021. The coefficients in the GWR model varied widely across regions. Overall, the model explained 81.8% of the variance in the passing rate, showing better model fit than the global model. Fig 2 illustrates the spatial influence of each independent variable on the NFP's development as of 2021. Detailed statistics are provided in S1 Table. We delve into the specific effects of x3 (Fig 2B) and x9 (Fig 2C) in this section, reserving further discussion on x2's (Fig 2A
5,978
2024-06-13T00:00:00.000
[ "Geography", "Economics", "Sociology" ]
An algebraic topological method for multimodal brain networks comparisons

Understanding brain connectivity is one of the most important issues in neuroscience. Nonetheless, connectivity data can reflect either functional relationships of brain activities or anatomical connections between brain areas. Although both representations should be related, this relationship is not straightforward. We have devised a powerful method that allows different operations between networks that share the same set of nodes, by embedding them in a common metric space, enforcing transitivity on the graph topology. Here, we apply this method to construct an aggregated network from a set of functional graphs, each one from a different subject. Once this aggregated functional network is constructed, we again use our method to compare it with the structural connectivity to identify particular brain

Introduction

In the last decade, the use of advanced tools derived from neuroimaging and complex networks theory has significantly improved our understanding of brain functioning (Sporns, 2011). Notably, connectivity-based methods have had a prominent role in characterizing normal brain organization as well as alterations due to various brain disorders (Varela et al., 2001; Stam and van Straaten, 2012; Stam, 2014). Most of the recent works aim to quantify the role of connectivity in the communication abilities of neural systems. However, the very notion of connectivity is controversial, since data used in brain connectivity studies can reflect functional neural activities (electrical, magnetic or hemodynamic/metabolic) or anatomical properties (Varela et al., 2001; Bullmore and Sporns, 2009). Neuroanatomical connectivity is meant as the description of the physical connections (axonal projections) between two brain sites (Bullmore and Sporns, 2009), whereas functional connectivity is defined as the estimated temporal correlation between spatially distant neurophysiological activities such as electroencephalographic (EEG), magnetoencephalographic (MEG), functional magnetic resonance imaging (fMRI) or positron emission tomography (PET) recordings (Varela et al., 2001). In recent years, the concept of "brain networks" has become fundamental in neuroscience (Stam and Reijneveld, 2007; Bullmore and Sporns, 2009; Stam and van Straaten, 2012; Stam, 2014). Within this framework, nodes stand for different brain regions (e.g., parcellated areas or recording sites) and links indicate either the presence of an anatomical path between those regions or a functional dependence between their activities. In the last few years, this representation of the brain has made it possible to visualize and describe its non-trivial topological properties in a compact and objective way. Nowadays, the use of network-based analysis in neuroscience has become essential to quantify brain dysfunctions in terms of aberrant reconfiguration of functional brain networks (Stam and Reijneveld, 2007; Stam and van Straaten, 2012; Stam, 2014). Experimental evidence has revealed, for instance, alterations in functional and anatomical brain networks in normal cognitive processes, across development, and in a wide range of neurological diseases (see Bullmore and Sporns, 2009; Stam, 2014; and references therein). Despite their evident interplay, comparison between anatomical and functional brain networks is not straightforward (Deco et al., 2011; Nicosia et al., 2014).
Theoretical studies provide support for the idea that structural networks determine some aspects of functional networks (Deco et al., 2011), but it is less clear how the anatomical connectivity supports or facilitates the emergence of functional networks. Although nodes with similar connection patterns tend to exhibit similar functionality, the functionality of an individual neural node is strongly determined by the pattern of its interconnections with the rest of the network (Nicosia et al., 2014). The correspondence between functional and structural networks thus remains an active research area (Honey et al., 2007, 2009, 2010). A better understanding of how anatomical scaffolds support functional communication of brain activities is necessary to better understand normal neural processes, as well as to improve identification and prediction of alterations in brain diseases. In this paper we address this relationship between anatomical and functional connectivity. In previous studies, the correspondence of these networks has often been assessed by differences, in a Euclidean space, between vectors containing connectivity measures such as the clustering coefficient, shortest path length, degree distribution, etc. Here, we propose a radically different framework for studying brain connectivity differences. Instead of extracting a vector of features for each network (anatomical or functional), we jointly embed all of them in a common metric space that allows straightforward comparisons. Before embedding the functional and the anatomical networks into the common metric space, we aggregate a group of subjects (e.g., functional networks) according to Simas et al. (submitted) to obtain a group representation network. The method employed in this work allows connected components to be preserved and a common underlying network structure to be identified among different subjects. Our approach may provide useful insight for the analysis of multiple networks obtained from multiple brain modalities or groups (healthy volunteers vs. patients, for instance).

fMRI and DTI Data

In this study we consider anatomical and functional brain connectivities, extracted from diffusion-weighted MRI (DW-MRI) and fMRI data, respectively, defined on the same brain regions. Brain images were partitioned into the 90 anatomical regions (N = 90 nodes of the networks) of the Tzourio-Mazoyer brain atlas (Tzourio-Mazoyer et al., 2002) using the automated anatomical labeling method. The anatomical connectivity network is based on the connectivity matrix obtained from Diffusion Magnetic Resonance Imaging (DW-MRI) data from 20 healthy participants, as described in Iturria-Medina et al. (2008). The elements of this matrix represent the probabilities of connection between the 90 brain regions of interest. These probabilities are proportional to the density of axonal fibers between different areas, so each element of the matrix represents an approximation of the connection strength between the corresponding pair of brain regions. The functional brain connectivity was extracted from BOLD fMRI resting-state recordings obtained as described in Valencia et al. (2009). All acquired brain volumes were corrected for motion and differences in slice acquisition times using the SPM5 software package.
All fMRI data sets (segments of 5 min recorded from healthy subjects) were co-registered to the anatomical data set and normalized to the standard MNI (Montreal Neurological Institute) template image, to allow comparisons between subjects. As for the DW-MRI data, normalized and corrected functional scans were sub-sampled to the anatomical labeled template of the human brain (Tzourio-Mazoyer et al., 2002). Regional time series were estimated for each individual by averaging the fMRI time series over all voxels in each of the 90 regions. To eliminate low-frequency noise (e.g., slow scanner drifts) and higher-frequency artifacts from cardiac and respiratory oscillations, the time series were digitally filtered with a finite impulse response (FIR) filter with zero-phase distortion (bandwidth 0.01-0.1 Hz) as in Valencia et al. (2009). A functional link between two time series x_i(t) and x_j(t) (normalized to zero mean and unit variance) was defined by means of the linear cross-correlation coefficient, computed as r_{ij}(0) = \langle x_i(t)\, x_j(t) \rangle, where \langle \cdot \rangle denotes the temporal average. For the sake of simplicity, we only considered here correlations at zero lag. To determine the probability that correlation values are significantly higher than what is expected from independent time series, the r_{ij}(0) values (denoted r_{ij}) were first variance-stabilized by applying the Fisher Z transform. Under the hypothesis of independence, Z_{ij} has a normal distribution with expected value 0 and variance 1/(df_{ij} − 3), where df is the effective number of degrees of freedom (Bartlett, 1946; Bayley and Hammersley, 1946; Jenkins and Watts, 1968). If the time series consist of independent measurements, df_{ij} simply equals the sample size, N. Nevertheless, autocorrelated time series do not meet the assumption of independence required by the standard significance test, yielding a greater Type I error (Bartlett, 1946; Bayley and Hammersley, 1946; Jenkins and Watts, 1968). In the presence of autocorrelated time series, df must be corrected by an approximation that accounts for the autocorrelation of the series.

Networks Normalization

From Equation (1), our network weights lie in a non-normalized interval Z_{ij} ∈ [a, b] ⊂ R. In order to apply the framework described in Simas and Rocha (2015), we normalize the network weights into the unit interval I = [0, 1] by means of a single linear function, where ε is in general set to 0.01 in order to avoid merging and isolating vertices with weights at the boundaries of Z_{ij} ∈ [a, b]. As proved in Simas and Rocha (2015), since the normalization is done by a single linear function, it does not affect network properties.

fMRI Networks Aggregation and Embedding

Among the many ways to aggregate a group of networks, here we employ a topological algebraic approach. The networks in the group possess the same nodes but different edge values and can be mathematically represented by weighted graphs G = (N, E), where N is the set of nodes representing the brain ROIs (N = 90 in this study) and E is the set of edge values (connections) between ROIs, e.g., ∀e_{i,j} ∈ E : e_{i,j} ∈ [0, 1] in the proximity space or ∀d_{i,j} ∈ E : d_{i,j} ∈ R_0^+ ∪ {+∞} in the distance space. For the sake of simplicity, we denote a network with the same notation we use for the set of nodes N, i.e., a set of n networks (e.g., a group of subjects) is represented by {N_k} with k ∈ {1, 2, 3, ..., n}. One possible way to aggregate a group of n networks is simply to average the homologous edge values, obtaining in this way a group-representative network N* with edges e^{*}_{i,j} = (1/n) \sum_{k=1}^{n} e^{k}_{i,j} (Equation 3), where e^{k}_{i,j} is the edge e_{i,j} from network N_k.
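Before moving to the algebraic aggregation, the preprocessing and link-construction steps described above can be sketched as follows. This is a minimal sketch rather than the authors' implementation: the sampling interval, the exact form of the linear rescaling, and the handling of the ε margin are assumptions, and the effective degrees-of-freedom correction is not reproduced.

```python
# Minimal sketch, not the authors' code: zero-phase FIR band-pass filtering,
# zero-lag functional links with the Fisher Z transform, and linear rescaling
# of the resulting weights into [eps, 1 - eps].
import numpy as np
from scipy.signal import firwin, filtfilt

def bandpass_zero_phase(x, tr, low=0.01, high=0.1, numtaps=31):
    """Zero-phase FIR band-pass (0.01-0.1 Hz); tr is the repetition time in seconds."""
    fs = 1.0 / tr
    taps = firwin(numtaps, [low, high], pass_zero=False, fs=fs)
    return filtfilt(taps, [1.0], x)          # forward-backward filtering removes phase lag

def functional_links(ts):
    """Zero-lag correlations r_ij(0) between rows of ts (regions x time), Fisher-Z transformed."""
    z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
    r = (z @ z.T) / ts.shape[1]              # temporal average of x_i(t) * x_j(t)
    np.fill_diagonal(r, 0.0)
    return np.arctanh(np.clip(r, -0.999999, 0.999999))

def normalize_weights(Z, eps=0.01):
    """One plausible linear map of weights from [a, b] into [eps, 1 - eps] (assumed form)."""
    a, b = Z.min(), Z.max()
    return eps + (1.0 - 2.0 * eps) * (Z - a) / (b - a)
```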
Another way to aggregate networks, as explained in Simas et al. (submitted), is by considering all networks as a multilayer network (often called a multiplex), which can be represented as a fourth-order tensor (Simas et al., submitted). This tensor can be represented as an extended matrix (Sole-Ribalta et al., 2013). The work of Simas and Rocha (2015) introduces a framework to aggregate networks in an algebraic way, relating it to fuzzy logic reasoning, and in Simas et al. (submitted) this work was extended to multilayer networks. In order to work algebraically with networks we have to set an algebra (defined as a vector space equipped with a bilinear product). This algebra allows us to perform algebraic operations with networks in the same way we perform algebraic operations in other contexts with other algebras (such as adding and multiplying real numbers). In short, a network can be represented by an adjacency matrix and a multilayer network by a tensor. Considering a set of tensors working under the algebra L = (I, ⊕, ⊗), where the weights (tensor entries) of the tensors lie in I ⊆ R (a subset of the extended real line) and ⊕ and ⊗ are two binary operators, we can represent a multilayer network with a tensor T in this algebra. In Simas et al. (submitted) we have shown, for the particular case of multiplex networks in which layers are connected with weights w_{i,i,L_k,L_j} = 1 (in the proximity space), that the representative group network (e.g., functional) can be represented by N* in the distance space (see below and Equation 6), as given by Equation (4), with the respective embedding given by Equation (5), where N* is defined in Equation (4) and r is the convergence parameter (Simas and Rocha, 2015; Simas et al., submitted). Figure 1 summarizes the metric embedding of a multiplex network described above.

FIGURE 1 | Schematic representation of the main steps for the described networks aggregation and metric embedding (defined here for the algebra L).

Embedding a network of networks or, in our specific case, a multiplex fMRI network, allows us to determine which edges in the several layers contribute to the aggregation. We can therefore determine the subjects that contribute more, less, or not at all to the aggregated network, and identify in each subject the sub-graphs for which they may have the highest contribution. For our particular case, we embed our networks using the Metric Closure (Simas and Rocha, 2015) defined by the algebra L = (R_0^+ ∪ {+∞}, min, +), where ⊕ = min and ⊗ = +. The metric closure, or metric embedding of a given network into a metric space, is a generalization of the All Pairs Shortest Paths problem (APSP), as shown in Simas and Rocha (2015). In this case the Johnson algorithm can be used to calculate the metric closure (Johnson, 1977). Note that to calculate the metric closure (based on the Johnson algorithm) of a network we have to translate our networks from a proximity space into a distance space. There are many possible mappings from a similarity space into a distance space; see Simas and Rocha (2015). Applying Equation (6) to all network weights, w_{i,j} ∈ [0, 1] (for more details see Simas and Rocha, 2015), we obtain the isomorphic distance network with weights d_{i,j} = −log(w_{i,j}). The formalism behind the metric closure should not be confused with the formalism of tropical algebraic geometry (Pachter and Sturmfels, 2004; Theobald, 2006).
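Because the metric closure is, as noted above, a generalization of the all-pairs shortest paths problem, it can be sketched with an off-the-shelf APSP routine. The snippet below is a simplified single-network illustration under the −log isomorphism of Equation (6); it does not reproduce the full multiplex tensor aggregation of Equations (4)-(5), and the use of SciPy's Johnson solver is an implementation choice, not the authors' code.

```python
# Simplified sketch: proximity -> distance mapping (Eq. 6) and metric closure
# via all-pairs shortest paths (Johnson's algorithm), plus the simple-averaging
# baseline of Equation (3). Not the authors' MATLAB implementation.
import numpy as np
from scipy.sparse.csgraph import shortest_path

def average_aggregation(layers):
    """Group-representative network by averaging homologous edges (Eq. 3)."""
    return np.mean(np.stack(layers), axis=0)

def metric_closure(W):
    """W: proximity weights, assumed in (0, 1) after the eps-normalization;
    returns the metric closure of the network in the distance space."""
    D = -np.log(W)                     # isomorphism d_ij = -log(w_ij), Eq. (6)
    np.fill_diagonal(D, 0.0)
    # min-plus transitive closure == all-pairs shortest paths over the distance graph
    return shortest_path(D, method="J", directed=False)
```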
Both formalisms employ the same algebra for the isomorphism d_{i,j} = ϕ(x) = −log(x), which corresponds to a Schweizer-Sklar or Frank t-norm generator with λ = 0 or 1 (see Equation 6 and Simas and Rocha, 2015) under the formalism in Simas and Rocha (2015). The formalism in Simas and Rocha (2015) uses any isomorphism ϕ to set a specific metric on a weighted graph when translated to the isomorphic distance space. A more detailed discussion of the relation between the work of Simas and Rocha (2015) and tropical geometry can be found in Simas et al. (submitted). Embedding networks or multilayer networks allows us (a) to detect clusters of nodes in high-dimensional topological spaces; (b) by projecting the algebraic high-dimensional embedding into 3D, to perform exploratory network analysis; and (c) to preserve the multilayer sub-structures across layers/subjects better than other aggregation methods, such as the specific case of "simple" averaging (Equation 3). Next we compare both methods of aggregation of our fMRI networks, "simple" averaging (Equation 3) and algebraic aggregation (Equation 5, according to Simas et al., submitted), using our proposed method of embedding and comparing networks.

Multimodal Networks Comparison

In general, networks have been compared using statistical measures of local and global properties, such as the clustering coefficient, small-worldness, degree distributions, etc. We can find in the literature some examples of such techniques for comparing multiple networks (Bullmore and Sporns, 2009; Stam, 2014). Our approach in this work is different. After embedding networks into the same metric space defined by the applied algebra, in our case L = (R_0^+ ∪ {+∞}, min, +), we are able to compare them topologically. However, since networks generally come from different modalities (e.g., fMRI and DTI), a preliminary step is required. We need to normalize the embedded edge weight distributions from the different modalities to the same average and variance to remove scale factors. One possible way to normalize both distributions, if we assume normality, is by calculating the z-score of the edge weight distributions (zero mean and unit standard deviation). The embedded networks represent a hyper-grid in a multidimensional space with dimension equal to or below the number of nodes. In order to simplify and gain some visual insight, we can linearly reduce this multidimensional grid to a 3D grid. This can be achieved by applying to the embedded networks any technique for dimensionality reduction, such as linear or nonlinear Multi-Dimensional Scaling (MDS). MDS procedures refer to a set of related ordination techniques used in information visualization, in particular to display the information contained in a distance matrix (Borg and Groenen, 2005). These techniques guarantee, within a given distortion, that the relative distance between nodes is preserved in both the multi-dimensional space and the low-dimensional reduction. Plotting this low-dimensional grid (e.g., in 3D), we can use any statistical technique to fit a continuous surface to the data (see below, Figures 3, 4). It is natural to think that the difference between two surfaces obtained from different networks will emphasize topological differences between the two connectivities. In this work we performed this operation in the multi-dimensional space by subtracting homologous embedded edge weights and taking the absolute value of the difference between the two embedded hyper-grids.
That is, we subtract homologous embedded edges pairwise according to the formula M_{i,j} = | \hat{d}^{fMRI}_{i,j} − \hat{d}^{DTI}_{i,j} | (Equation 7), where \hat{d} denotes the z-scored embedded edge weights of each modality and M is the difference grid in the multi-dimensional space. Because the M-grid represents the difference between the two grids from different modalities (see above), the relative distances between nodes in M (given by Equation 7) should be concentrated at the origin if the nodes are topologically similar, and otherwise widely distributed in the multi-dimensional space. Nodes at a distance from the origin of s standard deviations are statistically different. Moreover, since we z-scored both embedded edge distributions, this gives us some degree of statistical significance when we compare both networks. All nodes that lie outside a hyper-sphere centered at the origin with radius R = s are statistically different. Here we set σ = 1 for both distributions (z-score variables are estimated from the distributions of the embedded weights). Figure 2 illustrates this process.

FIGURE 2 | Topological algebraic networks comparison. Connectivity from different modalities (here fMRI and DTI) is first embedded (black dots on the manifolds indicate the brain nodes) and then compared in a low-dimensional space. Black points outside the sphere correspond to nodes with a topological difference (at a given threshold) between the two modalities.

After applying the same algebra and the metric embedding described above to both the fMRI* and DTI networks, both networks lie in the same metric space and are therefore comparable. Topological differences can be seen visually in a linear reduction to 3D using an MDS technique, which preserves the relative distance between points in the grid (nodes or brain areas).

Results

In Figure 3 we illustrate the results of different aggregation procedures on the ensemble of fMRI networks. Compared with an fMRI connectivity matrix from a single subject (Figure 3A), one can notice the difference between simple averaging across subjects (Figure 3B) and our proposed algebraically and topologically aggregated connectivity network (Figure 3C). It is clear that the averaging procedure tends to blur connectivity values between nodes. In contrast, the topologically algebraic aggregation can preserve components that are common across subjects. Like other multilinear algebra or tensor-based analyses, our approach provides a natural mathematical framework for studying connectivity data with multidimensional structure. For illustrative purposes, we also show the DTI connectivity matrix in Figure 3D. It is worth noticing the similarity of the anatomical connectivity structure with the aggregated (multiplex) connectivity obtained in Figure 3C. Moreover, since each layer encodes the functional network for a given subject, each subject contributes to the tensor aggregation/embedding with some or no connections (edges), as captured by the metric closure, Equation (5). If a layer does not contribute to the aggregation/embedding, we may consider this layer (subject network) an outlier. Moreover, we are also able to identify the specific sub-network contribution (edges) of a given layer to the aggregation/embedding. Low-dimensional embeddings of different aggregated networks are illustrated in Figure 4. High-dimensional data, such as the information contained in the distance matrix obtained for the different networks, can be difficult to interpret. Here, multidimensional scaling (MDS) was used for visualizing the level of similarity of individual nodes of each aggregated network.
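A rough sketch of this comparison step is given below: both embedded (distance-space) networks are z-scored, the difference grid of Equation (7) is formed, and each node's distance from the origin is thresholded; an MDS projection is included for visualization. The per-node radius used here (the Euclidean norm of the node's row in M) is one plausible reading of the hyper-sphere criterion, not necessarily the authors' exact computation, and the scikit-learn MDS call is an implementation choice.

```python
# Sketch of the cross-modality comparison described above; inputs are two
# metric-embedded distance matrices defined on the same 90 ROIs.
import numpy as np
from sklearn.manifold import MDS

def zscore_edges(D):
    """Z-score the off-diagonal embedded edge weights to remove modality scale."""
    mask = ~np.eye(D.shape[0], dtype=bool)
    Z = np.zeros_like(D, dtype=float)
    Z[mask] = (D[mask] - D[mask].mean()) / D[mask].std()
    return Z

def difference_grid(D_fmri, D_dti):
    """Difference grid M (Eq. 7): absolute pairwise difference of z-scored edges."""
    return np.abs(zscore_edges(D_fmri) - zscore_edges(D_dti))

def node_radii(M):
    """Distance of each node from the origin of the M-grid (assumed: row norm)."""
    return np.linalg.norm(M, axis=1)

def project_3d(D, seed=0):
    """Linear reduction of an embedded grid to 3-D with metric MDS, for plotting."""
    mds = MDS(n_components=3, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(D)

# Example: ROIs whose connectivity differs between modalities at a threshold s
# rois = np.where(node_radii(difference_grid(D_fmri, D_dti)) > s)[0]
```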
The MDS algorithm aims to place each node in a low-dimensional space such that the between-node distances are preserved as much as possible. This representation in a low-dimensional space enables an exploratory analysis and makes data analysis algorithms more efficient. Indeed, from the different plots of Figure 4 one can identify brain areas that are topologically close in the aggregated network as those points that are close on the 3D grid. This is clearly illustrated by the MDS representation of the multiplex functional network (Figure 4C). Nodes from the occipital regions form a compact group of topologically close nodes (with similar connectivity structure), as revealed by the blue points depicted in Figure 4C. We also notice that a compact group of nodes is formed by regions of the temporal lobe, putamen and insula, which are indicated by the red circle. Similarly, the anatomical network in Figure 4D clearly displays a natural organization, i.e., nodes of the two hemispheres lie on both sides of the dotted black line. Further, nodes from occipital regions in the anatomical network, indicated by the blue circles (including calcarine, cuneus, precuneus, ...), are distantly located from the group of frontal brain areas indicated by the red marks. Finally, Figure 5 displays the difference grid M in a low-dimensional space. As defined in Equation (7), M corresponds to the relative distance between nodes in networks from different modalities. Differences between brain areas are represented as points widely distributed in the low-dimensional space. Those nodes from different modalities (fMRI and DTI) that share an identical topological structure are located at the origin. The larger the difference in the connectivity structure, the larger the distance from the origin. By setting a threshold s, one can identify brain areas with similar connectivity as those points that lie inside the hyper-sphere of radius s centered at the origin. We consider the regions statistically different for s > 1 and statistically equal for s < 1. This shows that the multiplex algebraic aggregation (green) is more similar algebraically to DTI than the average aggregation (blue) and the single-subject fMRI network (red). The number of brain regions (ROIs) with similar anatomical and functional connectivity is given in Figure 5B as a function of the threshold s. Curves correspond to the number of regions inside hyper-spheres of various radii. We notice that the number of regions differs as a function of the type of aggregated network. It is worth mentioning that the differences above s standard deviations are the important ones, since it is above this threshold that the ROIs (nodes) become statistically different when comparing networks. In our example, the fluctuations below one standard deviation may suggest some trend, but all nodes in the networks are statistically equal for all types of aggregation. For our specific case, as an example, the brain areas located outside the hyper-sphere of radius s = 1.2 for the two types of aggregation are listed in Table 1.

Discussion

The recent prevalence of applications involving multidimensional and multimodal brain data has increased the demand for technical developments in the analysis of such complex data. Indeed, the discrepancy between structural and functional brain connectivity is a current challenge for understanding general brain functioning.
In this paper, we presented a method for characterizing the correspondences between functional and anatomical connectivity. To summarize, the main steps of our method are:

1. Metric network embedding: this procedure embeds a group of connectivity graphs in a common space, allowing straightforward comparisons. In contrast with simple averaging of connectivity matrices, the topologically algebraic aggregation can preserve components that are common across different subjects or different neuroimaging modalities. This tensor-based aggregation enhances the common underlying structures, providing a natural mathematical framework for studying connectivity data with multidimensional structure.

2. Multimodal networks comparison: the differences between the embedded networks are calculated and represented in a low-dimensional space. Multi-Dimensional Scaling simply displays the information contained in the resulting distance matrix, thus allowing an exploratory analysis of the data.

3. Detection of nodes (ROIs) with different connectivities: from points widely distributed in the low-dimensional space one can detect brain nodes that share a similar topological structure as those points located close to the origin. One can identify brain areas with the largest difference between anatomical and functional connectivity as those points located outside an imaginary hyper-sphere of a radius given by a threshold (Table 1).

Our findings suggest that embedding a brain network in a metric space may reveal regions that are members of large areas or subsystems rather than regions with a specific role in information processing. This is clearly illustrated for the anatomical network in Figure 5D, where frontal and occipital brain areas of both hemispheres are situated at distant locations in the space. Contrary to a classical averaging of connectivity matrices, the embedding of the multiplex functional network reveals brain areas that play a role in large brain systems, such as the occipital regions, known to be active when the subject is at wakeful rest. Although experimental evidence suggests that functionally linked brain regions have an underlying structural core, this relationship does not exhibit a simple one-to-one mapping (Wang et al., 2014). These correspondences have also been investigated in specific subsystems, most of them focused on the default mode network (DMN), which is a group of brain regions that preferentially activate when individuals engage in internal tasks, i.e., when the subject is not focused on the outside world but the brain is at wakeful rest. Several studies report that the DMN exhibits a high overlap in its structural and functional connectivity (Honey et al., 2009; Wang et al., 2014). Nevertheless, strong discrepancies have been reported, and strong functional links can be found between regions without direct structural linkages (Honey et al., 2009). At a group level, one of the reasons suggested for this discrepancy between structural and functional connectivity is the functional variability across subjects (Skudlarski et al., 2008; Honey et al., 2009; Wang et al., 2014). Indeed, clinical studies have provided evidence for a large heterogeneity of the functional connectivity, particularly in groups of patients with brain disorders such as neuropsychiatric disorders, which strongly alters the structural-functional relationships (Wang et al., 2014).
Analytical tools are therefore required to account for this variability in order to enhance the common underlying network structure. Results suggest that averaged aggregation captures the general differences in regions that play a role in visual, auditory and body self-awareness processes, but fails to identify in detail other specific areas across the subjects/groups. In Table 1 we observed that the average aggregation essentially captures part of the visual (calcarine, cuneus, lingual, occipital), auditory (superior temporal gyrus), and insula regions that are associated with visual processing and body self-awareness. Detection of visual and auditory regions suggests that the averaged aggregation mainly captures regions activated by the resting-state condition of the recordings. From the multiplex aggregation (or algebraic aggregation) shown in Table 1, we observed that besides capturing the well-known visual (occipital areas), primary sensory cortex (postcentral), and auditory regions (Heschl gyrus, superior temporal, thalamus), this approach also captures some other network sub-structures involved in touch activation (postcentral gyrus, thalamus) and emotional state activations (amygdala, thalamus, posterior cingulate). This aligns with our claim that algebraic aggregation better preserves the multilayer substructures across a group of subjects (layers), accounting for as much of the variability in the data as possible. Although we cannot definitively provide a one-to-one mapping of the structural and functional connectivity, we think that our method could provide new insights into the organization of brain networks during diverse cognitive or pathological states. We therefore hope that our approach will foster more principled and successful analyses of multimodal brain connectivity datasets. For all the methods described in this article we provide the corresponding MATLAB software code. Data and code are freely available at https://sites.google.com/site/fr2eborn/download.
6,297.2
2015-01-01T00:00:00.000
[ "Mathematics", "Computer Science" ]
Landslide susceptibility mapping of Penang Island, Malaysia, using remote sensing and multi-geophysical methods

ABSTRACT Malaysia is one of the countries in the world that experiences landslides yearly due to natural events and human activities. Penang Island is Malaysia's second most developed state and the largest by population. It is prone to landslides with devastating environmental impacts; hence the need to characterize its near-surface soil-rock conditions. This study uses remotely sensed data, with frequency ratio (FR) analysis, to identify landslide-prone areas based on different categories of landslide causative factors. To further understand the conditions and hydrodynamics of the soil-rock profiles causing landslides, electrical resistivity tomography and seismic refraction tomography were carried out at a landslide-suspected section in the study area. Also, satellite-derived Bouguer gravity anomaly modeling was performed to map the varied gravity anomalies associated with landslide-triggering factors in lithologic units. The multi-geophysical models offer results strongly correlated with the remote sensing causative factor maps and the landslide susceptibility index (LSI) map. The likelihood of landslides occurring in the area, as suggested by area-under-curve modeling of the LSI data, yielded a high predicted success rate of 83.47%. Hence, prospective landslides were identified in the hilly and elevated sections, while the less susceptible sections were identified on flat reliefs. Landslides may also be triggered, for instance, at steep sections with varied contractive soil bodies and shallow structures. Most importantly, leveraging the LSI map would help the relevant agencies forestall and mitigate future landslide occurrences in the area.

Introduction

A landslide is a sliding movement of earth mass or rocks down a slope along a plane that can be seen and recorded (Habibu & Lim, 2016; Zakaria et al., 2022). Like some other Malaysian states, Penang Island, Malaysia's second most developed state and the largest by population, is prone to landslide occurrences with devastating environmental impacts (Fig. 1). These effects range from the collapse of infrastructure to the loss of arable land and, sometimes, deaths. Penang has a 90.8% urbanization rate, with a population that increased from 722,384 (2,465 per sq. km) in 2010 to 808,900 (2,760 per sq. km) in 2018 (Gao et al., 2021). The landslide problem in this area and its environs is associated with excessive rainfall, particularly during the southwest and northeast monsoon seasons (Tay et al., 2015). In addition, uncontrolled urbanization and deforestation are two causes of multi-slope instabilities (Abd Majid & Rainis, 2019). Moreover, disregarding geological, topographical, and human causes in the pursuit of development also leads to landslides (Lee & Pradhan, 2006; Pradhan & Lee, 2010; Tay et al., 2015; Achour et al., 2017; Bounemeur et al., 2022).
Consequently, the topographic relief and the nature of the soil-rock conditions of Penang Island, which is underlain by granitic rocks, with extensive mountainous hills in the central part are issues of concern (Ong, 1993;Lee & Pradhan, 2006).In addition, the area's yearly high rainfall and hot weather aid the progressive weathering of the soil-rock profiles (Nordiana et al., 2013;Bery, 2016;Akingboye & Bery, 2022).Hence, landslides could be triggered due to oversaturation under heavy downpours, particularly in places close to the hills without proper land mitigations (Ab Talib, 2003;Alkhasawneh et al., 2013).As urbanization and land use practices are a progressive phenomenon in the study area and its environs, adequate information on surficial and subsurface landslide-prone zones would prevent landslides catastrophic events.This is because studies have proven that many landslides are triggered by climate change and human activities due to changes in land use and landforms (Alkhasawneh et al., 2013). Over the last few decades, many studies on landslides using Geographic Information Systems (GIS), remote sensing, and multi-geophysical methods have been conducted in Penang Island, Malaysia.For instance, Ab Talib (2003), Lee &Pradhan (2006), andTay et al. (2015), among others, have applied probabilistic and statistical methods, such as the analytical hierarchy process (AHP), ordinary least square (OLS), neuro-fuzzy and more.Remote sensing data could be used to detect geological structures associated with landslides (Lee & Pradhan, 2006;Pradhan & Lee, 2010).Application of optical remote sensing data for structural geology mapping has been much more limited in tropical environments, due to the persistent cloud coverage, scarcity of bedrock exposures, and vegetative ground cover (Pour & Hashim, 2015).Most of the geological structures and those associated with suture zones, even under dense vegetation cover for large inaccessible regions, could be detected via remote sensing data.This is because the advances in remote sensing technology have made it possible to obtain more useful geological structural information from the ground in dense tropical rainforests due to its penetration capability (Shimada & Isoguchi, 2002;Faruwa et al., 2021). Moreover, some studies on landslides in the area used different analytical techniques, such as statistical index (SI) and landslide nominal susceptibility factor (LNSF), e.g., (Ab Talib, 2003;Lateh et al., 2011;Alkhamaiseh et al., 2018;Gao et al., 2021).However, Tay et al. (2015) proved that the frequency ratio (FR) model has a high accuracy compared to those methods.Additionally, in understanding the spatial variability of surficial and subsurface soil-rock conditions of Penang Island, many geophysical methods such as electrical and seismic refraction methods, with soil and rock mass test analyses, have also been conducted, e.g., (Pradhan & Lee, 2010;Bery, 2016;Abdullah et al., 2020;Akingboye & Bery, 2022, 2023).These studies indicate that soil behavior is a complex phenomenon related to various causative geologic hazards.Consequently, a single technique would not be sufficient to provide the required diagnostic subsurface information on soil behaviors in such landslide-prone areas.An integrated data-and image-driven approach is expected to provide detailed information on the landslide-triggering factors as related to soilrock conditions, especially in wet tropical terrains with evidence of repeated landslides. 
Given the above, remote sensing data on landslides in Penang Island from 2001 to June 2021 were acquired and used in this study to update the landslide susceptibility map of the area. Moreover, geophysical methods involving electrical resistivity tomography (ERT) and seismic refraction tomography (SRT), including time-lapse ERT (TLERT), were carried out at a suspected landslide-susceptible area of Penang Island. In addition to these methods, satellite-derived gravity data were also used. This integrated approach has not previously been adopted for landslide studies in this environment. It is expected that these methods will provide detailed insights into the near-surface soil-rock profile, structures, and geohydrodynamic systems that could induce landslides in the study area. The study will also serve as a crucial scientific report to prevent, mitigate, and curtail the occurrence of landslides in the area and in other tropical regions with comparable topographic and geologic features.

Study area

Penang is one of the 13 states of Malaysia. Penang Island (Fig. 2a), also known as Pulau Pinang, and Seberang Perai (formerly the Province of Wellesley) are the two main subdivisions of Penang. Penang Island, the study area, is connected to Peninsular Malaysia by a 13.5 km bridge. It falls within latitudes 5°15′-5°30′ N and longitudes 100°10′-100°21′ E (Fig. 2a-b), covering an approximate area of 293 km². It has an elevation of 0 to 820 m above mean sea level, as shown by the Digital Elevation Model (DEM) in Fig. 2b, with a gradient from 0° to 67°. Based on data records gathered from 2001 to 2021, the average temperature of Penang ranges from 25 °C to 32 °C (Fig. 2c), and monthly precipitation ranges from 54.7 mm to 240 mm, e.g., Fig. 2d (World Weather & Climate Information, 2022).

Geological settings

Peninsular Malaysia comprises two conjugated Late Triassic tectonic terranes, viz., the Western Sibumasu Terrane and the Eastern Sukhothai Arc (Metcalfe, 2013a, 2013b; Pour & Hashim, 2015). These tectonic terranes are located at the southeast margin of the Eurasian continent, and they were formed during the Permian-Triassic period by the merging of the Sibumasu and Indochina Blocks along the Bentong-Raub Suture Zone (Metcalfe, 2000, 2001, 2013b). The suture zone and Lebir faults divide Peninsular Malaysia into the Western Belt, Central Belt, and Eastern Belt (Metcalfe, 2001). The Central Belt consists of limestone and shale with subordinate sandstone, siltstone, and conglomerate, of Late Carboniferous-Early Permian age (Pour & Hashim, 2015). Granitoids, typically granite, make up about 90% of the plutonic rocks in Peninsular Malaysia (Ng et al., 2015).

Penang is a mountainous granitic island oriented centrally in a north-south direction, with low-lying plains situated on both the eastern and western coasts of the island. The granitic outcrops are exposed due to tectonic uplift during the Oligocene-Miocene period (Ong, 1993). The author reported a major fault system striking roughly N-S that is evident in the island's two granitic plutons, namely the North Penang Pluton (NPP) and the South Penang Pluton (SPP), Figure 2a. The classifications were based on differences in the proportion of alkali feldspar to total feldspar (Ong, 1993).

Penang Island is characterized by the Gula, Beruas, and Simpang Formations (Hassan, 1990), which are derivatives of the underlying granitic rocks (Ong, 1993), Fig.
2b.The NPP rocks are subdivided into different groups, namely the Tanjung Bungah, Paya Terubong, and Batu Ferringhi groups, as well as the Muka Head micro granite (Ong, 1993).The medium to coarsegrained biotite granites of Tanjung Bungah was formed in the Early Jurassic. The medium to coarse-grained biotite granite of the Paya Terubong group was formed in the Early Permian to Late Carboniferous.The Early Jurassic Batu Ferringhi group comprises medium to coarse-grained biotite granites.The SPP is the Batu Maung granite, with microcline feldspar, and the Sungai Ara granite. Methods This research aims to produce, validate, and improve the landslide susceptibility map of Penang Island, using different landslide inventories and itemized causative factors from remote sensing and multi-geophysical datasets.The methodological approaches are geared toward mapping the landslide susceptibility indices via 2-D spatial remote sensing mapping.The remotely sensed causative factors datasets were analyzed using the frequency ratio (FR) approach to develop a landslide susceptibility map, and the results were cointerpreted with the multi-geophysical models.The employed multi-geophysical methods include ERT and SRT, with the TLERT, and satellite-derived Bouguer gravity modeling.The results derived from the use of remotely sensed causative factors are supported by strong surficial-to-subsurface soil-rock conditions and their integrity as well as water-rock interaction and soil-water/moisture monitoring from geophysical modeling.This combined help to delineate subtle near-surface and deep crustal features to target landslide-induced mechanisms. Remote sensing data acquisition procedures, correction, and processing Landslide susceptibility evaluation is very crucial for probabilistic analysis of landslide indices and their occurrence.To this end, remote sensing is one of the approaches used for mapping landslides.According to Gomes et al. (2013) and Tien Bui et al. (2018), datasets such as lithological (including soil) maps, rainfall data, DEM, and satellite imageries are essential in preparing landslide inventories, selecting landslide causative factors, and constructing the spatial database for landslide predictions.Hence, the lists of datasets acquired over two decades (2001 to 2021), as presented in Table 1, including aerial photographs and satellite images, were used and analyzed to map landslide indices at Penang Island (the study area).The datasets used comprise 100 satellite imageries retrieved from Landsat 7 ETM+ and Landsat 8 Operation Land Imager (OLI) satellite imagery websites, including Google Earth (Table 1).Landsat datasets can provide high-resolution spectral bands for monitoring land use and capturing land change due to climate change, urbanization, drought, wildfire, biomass changes, and different natural and human-induced changes at drastically reduced costs. 
To effectively map landslides in the study area, landslide causative factors, namely geology (lithologies and structures), topography (slope steepness, slope angle, curvature, and orientation), vegetation cover (NDVI), and land use datasets (Table 2), were leveraged.This is because the causative factors, particularly topography, geology, and land use, including weather conditions and anthropogenic factors, control the degree and distribution of landslides (Khan et al., 2019).After acquiring the remote sensing datasets, they were grouped as thematic maps and geological hazard maps (Table 2).Some of these datasets are on the scale of 1:10,000 to 1:150,000, while some are in spatial resolution dimensions of 30 m x 30 m based on the numbers of grids, polygons, lines, and points. The modeling of the study area's topographic features was based on slope, surface curvature, stream/river density, and distance to lineament data, including DEM, acquired from the Japan Aerospace Exploration Agency (JAXA).On the other hand, the geological map that depicts lithologies and faults/fractures in the study area was acquired from the Mineral and Geoscience Department of Malaysia (JMG).Upon acquisition, the maps were correlated with existing geological maps of the study area, e.g., (Ong, 1993).It is important to note that DEM serves as a foundation upon which many landslide causative factors could be determined (Komac, 2006). Afterward, Landsat data corrections and reductions were performed using the FLAASH Atmospheric Correction Module in ENVI software.FLAASH is an acronym for Fast Line-of-sight Atmospheric Analysis of Hypercubes.It is a first-principles atmospheric correction tool that corrects wavelengths in the visible through near-infrared and shortwave infrared regions, up to 3 μm (Yuan & Niu, 2008).It is compatible with most hyperspectral and multispectral data.The purpose of atmospheric correction is to eliminate the influence of the atmosphere, return the reflectance value of the actual object on the earth's surface, and rectify images taken in vertical (nadir) or slant-viewing geometries (Ilori et al., 2019).In addition, it removes the scattering and absorption effects from the atmosphere to obtain surface reflectance properties.The gapfill interface in the software was also used to fill the gaps (wedge-shaped gaps) in the Landsat images.A band combination of 6,5,2 for Landsat datasets was used because it discriminates between vegetation and bare land.This band combination uses a shortwave-infrared band [SWIR-1] (6), near-infrared (5), and blue (2). 
Modeling landslide causative factors and landslide susceptibility

The spatial database for causative factors was developed using spatial analytical tools in both ArcGIS (v10.8) and ENVI software. The treated landslide causative factors include lithology, slope gradient, slope aspect, surface curvature, stream/river density, distance to lineament, normalized difference vegetation index (NDVI), and land use/cover. After careful evaluation of the Landsat datasets acquired from 2001 to 2021, the most recent Landsat 8 OLI data (2021) were used to evaluate the landslide causative factors. A 5-m resolution Advanced Land Observing Satellite (ALOS) DEM from JAXA was used to model the landslide causative factors: lithology, slope gradient, slope aspect, surface curvature, stream/river density, and distance to lineament. As indicated in Table 2, the analyzed datasets were transformed into raster maps with the same coordinate system and pixel size. This way, detailed insights were gained into how each of these parameters contributes to landslide occurrence, including a proper understanding of the area's landslide mechanisms.

The slope aspect, or the orientation of a slope, expressed in degrees or compass directions, controls the slope's exposure to climatic conditions such as sunlight, wind direction, rainfall, etc. (Komac, 2006). Slope steepness is a risk factor associated with most soil profiles influencing slope stability. This is because the steeper the slope, the greater the probability of landslide occurrence due to the influence of gravity (Kornejady et al., 2018). The slope aspect also has a significant impact on vegetation cover, moisture retention, and soil strength. Similarly, the slope gradient provides significant information on the rate of surficial soil erodibility and sliding potential (Ab Talib, 2003). Also, the degrees of surface curvature, distance to lineament, and stream density are significant factors that must be ascertained in a landslide study, as they mitigate or adversely induce landslide occurrence. For instance, the force of gravity increases on a steeper slope as the shear stresses in loose soils or unconsolidated materials increase (Bery, 2016; Khan et al., 2019). Most landslides are associated with areas of high river/stream density and those close to lineaments.

Furthermore, the NDVI of the study area was produced after FLAASH atmospheric corrections via ENVI, based on Eq. 1, and the resulting map was generated to depict the area's vegetation distribution and its contribution to landslide occurrence:

NDVI = (NIR − Red) / (NIR + Red)    (1)

where Red and NIR denote the spectral reflectance measurements acquired in the red (visible) and near-infrared regions, respectively. Generally, if there is much more reflected radiation in near-infrared wavelengths than in visible wavelengths, the vegetation in that pixel is likely to be dense and may contain some type of forest. NDVI is directly related to the photosynthetic capacity, and hence the energy absorption, of plant canopies (Myneni et al., 1995). Low NDVI values indicate moisture-stressed and sparse vegetation, while higher values indicate a higher density of green, healthy, and dense vegetation (Gessesse & Melesse, 2019). This parameter can be negative; even in densely populated urban areas the NDVI is usually of a (small) positive value, and negative values are more likely to be observed in the atmosphere and over some specific materials (Myneni et al., 1995).
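As a small illustration of Equation (1), the sketch below computes NDVI from co-registered red and near-infrared reflectance arrays. The array names and the masking of zero-denominator pixels are assumptions for illustration; the original workflow used ENVI rather than custom code.

```python
# Minimal NDVI sketch (Eq. 1), assuming atmospherically corrected reflectance
# arrays `red` and `nir` of identical shape.
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red), with divide-by-zero pixels set to NaN."""
    red = red.astype(float)
    nir = nir.astype(float)
    denom = nir + red
    return np.where(denom == 0, np.nan, (nir - red) / denom)
    # values near 1: dense, healthy vegetation; low or negative: sparse cover, water, clouds
```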
To adequately analyze the land use factor, seven land uses, namely forest, waterbody, urban area, cultivated land, dry paddy, wet paddy, and cleared land, were classified in deriving the land use map, based on supervised Maximum Likelihood classification of Landsat 8 OLI multispectral imagery (Abdelouhed et al., 2021). The classification provided significant insights into land use and, in turn, serves as a clue to understanding the landslide-triggering mechanism in the study area. It is worth noting that slope instability can result from a change in land use due to human activities such as deforestation, overgrazing, excessive farming, slope cutting, construction, etc.

The propensity of landslides to occur is a critical evaluation, especially in the study area, with its extensive hilly interior surrounded by low valleys and coastal plains. Hence, the frequency ratio (FR) was used in the quantitative modeling of landslide occurrence and its damaging potential. FR is one of the quantitative techniques used for calculating the ratio of the probabilities of landslide occurrence to non-occurrence for given causative factors using the GIS approach. To obtain the FR for each causative factor, the weight or FR of each element is obtained by dividing the proportion of the landslide area within each causative-factor class by the proportion of the total area covered by that class, based on Equation 2 (Lee & Pradhan, 2006; Bounemeur et al., 2022). In addition, to develop the area's landslide susceptibility index (LSI), the correlation rating of each landslide causative factor (LCFR) was estimated from the relationship between a landslide and each causative factor. The LCFR values estimated for each element were summed based on Equation 3, e.g., (Ab Talib, 2003; Khan et al., 2020). In general, landslide mapping and observations in the results and maps derived from the landslide causative factors were based on the interpretation of breaks in the forest canopy, bare soil, and other geomorphic characteristics, including landslide head and side scars, flow tracks, and debris deposits below the scars.

Following the derivation of the maps for the landslide causative factors, all the maps were integrated to map landslide-prone sections in the study area. In total, 340 landslides within 293 km² were mapped and observed in aerial photographs based on the analysis and interpretation of the aforementioned geomorphic characteristics. All landslides were verified using Google Earth, ensuring that they were not confused with agricultural areas such as terraced slope agriculture or dry paddy land. To accurately map the landslide susceptibility indices in the study area, predictive modeling was utilized. According to Kornejady et al. (2018), landslides are of different types, including falls, rotational and translational slides, earth flows, complex slides, and creep, due to various predisposing factors. It is, therefore, recommended to treat landslide types separately. This, however, is impossible here because different landslide kinds share the same factor importance ranking, particularly for top-ranked factors. As a result, the total landslides mapped were divided into a training set (80%) and a testing set (20%). The training data were used for constructing the predictive models, while the testing data were used to validate the models and ascertain the accuracy of the mapped landslide indices.
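The FR and LSI calculations referred to above (Equations 2 and 3) can be sketched as follows, assuming each causative factor has been rasterized to class codes co-registered with a binary landslide inventory. The array layout and function names are hypothetical, and the actual study relied on ArcGIS/ENVI tooling rather than custom code.

```python
# Sketch of the frequency-ratio (Eq. 2) and LSI (Eq. 3) steps, not the authors'
# GIS workflow: `factor` rasters hold integer class codes, `landslide` is a
# 0/1 inventory raster of the same shape.
import numpy as np

def frequency_ratio(factor, landslide):
    """FR per class = (landslide share of the class) / (area share of the class)."""
    fr = {}
    total_pix = factor.size
    total_slides = landslide.sum()
    for cls in np.unique(factor):
        in_cls = factor == cls
        slide_ratio = landslide[in_cls].sum() / total_slides
        area_ratio = in_cls.sum() / total_pix
        fr[cls] = slide_ratio / area_ratio if area_ratio > 0 else 0.0
    return fr

def lsi(factors, landslide):
    """LSI = sum over all causative-factor rasters of their per-pixel FR ratings (Eq. 3)."""
    out = np.zeros(landslide.shape, dtype=float)
    for factor in factors:
        fr = frequency_ratio(factor, landslide)
        out += np.vectorize(fr.get)(factor)
    return out
```

An FR above 1 for a class then marks an above-average association with mapped landslides, which matches how the FR values in Table 3 are interpreted in the text.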
The accuracy of the landslide susceptibility index (LSI) map was assessed using the area under the curve (AUC) method proposed by Lee & Pradhan (2006). The rate curve was created to explain how well the technique predicted the intended landslides. The AUC graph represents the quality and accuracy of the model used to classify landslide occurrence. To obtain a success rate, the index values of all the landslide pixels from the LSI map were calculated and reclassified into 100 classes. The outcome data were then plotted as a cumulative curve, with the true positive rate accumulated at 1% intervals on the Y-axis against the false positive rate on the X-axis, revealing the relationship between the two rates (Biggerstaff, 2000).

Geophysical field data measurements and processing

A combination of two ground-based geophysical methods (ERT and SRT) was carried out at a suspected landslide site on the main campus of Universiti Sains Malaysia (USM), Fig. 3a, chosen for its landscape gradient. The methods investigate the heterogeneous subsurface soil profiles and geologic structures that could trigger landslides. The in-field ERT and SRT surveys lasted three months, from January to March, a period notable for both appreciable sunshine and heavy downpours. Generally, resistivity and seismic refraction techniques are the geophysical methods most used to investigate subsurface soil-rock conditions. For the resistivity method, the result mainly depends on variations in resistivity factors, in which soil water content plays a significant role (Kherrouba et al., 2019, 2022; Zakaria et al., 2022). SRT, on the other hand, is influenced by subsurface seismic P-wave velocity (Vp) variations and can be interpreted based on the measured range of velocities (Dahlin & Wisén, 2018).

A single geophysical survey line in an approximately NE-SW direction for both ERT and SRT measurements was established at a landslide-suspected section within USM, as a monitoring site (Fig. 3a). The profile for the ERT and TLERT was 22 m long, while that of the SRT was 23 m long. The ABEM SAS Terrameter 4000 was used for the ERT survey (Fig. 3b), employing the Wenner-Schlumberger array (Fig. 3c) with a 0.5 m station spacing. The Wenner-Schlumberger array configuration is more sensitive to horizontal and vertical structures than the Wenner and dipole-dipole arrays. Besides, it achieves deeper depth penetration, although with a narrower model (Loke, 2002; Akingboye & Bery, 2022). The resulting field-measured resistivity datasets were processed and modeled using RES2D software (Loke, 2002), leveraging standard least-squares data inversion constraints based on the four-node finite-element method with an L2-norm to minimize the difference between the measured and calculated resistivities. Also, a damping factor of 0.05, with a minimum value of 0.01, was used to improve the accuracy and resolution of the inversion model. The TLERT models depicting the percentage changes in resistivity for the investigated months were also derived from the same software.

Moreover, the SRT data acquisition was conducted using the ABEM Terraloc Mk 8, employing a channel of 24 geophones at 12 Hz each, a 12-pound hammer, and a striker metal plate (Fig.
The measured Vp datasets were processed by first enhancing the weak seismic traces with the auto-gain control technique in the SeisOpt@2D software, to amplify the first-arrival traveltimes for easy identification and picking. The first-arrival traveltimes were then picked in the same software from the enhanced Vp datasets. The simulated annealing optimization method available in the SeisOpt software was used to invert the first-arrival seismic traces and generate the resulting Vp models (Zakaria et al., 2022). To ensure smooth interpretation and detailed landslide susceptibility characterization, the satellite-derived Bouguer gravity anomaly datasets of Penang Island, obtained from the Bureau Gravimétrique International (BGI), were modeled using the Oasis Montaj software. The gravity method was adopted because of its extensive area coverage, its ability to differentiate crustal densities, and its deeper depth of penetration. The results were co-interpreted with the generated ERT and SRT models for the landslide monitoring site.

Landslide susceptibility assessment using frequency ratio (FR) model
The derived causative factors for characterizing landslide occurrences in Penang Island, Malaysia, are shown in Table 3, and their 2-D maps were also generated. The relationship between slope gradient and landslide occurrence is one of the most widely used landslide-triggering indicators because the two are directly proportional. The digitized lithology map, based on the locations of the rock units in the study area (Fig. 4a), was analyzed alongside all the landslide causative factor models. It is important to note that the steeper the slope, the greater the probability of landslides, owing to the influence of gravity (Bery, 2016; Khan et al., 2019). The force of gravity increases on a steeper slope as the shear stresses in loose soils or unconsolidated materials increase, and surface runoff on a steeper slope is higher and faster than on a gentle slope. Hence, the landslide occurrence probability is very low below 5°, with an FR of 0.04. When the FR is greater than 1, notably above a slope of 15°, it indicates a very high landslide occurrence probability (Table 3 and Fig. 4b). These zones are peculiar to the mountainous sections, especially in the central part. The residential areas are characterized by 20° slope angles, with low to moderate landslide occurrence probability.

Slope aspect and surface curvature are also essential factors in the occurrence of landslides. According to Carrara et al. (1991) and Wan Mohd Muhiyuddin (2005), a sloping surface facing sunlight dries out compared with one that is shaded; the latter, remaining wet or moist, is responsible for most landslides. In this study, the highest FR values (Table 3) dominate areas in the north, followed by the southeast and northeast (Fig. 4c).
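For readers reproducing the slope gradient and slope aspect layers discussed above, one standard derivation from a digital elevation model is sketched below. The DEM source, cell size, and the exact aspect convention used in the study are not restated in this extract, so they are assumptions here; GIS packages also differ in how aspect is referenced to north.

```python
# Hedged sketch: slope gradient (degrees) and aspect from a DEM grid. The 30 m cell size is an
# assumed placeholder, and the aspect returned here is the direction of steepest ascent in grid
# coordinates; mapping it to a compass bearing depends on the raster's orientation.
import numpy as np

def slope_aspect(dem, cell_size=30.0):
    dz_dy, dz_dx = np.gradient(dem.astype(float), cell_size)   # rows ~ y, columns ~ x
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect_deg = np.degrees(np.arctan2(dz_dy, dz_dx)) % 360.0
    return slope_deg, aspect_deg
```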
Meanwhile, the curvature of the slope, representing the surface morphology, is usually parallel to the hill and hence affects the acceleration or deceleration of water flow. A positive value indicates a sloping concave surface with accelerated flow, a negative value represents a sloping convex surface with decelerating water flow, while a neutral value (i.e., zero) indicates a flat surface. As shown in Table 3 and Figure 4d, the FR value of 0.86 indicates a convex surface with the greatest possibility of landslide occurrence in the event of a heavy downpour, owing to repeated dilation and contraction. As a result, loose debris on an inclined surface is prone to induced soil creep or mudslides, which could also contribute to landslide reoccurrence at existing locations, whereas a concave slope, with an FR of 1.28, will retain water for a longer period (Alkhasawneh et al., 2013).

Stream densities and distance to lineaments also play a part in the occurrence of landslides. The closer a region is to streams and lineaments, the greater the likelihood of landslide occurrence, because streams erode the slope base and saturate the submerged part, resulting in gully erosion and terrain alteration. Table 3 reveals that most landslides occur at a stream density of 0-0.002 m/m², corresponding to distances of 0-125 m from lineaments (Fig. 5a-b). The comparison of Table 2 and Fig. 4a indicates that the highest landslide FR is associated with the rocky igneous area, with Tanjung Bunga Granite at an FR of 1.97, followed by Muka Head Granite, Sungai Ara Granite, and Batu Maung Granite with FR values of 1.47, 1.42, and 1.27, respectively.

In the case of the vegetation index (NDVI), as shown in Table 2 and Figure 5c, NDVI values ranging from 0.6 to 0.8, with the highest FR of 2.68, indicate the possibility of moderately high to high landslide occurrence, while NDVI values of 0.4 and below are indications of low landslide occurrence. The results suggest that a low NDVI increases the likelihood of landslides. Most of Penang Island's areas with thick forests are found on steep sections (Fig. 5c). As a result, the slope instabilities could be triggered by human activities such as deforestation, overgrazing, intensive farming, slope cutting, slope agriculture, construction, and more. Most landslides will occur in cleared and cultivated areas (Bukit Bendera and the western section), with the highest FR of 2.68 (Table 3 and Fig. 5d).

Interpretation of geophysical methods at the landslide-suspected area
Based on the ERT results derived from the test site from January to March during the northeast monsoon season, the subsurface lithological conditions are characterized by saturated and unsaturated residual soil profiles (rich in clayey sand) with resistivity values of 5-300 Ωm and >300 Ωm, respectively (Fig. 6a-c). The saturated bodies are extensive, occurring within the motley topsoil with interspersed stiff-to-hard soils (highly varied in resistivity) and granitic boulders (>1000 Ωm), Fig. 6a. In February (Fig. 6b), the water-retention level in some subsurface constituents tends to be lower than in the other two months.
On the other hand, the SRT models (Fig. 7a-c) also depict corresponding responses, whereby the residual soil profile across the investigated site varies from 50-800 m/s, the weathered granitic layer has Vp values of 800-2000 m/s, and the intact/fresh granitic bedrock is characterized by Vp values >2000 m/s. From January to March, significant changes in the Vp values of the residual soil and weathered layers were observed (Fig. 7a-c), especially between station distances of 4 m and 18 m in Fig. 7c, where an extensive deep-weathered trough is identifiable. The observed variations are attributed to varying soil composition, rock structures, and the degree of weathering and water affinity.

The observed variations in geophysical signatures were also clearly interpreted from the percentage changes in measured resistivities between January and February (Fig. 8a) and between January and March (Fig. 8b). These models show the changes in the water saturation level beneath the investigated area. The residual soil and the weathered sections (between stations 10 m and 12 m) are greatly impacted by saturating water, stretching beyond the 6 m station position (Fig. 8a-b). Based on Figures 6-8, the principal landslide-triggering mechanism is, for instance, the extensive saturated residual and weathered soil profiles with >40% resistivity changes (extending to depths >15 m). As a result, the identified section and other steep contractive soil (silt/clay) sections may be subjected to intense soil creep and sliding, because soil creep and dissolution are enhanced by the soil-rock configuration and water affinities (Bery, 2016; Zakaria et al., 2022).
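For reference, the time-lapse percentage resistivity changes shown in the TLERT models (Fig. 8) are usually computed relative to a baseline survey. The sketch below assumes the common definition, 100 × (ρ_t − ρ_0)/ρ_0 with January as the baseline; the exact convention applied by the inversion software used in the study may differ.

```python
# Assumed definition of the time-lapse percentage resistivity change relative to a baseline
# model; negative values indicate a resistivity drop (wetter ground), positive values a rise.
import numpy as np

def percent_resistivity_change(rho_baseline, rho_later):
    rho0 = np.asarray(rho_baseline, dtype=float)
    rhot = np.asarray(rho_later, dtype=float)
    return 100.0 * (rhot - rho0) / rho0
```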
Interpretation of satellite-derived gravity model of Penang Island
To further strengthen the interpretations of the study, the satellite-derived Bouguer gravity anomaly map of Penang Island (Fig. 9) was developed to comprehensively depict the lithological conditions and embedded geologic structures that could trigger landslides. The derived Bouguer anomaly map of Penang Island is characterized by negative and positive gravity anomalies ranging from -16 to >30 mGal. The negative Bouguer gravity values gradually attain positive values toward the low-lying areas. High positive Bouguer anomalies ranging from >2 to 30 mGal dominate the largest portion of the study area. The conspicuous central quasi-ring negative Bouguer gravity anomaly, with parts stretching southward, depicts the section dominated by the mountainous granitic rocks, because such features are characterized by a mass that differs from the surrounding lithologies, as well as by effects associated with tectonic uplift, mineral composition, and structural complexity (Balogun, 2019; Okpoli & Akingboye, 2019, 2020). Gravity over the hill mass may also be reduced by isostatic imbalance (Balogun, 2019). The studied site within USM, used for monitoring the nature and degree of the surficial and subsurface soil-rock profiles and the hydrodynamics of a typical steep section, has Bouguer gravity values between 14 and 18 mGal. Potential lineament and/or drainage features were also mapped, as shown by the trending black lines in Figure 9.

The varied lithological conditions of the area observed in the Bouguer gravity model (Fig. 9) also affirm the near-surface features identified in the ERT and SRT models for the monitoring site (Figs. 6-8). Comparing the Vp and resistivity results obtained at the suspected landslide site (Figs. 6-8) and those of Figures 4-5 with Figure 9, the sections with negative Bouguer gravity anomalies (-16 to 2 mGal) may be identified as landslide-susceptible zones owing to their topographic relief (Fig. 2b) as well as their soil profiles and shallow lineaments. Consequently, slope failure may be imminent at landslide-suspected sections due to intense soil creep, especially on steep slopes with contractive soils (silt/clay) and water-escape structures (Zakaria et al., 2022).

Landslide susceptibility index and modeling
To obtain a detailed update on and understanding of the landslide susceptibility of the study area, a landslide susceptibility index (LSI) map is needed. This was achieved by summing all the causative factors, and the resulting LSI map of Penang Island is shown in Figure 10. Based on the LSI estimates from Equation 3, the region is characterized by values ranging from 2.46 to 9.23, with a standard deviation of 2.37. The range of values depicts the susceptibility indices for the area. Hence, the observed and predicted landslide-susceptible areas (with moderate to very high LSI) fall on the hilly and elevated sections, especially sections with contractive silty/clayey soils and poor ground mitigation, whereas the less susceptible sections are places with flat topographic relief. These results also conform with the interpreted results in Figs. 4-9, hence validating the accuracy of the methods employed and of their generated results in predicting future landslides in the study area.

The LSI result was also verified with the AUC model (Fig. 11) using pixels from the LSI map. This approach evaluates how successfully the LSI map matched the occurrence of landslides. The success rate curve was derived using the testing landslide point datasets, whereas the predictive rate curve was derived from the training landslide point datasets. The AUC is a graph that displays varying index values between a maximum of 1, equal to 100%, and a value of 0.5, which equals 50%. AUC is an accuracy metric used to predict, evaluate, or interpret natural disaster models (probabilities). A score between 0.8 and 0.9 is considered a good to excellent classification, a score of 0.7-0.8 is a moderate or decent classification, and a score of less than 0.6 is an inadequate classification. As observed in the AUC model, the success rate shows the model's ability to consistently categorize the occurrence of existing landslides, whereas the AUC of the predictive rate illustrates the proposed landslide model's ability to anticipate landslide susceptibility (Pamela et al., 2018). Based on Fig. 11, the AUC prediction accuracy is 0.8347, translating to a prediction accuracy of 83.47%, while the success rate is 0.7968, indicating an accuracy of 79.68%. This accuracy assessment is slightly higher than that of Khan et al. (2019). As a result, there is a high chance of landslide occurrence in the future at the identified sections and even at previously reported sections with no ground mitigation.

Figure 10. Landslide susceptibility map of Penang Island, Malaysia.
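The rate-curve and AUC validation described earlier (reclassifying the LSI map into 100 classes and accumulating landslide pixels in cumulative 1% area increments) can be sketched as follows. The variable names, the ranking-by-susceptibility formulation, and the trapezoidal integration are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of the success/prediction rate curve and its AUC. Pixels are ranked from
# the highest to the lowest LSI, and the cumulative fraction of known landslide pixels captured
# within the top k% of the area is accumulated; the AUC is the area under that curve.
import numpy as np

def rate_curve(lsi, landslide_mask, n_classes=100):
    order = np.argsort(lsi.ravel())[::-1]                  # most susceptible pixels first
    slides = landslide_mask.ravel().astype(float)[order]
    bins = np.array_split(slides, n_classes)               # ~1% of the study area per bin
    captured = np.cumsum([b.sum() for b in bins]) / slides.sum()
    area_fraction = np.linspace(1.0 / n_classes, 1.0, n_classes)
    return area_fraction, captured

def auc(area_fraction, captured):
    return np.trapz(captured, area_fraction)               # e.g. ~0.83 reads as ~83% accuracy
```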
Conclusions
This study has demonstrated the efficiency of combining remotely sensed analytical models via the FR technique with the results of multi-geophysical methods (ERT, SRT, and satellite-derived gravity) to generate and update the landslide susceptibility map of Penang Island, Malaysia. Based on the results of the analyzed remotely sensed causative factors for the study area, the slope gradient and cultivated land factors are the most influential landslide causative factors, and most of the landslides are likely to occur in the northern and southern parts. This is also supported by the interpretive assertions made from the ERT, SRT, satellite-derived Bouguer gravity, and LSI models/maps of the study area. Comparatively, based on these analyzed models/maps, prospective landslides were pinned to the hilly and elevated sections, especially sections with varied contractive soils and shallow structures, while the less susceptible sections were identified on flat topographic relief. On that account, the possibility of landslides occurring at the investigated monitoring site and other identified sections is high, as suggested by the predictive AUC modeling of the LSI data. Although this study does not fully leverage some of the enhancement gravity techniques needed to delineate shallow to deep crustal structures attributed to regional crustal deformation, it has provided helpful recommendations for planning future urban development and mitigating the occurrence of future landslides in the study area.

Figure 1. Landslide-prone areas in Malaysia (modified after Habibu & Lim, 2016). The degrees of catastrophes in landslide-prone areas are not drawn to scale.
Note to Equation 2 (frequency ratio of each causative factor): the terms are the number of pixels of the observed landslide area within class a of parameter b, the number of pixels of class a of parameter b, the number of pixels of the total observed landslide area, and the number of pixels of the total map area.
Figure 3. (a) Aerial data acquisition map of the studied site for ERT and SRT data measurements. (b) A typical 2-D layout and setup for a two-cable reel resistivity measuring system. (c) A typical schematic diagram of a 2-D Wenner-Schlumberger ERT acquisition layout with its subsurface sensitivity pattern. (d) A typical schematic diagram of a 2-D SRT acquisition layout.
Figure 5. (a) Stream density map, (b) distance to lineament map, (c) vegetation index (NDVI) map, and (d) land use classification map of Penang Island, Malaysia, as derived from the FR technique.
Figure 6. ERT models for (a) January, (b) February, and (c) March at the landslide monitoring site.
Figure 7. SRT models for (a) January, (b) February, and (c) March at the landslide monitoring site.
Figure 8. TLERT models of the percentage resistivity changes for (a) February and (b) March at the landslide monitoring site.
Table 1. Summary of the datasets used for the landslide susceptibility mapping.
Table 2. Summarized classification of the remote sensing data layers for the study.
Table 3. Causative factors for evaluating landslide occurrence in Penang Island, Malaysia.
8,558.2
2023-08-16T00:00:00.000
[ "Environmental Science", "Geology" ]
A simple alternative in approximation and asymptotic expansion by exponential/trigonometric functions
Function approximation plays a crucial role in applied mathematics and mathematical physics, involving tasks such as interpolation, extrapolation, and studying asymptotic properties. Over the past two centuries, several approximation methods have been developed, but no universal solution has emerged. Each method has its own strengths and weaknesses. The most commonly used approach, rational Padé approximants, has limitations, performing well only for arguments x < 1 and often containing spurious poles. This report introduces a new and straightforward procedure for exponential/trigonometric approximation that addresses these limitations. The method demonstrates accurate fitting capabilities for various functions and solutions of second-order ordinary differential equations, whether linear or nonlinear. Moreover, it surpasses the performance of Padé approximants. Notably, the proposed algorithm is remarkably simple, requiring only four values of the function being approximated. The provided examples showcase the potential of this method to offer a straightforward and reliable approach for a wide range of tasks in applied mathematics and mathematical physics.

Introduction
Approximation of functions is an indispensable tool in applied mathematics and mathematical physics. The problem has been confronted since the early development of calculus. Newton, Euler, Lagrange, Laplace, Gauss and many others designed various approaches for interpolation, integration, extrapolation of functions and finding asymptotics. The most popular current method for approximating functions is the rational Padé approximation [1-3]. The calculation of Padé approximants is straightforward, given the corresponding software. Explicit expressions are mostly cumbersome, even in the second order [2]. Many functions are fitted for small argument values (x < 1) much better by Padé approximants than by the Taylor series. The Padé approximation gets worse for x > 1, which cannot be circumvented by applying higher orders because they often contain spurious poles owing to the rational form of the Padé expression. The problem of approximating specific functions can be closely related to the summation of divergent series. Many methods for accelerating convergence have been developed, as reviewed by Weniger [4]. In particular, he concluded that 'it still has to be discussed how one should actually proceed if the convergence of a slowly convergent sequence or series has to be accelerated or if a divergent series has to be summed. In view of the numerous different types of sequences and series, which can occur in practical problems, and because of the large number of sequence transformations, which are known, the selection of an appropriate sequence transformation is certainly a nontrivial problem'. This citation may manifest a 'status quo' in this field and indicates its openness to innovations. An alternative to Padé approximation for reconstructing functions has been developed in a series of papers by Yukalov, Yukalova and Gluzman [5-10]. The authors introduce self-similar factor approximants, which can fit a broad class of functions. The authors have also introduced multiplicative, additive and nested exponential approximants. Four or more parameters in the approximation are determined from the Taylor expansion.
Selected functions were fitted with an accuracy that exceeds that of Padé approximants. However, a straightforward application sometimes fails and requires adjustment using 'the control parameters'. Within this framework, only steadily increasing or decreasing functions have been approximated. In this report, a novel approximation scheme dubbed exponential/trigonometric fitting is described. The formulation is simple, and the application requires a few unsophisticated calculations. The method delivers accurate approximations and outperforms the Padé and other approaches for a wide variety of functions (elementary, special and those used in celestial and quantum mechanics). It also correctly predicts the asymptotics of some non-linear ordinary differential equations (nODE). The paper is organized as follows: in part A, the procedure for finding the parameters of the exponential/trigonometric approximation is described; in part B and the Supplementary Material, the application to a wide class of functions is presented; in part C, the method of 'chasing' is generalized for solving non-linear ODEs; and in part D, the exponential/trigonometric approximation is applied to predict the asymptotics of nODEs.

Applications
As mentioned in the Introduction, the self-similar factor approximants [5-10] outperform the celebrated Padé approximants for many functions. I use the probe functions to assess the quality of the double-exponential fitting. For example, the first probe function (figure 1(A)), whose accurate evaluation is still a hot topic in applied mathematics [4], is closely reproduced by double exponentials in the interval 0 < x < 4, whereas the Padé approximation works only for x < 1. Another example from the studies mentioned above (figure 1(B)) is also accurately represented by the double-exponential approximation. The Padé [2/2] approximation fails in this case, as it contains a discontinuity (a spurious pole). Further examples are illustrated in figures S1-S4 in the Supplementary Material section. For example, the procedure works well for elementary transcendental functions such as sech = 1/cosh and tanh (figure S2C, D) and inverse rational functions (figure S2E, F). In figure S3, the method is applied to various functions of mathematical physics such as erfc (A), Debye-Hückel (B), and Mittag-Leffler (or Voigt) (C), as well as some other functions (D-F). In all these cases, the double-exponential approximation shows excellent predictive accuracy (figure S3). Figure S4 shows selected results for a simple approximation of the special functions. They are obtained numerically as solutions of the corresponding ODEs using the generalized method of 'chasing' described below. For the Airy function Ai, a single exponential is sufficient (figure S4A) when the internal argument is x^(3/2), as suggested by the asymptotic expansion [11, 12]. Oscillating functions such as the Laguerre and Hermite orthogonal polynomials and the spherical Bessel function are also well reproduced by the exponential/trigonometric approximation (figures S4B, C and D, respectively). Of note, in all the approximations mentioned above, the fitting parameters are obtained from only four values of the function evaluated for x < 1.
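Because the paper's equations (1)-(4) are not reproduced legibly in this extract, the sketch below uses a classical Prony-type construction that solves the same problem: recovering the four parameters of a·e^{px} + b·e^{qx} from four samples of the target function taken at equidistant points with x < 1. The sample positions, step size, and names are illustrative assumptions, not the author's formulas.

```python
# Hedged sketch: determine a*exp(p*x) + b*exp(q*x) from four equidistant samples (Prony-type).
# Complex-conjugate exponents emerge automatically for oscillatory data, giving the
# trigonometric form of the approximation.
import numpy as np

def two_exponential_fit(f, x0=0.1, h=0.2):
    xs = x0 + h * np.arange(4)                       # four sample points, all with x < 1
    s = np.array([f(x) for x in xs], dtype=complex)
    # samples of two exponentials obey the recurrence s[k+2] = c1*s[k+1] + c0*s[k]
    A = np.array([[s[1], s[0]],
                  [s[2], s[1]]])
    c1, c0 = np.linalg.solve(A, s[2:4])
    z = np.roots([1.0, -c1, -c0])                    # z_i = exp(p_i * h)
    p = np.log(z) / h                                # the two exponents
    V = np.exp(np.outer(xs[:2], p))                  # linear system for the two amplitudes
    a = np.linalg.solve(V, s[:2])
    return lambda x: np.real(a[0] * np.exp(p[0] * x) + a[1] * np.exp(p[1] * x))

# Hypothetical check: fit exp(-x) + exp(-3x) from samples in [0.1, 0.7], evaluate far outside.
approx = two_exponential_fit(lambda x: np.exp(-x) + np.exp(-3 * x))
print(approx(5.0), np.exp(-5.0) + np.exp(-15.0))
```

When the recovered exponents form a complex-conjugate pair, the same expression reduces to a damped (or growing) cosine, 2|a| e^{Re(p)x} cos(Im(p)x + arg a), which is how the trigonometric variant of the fit arises for oscillating functions.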
Numerical ODE solution by generalized method of 'chasing'
I next explored the possibility of using the exponential/trigonometric approximation to represent solutions of second-order ODEs. They were obtained numerically using the 'chasing' method. It was designed by Gelfand and Lokutsiyevskii for the canonical inhomogeneous ODE y″ + p(x)y = f(x). The original algorithm is described by Berezin and Zhidkov ([13], pp. 409-412 in the English translation). The procedure is extended below to solve a general second-order ODE. First, the ODE is presented in the discrete form of equation (7). The two moves (forward and backward) correspond to the term 'chasing'. The procedure has been tested for various ODEs and reproduced existing analytical solutions well. The method of chasing is readily extended to finding solutions of non-linear ODEs. In many cases, they may be presented as above, with a non-linear function f(y) on the right-hand side, y″ + q(x)y′ + p(x)y = f(y) (equation (8)). The equation is solved iteratively. Starting from f(y) = 0 and solving the homogeneous ODE, a new f(y) is obtained and plugged into the RHS of (8) in the next step, and so on. From numerical experience with different nODE types, fewer than 10 iterations are needed to obtain an accurate solution. The exponential/trigonometric approximation works well for oscillatory functions. The oscillatory second solution (Bi) of the Airy equation is accurately reproduced, and the relative error of the approximation is small (figure 2(A)). Small deviations from the exact function occur at points where Bi ≈ 0. Note that in this case the internal variable x^(3/2) is used, as suggested by the asymptotic expansion [11, 12]. Another challenging example is the seminal non-linear Thomas-Fermi equation, which describes the screened Coulomb potential of a heavy charged nucleus surrounded by a cloud of electrons. The best fits achieved in previous approximations [2, 8] used more than eight terms in the expansion. The double exponential with only four parameters closely reproduces the slow decay of the solution.

1.4. Approximating nODE asymptotics from series expansion
Finally, I applied the exponential/trigonometric fitting to assess the asymptotics of non-linear ODEs. Such equations are abundant in mathematical physics, and most of them cannot be solved explicitly. The asymptotic dependence is usually tackled using various perturbation approaches [2, 3, 14-16]. As an example, I take the non-linear equation (9), which appears in describing the calcium gradients established around single calcium channels in the cytoplasm [17]. The analytical solution is readily obtained by integrating equation (9) twice. For μ = 1, the first integration gives y′² = y² ∓ (2/3)y³, and the second one delivers the two solutions corresponding to the (+) and (−) signs in equation (9), respectively. The integration constants w are defined by the boundary condition at x = 0, set by the calcium influx into the cytoplasm. To compare with the analytical results, I used the expansion y = y₀ + y₁ + y₂ + y₃. As in the other approaches cited above, the equations at each order are linear inhomogeneous ODEs. The terms y_n were calculated using the generalized method of chasing (part C). The left panels in figure 3 show the partial sums Σ y_n. They differ from the analytical solutions (the graphs in the right parts of figure 3), but attempting to fit them with Padé approximants (blue squares in figure 3) is unsuccessful: the exponential decay is overestimated (figure 3(A)), and for the periodic solution, the oscillations are severely damped (figure 3(B)). The exponential/trigonometric approximations show perfect coincidence with the analytical solutions (the red dots versus the black curves in the right panels of figure 3).
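The discrete equations (7)-(8) are not legible in this extract, so the sketch below only illustrates the generalized 'chasing' idea: a central-difference discretization of y″ + q(x)y′ + p(x)y = f(y) with Dirichlet boundary values, solved by a forward-elimination/back-substitution sweep and iterated in f(y) starting from f(y) = 0. The grid size, boundary conditions, iteration count, and the requirement that q, p, f be vectorized callables are assumptions, not the paper's formulation.

```python
# Hedged sketch of a generalized 'chasing' (tridiagonal sweep) solver for
# y'' + q(x) y' + p(x) y = f(y) with y(a) = ya, y(b) = yb, iterating on the nonlinear RHS.
import numpy as np

def chase_solve(q, p, f, a, b, ya, yb, n=400, iters=10):
    x = np.linspace(a, b, n + 1)
    h = x[1] - x[0]
    y = np.linspace(ya, yb, n + 1)                     # initial guess between boundary values
    for _ in range(iters):
        rhs = f(y)
        lower = 1.0 / h**2 - q(x[1:-1]) / (2 * h)      # multiplies y[i-1]
        diag = -2.0 / h**2 + p(x[1:-1])                # multiplies y[i]
        upper = 1.0 / h**2 + q(x[1:-1]) / (2 * h)      # multiplies y[i+1]
        d = rhs[1:-1].astype(float)
        d[0] -= lower[0] * ya
        d[-1] -= upper[-1] * yb
        m = len(d)
        cp, dp = np.zeros(m), np.zeros(m)
        cp[0], dp[0] = upper[0] / diag[0], d[0] / diag[0]
        for i in range(1, m):                          # forward 'chase'
            denom = diag[i] - lower[i] * cp[i - 1]
            cp[i] = upper[i] / denom
            dp[i] = (d[i] - lower[i] * dp[i - 1]) / denom
        inner = np.zeros(m)
        inner[-1] = dp[-1]
        for i in range(m - 2, -1, -1):                 # backward substitution
            inner[i] = dp[i] - cp[i] * inner[i + 1]
        y = np.concatenate(([ya], inner, [yb]))
    return x, y
```

For instance, with q and f returning zeros, p returning ones, and boundary values y(0) = 0, y(π/2) = 1, the sweep reproduces sin(x) to discretization accuracy.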
Discussion and conclusions
Approximation of functions is a cornerstone of applied mathematics and mathematical physics. The most important tasks concern interpolation and extrapolation of functions and examination of their asymptotic properties. In the past two centuries, a multitude of methods has been developed. Nonetheless, the continued efforts in the field perhaps merely indicate that there is no universal recipe, and each method has benefits and flaws. The most popular remains the use of rational Padé approximants, but they only work reliably for arguments x < 1 and show deviations for larger x. It is logical to increase the Padé order, which may bring some improvement but at the cost of complexity. Explicit expressions can be routinely obtained by computer algebra; however, they are so lengthy that it is useless to present them [2]. Even worse is the appearance of spurious poles coming from the unavoidable zeros of the denominator in the Padé ratios. A step beyond Padé has been made in a series of papers [5-10], where self-similar factor approximants are introduced. In many cases, they outperform Padé approximants, but sometimes control functions have to be introduced to achieve convergence and accuracy. Moreover, oscillating functions have not been considered. In this report, a novel and simple approximation procedure based on exponential/trigonometric fitting is described. The algorithm is used to accurately reproduce both functions and solutions of second-order ODEs, linear and non-linear. The main advantage is the simplicity of the algorithm, as presented by equations (1) through (4). The fitting parameters are uniquely determined from only four values of the probe functions. The reference points are extracted at equidistant intervals for arguments x < 1, for which the Taylor series may be used. The sum of two exponentials fits many functions that steadily increase or decrease. Examples are presented in figure 1 and figures S2 and S3 in the supplementary material section. For alternating functions, the exponential/trigonometric fitting is appropriate. This naturally follows from the two-exponential approximation when the exponents become complex conjugates. Examples are presented in figures 2 and 3 and figure S4 in the supplementary material. The method is also applied to assess the asymptotics of non-linear ODEs. These are usually sought in the form of perturbation series using the Δ-expansion, Adomian polynomials, multiple scales, etc. [2, 3, 14-16]. The exponential/trigonometric approximation may be useful in seeking the asymptotics of perturbation expansions that usually generate inhomogeneous ODEs, like system (11). The analytical solutions can be obtained as integrals of the inhomogeneous term multiplied by the solution of the homogeneous ODE. When these are given by special functions, they can be accurately represented using exponential/trigonometric functions. The required integrals are then explicitly calculated to deliver analytical asymptotics. Alternatively, the nODEs can be solved numerically, for which the method of chasing is generalized in part C. When the perturbation series is restricted to four terms, as in equation (11), the exponential/trigonometric approximation can be readily applied, as in figure 3. In this report, we propose a novel and straightforward procedure for exponential/trigonometric approximation. This method demonstrates accurate fitting capabilities for various functions and solutions of second-order ordinary differential equations, whether linear or nonlinear.
Remarkably, it outperforms Padé approximants. The key advantages of this method lie in the simplicity of its algorithm and the requirement of only four values of the function being approximated. The presented examples highlight the potential of this method to provide a simple and reliable representation for a wide range of tasks in applied mathematics and mathematical physics. For example, finding solutions of non-linear PDEs in the framework of the Hirota transform is based upon a perturbation expansion consisting of exponential terms with temporal and spatial dependencies [18]. This case is well suited for the exponential/trigonometric approximation.
3,046.4
2023-07-31T00:00:00.000
[ "Mathematics" ]